US20230011923A1 - System for providing a virtual focus group facility - Google Patents

System for providing a virtual focus group facility

Info

Publication number
US20230011923A1
Authority
US
United States
Prior art keywords
test subject
live stream
moderator
data
stream session
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/931,956
Inventor
Duane Varan
Erik Marc Johnson
Michael Ross Menegay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Smart Science Technology LLC
Original Assignee
Smart Science Technology LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Smart Science Technology LLC
Priority to US17/931,956
Assigned to Smart Science Technology, LLC. Assignors: JOHNSON, ERIK MARC; MENEGAY, MICHAEL ROSS; VARAN, DUANE
Publication of US20230011923A1
Status: Abandoned


Abstract

A platform configured to provide virtual glass in order to augment and improve focus group sessions for each actor within the ecosystem. The platform may be configured to allow a moderator, one or more test subjects, and one or more client users to participate in a focus group session at geographically diverse locations. The platform may also be configured to supplement the focus group experience by allowing for dialog and communication between the client users. In some cases, the platform may also be configured to generate and provide real-time status indicators associated with the test subjects, real-time text-based transcripts of the sessions, and recommendations to the moderator as to the focus group direction.
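The real-time status indicators described above could be derived in many ways; the following is a minimal, purely illustrative sketch (the `SubjectData` fields, labels, and thresholds are assumptions for illustration, not values taken from this patent) of classifying per-subject signals into a coarse status label for a moderator display:

```python
from dataclasses import dataclass

@dataclass
class SubjectData:
    """Hypothetical per-subject signals captured during a live stream session."""
    heart_rate: float        # beats per minute
    gaze_on_screen: float    # fraction of time looking at the stimulus, 0..1
    speech_activity: float   # fraction of time speaking, 0..1

def status_indicator(data: SubjectData) -> str:
    """Map raw signals to a coarse status label for the moderator display.

    The rules and thresholds below are illustrative placeholders only.
    """
    if data.gaze_on_screen < 0.3:
        return "distracted"
    if data.heart_rate > 100 and data.speech_activity > 0.5:
        return "agitated"
    if data.speech_activity > 0.3:
        return "engaged"
    return "passive"

print(status_indicator(SubjectData(72, 0.9, 0.4)))  # engaged
```

In a production system this rule table would more plausibly be replaced by the machine-learned models mentioned in claim 9, with the label superimposed over each test subject's icon as described in claim 4.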


Claims (20)

What is claimed is:
1. A method comprising:
receiving, as part of a live stream session and at a moderator system, first test subject data of a first test subject from a first test subject device;
receiving, as part of the live stream session and at the moderator system, second test subject data of a second test subject from a second test subject device, the second test subject data including second image data of the second test subject;
determining, based at least in part on the first test subject data, a first status indicator associated with the first test subject;
determining, based at least in part on the second test subject data, a second status indicator associated with the second test subject; and
causing, substantially simultaneously, the live stream session, the first status indicator, and the second status indicator to be displayed by the moderator system.
2. The method as recited in claim 1, further comprising sending, substantially simultaneously, the live stream session, the first status indicator, and the second status indicator to at least one third party system.
3. The method as recited in claim 2, wherein the at least one third party system comprises a first third party system and a second third party system and the method further comprises:
receiving a comment associated with the live stream session from the first third party system; and
causing, substantially simultaneously to the live stream session, the comment to be displayed by the moderator system and the second third party system.
4. The method as recited in claim 1, further comprising:
causing, substantially simultaneously to the live stream session, the first status indicator, and the second status indicator, a first icon of the first test subject and a second icon of the second test subject to be displayed by the moderator system and wherein the first status indicator is superimposed over the first icon and the second status indicator is superimposed over the second icon.
5. The method as recited in claim 1, further comprising:
causing, substantially simultaneously to the live stream session, the first status indicator, and the second status indicator, first demographic data associated with the first test subject and second demographic data associated with the second test subject to be displayed by the moderator system.
6. The method as recited in claim 1, further comprising:
sending, substantially simultaneously to the live stream session, a series of stimuli to the first test subject device and the second test subject device according to a predetermined order.
7. The method as recited in claim 6, further comprising:
reorganizing the predetermined order of the series of stimuli based at least in part on the first test subject data, the second test subject data, and data associated with at least one prior live stream session.
8. The method as recited in claim 6, wherein determining the first status indicator associated with the first test subject is based at least in part on a current stimulus of the series of stimuli.
9. The method as recited in claim 6, wherein determining the first status indicator associated with the first test subject further comprises inputting the first test subject data into one or more machine learned models and receiving, as an output of the one or more machine learned models, the first status indicator.
10. One or more non-transitory computer-readable media having computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:
receiving a live stream session, the live stream session including data associated with at least a first test subject and a moderator;
generating, based at least in part on the live stream session, a text-based transcript of the live stream session;
identifying at least one trend based at least in part on the text-based transcript;
determining at least one chart or graph associated with the at least one trend; and
sending the live stream session, the text-based transcript, and the at least one chart or graph to a client system.
11. The one or more computer-readable media as recited in claim 10, wherein identifying the at least one trend is based at least in part on another live stream session associated with a second test subject.
12. The one or more computer-readable media as recited in claim 10, wherein the operations further comprise:
receiving biometric data associated with the first test subject and the live stream session;
analyzing the biometric data to determine a status indicator representative of a mood of the first test subject; and
inserting the status indicator in at least one of the live stream session or the text-based transcript.
13. The one or more computer-readable media as recited in claim 10, wherein the operations further comprise:
receiving a search request from the client system;
identifying, from a second text-based transcript of a second live stream session, a portion of the second text-based transcript meeting or exceeding a criterion of the search request; and
sending the portion of the second text-based transcript with the first text-based transcript to the client system.
14. The one or more computer-readable media as recited in claim 10, wherein the operations further comprise:
receiving a comment associated with the text-based transcript from the client device; and
causing an alert to be output by a second client device in response to receiving the comment, the alert associated with the comment, the live stream session, and the text-based transcript.
15. The one or more computer-readable media as recited in claim 14, wherein causing the alert to be output by the second client device includes at least one of determining the second client device is associated with a set of clients associated with the live stream session.
16. The one or more computer-readable media as recited in claim 14, further comprising displaying an indication of a type of comment with respect to the text-based transcript.
17. A system comprising:
one or more communication interfaces;
a display;
one or more processors; and
computer-readable storage media storing computer-executable instructions, which when executed by the one or more processors cause the one or more processors to perform the following operations:
receiving, via the one or more communication interfaces, a live stream session, the live stream session including data associated with at least a first test subject and a moderator;
receiving, via the one or more communication interfaces, an alert associated with the live stream session, the alert indicating a comment and an associated portion of the live stream session; and
causing the alert to be presented on the display concurrently with the live stream session.
18. The system as recited in claim 17, wherein the operations further comprise receiving a text-based transcript associated with the live stream session, the text-based transcript including the comment.
19. The system as recited in claim 18, wherein the operations further comprise generating the text-based transcript responsive to determining the live stream session has completed.
20. The system as recited in claim 17, wherein the operations further comprise:
receiving biometric data associated with the first test subject and the live stream session;
analyzing the biometric data to determine a status indicator representative of a mood of the first test subject; and
inserting the status indicator in at least one of the live stream session or the text-based transcript.
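Claims 10 through 13 describe generating a text-based transcript, identifying trends in it, and preparing chart data for a client system. As one hypothetical illustration of the trend-identification step (the tokenization, stopword list, and `min_mentions` threshold here are assumptions, not part of the claims), recurring terms could be counted and returned as chart-ready frequency pairs:

```python
from collections import Counter
import re

# Illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "and", "i", "it", "is", "to", "of", "that"}

def identify_trends(transcript: str, min_mentions: int = 2) -> list[tuple[str, int]]:
    """Return (term, count) pairs mentioned at least `min_mentions` times,
    sorted by descending frequency then alphabetically, as data from which
    a bar chart or graph could be built."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return sorted(
        ((w, c) for w, c in counts.items() if c >= min_mentions),
        key=lambda item: (-item[1], item[0]),
    )

transcript = "I like the packaging. The packaging feels premium. Premium price though."
print(identify_trends(transcript))  # [('packaging', 2), ('premium', 2)]
```

Cross-session trends (claim 11) would follow the same pattern with transcripts from multiple live stream sessions pooled before counting.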
US17/931,956 | 2020-01-28 | 2022-09-14 | System for providing a virtual focus group facility | Abandoned | US20230011923A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US17/931,956 (US20230011923A1) | 2020-01-28 | 2022-09-14 | System for providing a virtual focus group facility

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US16/775,015 (US11483593B2) | 2020-01-28 | 2020-01-28 | System for providing a virtual focus group facility
US17/931,956 (US20230011923A1) | 2020-01-28 | 2022-09-14 | System for providing a virtual focus group facility

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US16/775,015 (Continuation; US11483593B2) | System for providing a virtual focus group facility | 2020-01-28 | 2020-01-28

Publications (1)

Publication Number | Publication Date
US20230011923A1 | 2023-01-12

Family

ID=74557254

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
US16/775,015 (US11483593B2; Active, anticipated expiration 2040-12-24) | System for providing a virtual focus group facility | 2020-01-28 | 2020-01-28
US17/931,956 (US20230011923A1; Abandoned) | System for providing a virtual focus group facility | 2020-01-28 | 2022-09-14

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
US16/775,015 (US11483593B2; Active, anticipated expiration 2040-12-24) | System for providing a virtual focus group facility | 2020-01-28 | 2020-01-28

Country Status (2)

Country | Link
US (2) | US11483593B2 (en)
WO (1) | WO2021154470A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110106750A1 * | 2009-10-29 | 2011-05-05 | Neurofocus, Inc. | Generating ratings predictions using neuro-response data
CN115967797B * | 2022-11-30 | 2024-11-08 | 海南视联通信技术有限公司 | Product testing method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20140136596A1 * | 2012-11-09 | 2014-05-15 | Yahoo! Inc. | Method and system for capturing audio of a video to display supplemental content associated with the video
US20140150002A1 * | 2012-11-29 | 2014-05-29 | Qualcomm Incorporated | Methods and apparatus for using user engagement to provide content presentation
US20150074718A1 * | 2013-09-11 | 2015-03-12 | Sony Corporation | Electronic programming guide with real-time audio video content information updates
US20160004773A1 * | 2009-09-21 | 2016-01-07 | Jan Jannink | Systems and methods for organizing and analyzing audio content derived from media files
US20160210602A1 * | 2008-03-21 | 2016-07-21 | Dressbot, Inc. | System and method for collaborative shopping, business and entertainment
US20170011740A1 * | 2011-08-31 | 2017-01-12 | Google Inc. | Text transcript generation from a communication session
US20170039867A1 * | 2013-03-15 | 2017-02-09 | Study Social, Inc. | Mobile video presentation, digital compositing, and streaming techniques implemented via a computer network
US20170238055A1 * | 2014-02-28 | 2017-08-17 | Second Spectrum, Inc. | Methods and systems of spatiotemporal pattern recognition for video content development
US20180330736A1 * | 2017-05-15 | 2018-11-15 | Microsoft Technology Licensing, LLC | Associating a speaker with reactions in a conference session
US11228810B1 * | 2019-04-22 | 2022-01-18 | Matan Arazi | System, method, and program product for interactively prompting user decisions

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2013026457A1 * | 2011-08-19 | 2013-02-28 | Telefonaktiebolaget L M Ericsson (Publ) | Technique for video conferencing
US10334300B2 * | 2014-12-04 | 2019-06-25 | Cynny Spa | Systems and methods to present content
US10225582B1 * | 2017-08-30 | 2019-03-05 | Microsoft Technology Licensing, LLC | Processing live video streams over hierarchical clusters
KR102690201B1 | 2017-09-29 | 2024-07-30 | 워너 브로스. 엔터테인먼트 인크. | Creation and control of movie content in response to user emotional states
US10834298B1 * | 2019-10-14 | 2020-11-10 | Disney Enterprises, Inc. | Selective audio visual synchronization for multiple displays


Also Published As

Publication number | Publication date
WO2021154470A1 (en) | 2021-08-05
US20210235132A1 (en) | 2021-07-29
US11483593B2 (en) | 2022-10-25

Similar Documents

Publication | Title
US11887352B2 | Live streaming analytics within a shared digital environment
US10628741B2 | Multimodal machine learning for emotion metrics
US8886581B2 | Affective response predictor for a stream of stimuli
US11430561B2 | Remote computing analysis for cognitive state data metrics
US20170095192A1 | Mental state analysis using web servers
Karimah et al. | Automatic engagement estimation in smart education/learning settings: a systematic review of engagement definitions, datasets, and methods
Al Osman et al. | Multimodal affect recognition: Current approaches and challenges
US20200342979A1 | Distributed analysis for cognitive state metrics
US10143414B2 | Sporadic collection with mobile affect data
US11704574B2 | Multimodal machine learning for vehicle manipulation
US20230011923A1 | System for providing a virtual focus group facility
Karbauskaitė et al. | Kriging predictor for facial emotion recognition using numerical proximities of human emotions
US20250133038A1 | Context-aware dialogue system
JP2023500974A | Systems and methods for collecting behavioral data for interpersonal interaction support
Nandi et al. | A survey on multimodal data stream mining for e-learner's emotion recognition
JP2021533489A | Computer implementation system and method for collecting feedback
Bhosale et al. | Stress level and emotion detection via video analysis, and chatbot interventions for emotional distress
Singh et al. | MMSAD—A multi-modal student attentiveness detection in smart education using facial features and landmarks
JP7152825B1 | Video session evaluation terminal, video session evaluation system and video session evaluation program
Qodseya et al. | Visual-based eye contact detection in multi-person interactions
Aran et al. | Analysis of group conversations: Modeling social verticality
Asaju et al. | Affect analysis: a literature survey on student-specific and general users' affect analysis
Qodseya | Managing heterogeneous cues in social contexts: A holistic approach for social interactions analysis
Kusumam | Multimodal analysis of depression in unconstrained environments
Vanichkul et al. | Visual-based Confusion Detection using a Cooperative Spatio-Temporal Deep Neural Networks

Legal Events

Date | Code | Title | Description

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS | Assignment

Owner name: SMART SCIENCE TECHNOLOGY, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:VARAN, DUANE;MENEGAY, MICHAEL ROSS;JOHNSON, ERIK MARC;REEL/FRAME:061805/0781

Effective date: 20200122

STPP | Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP | Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP | Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP | Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB | Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

