
Social media post facilitation systems and methods

Info

Publication number
US10382383B2
Authority
US
United States
Prior art keywords
user
keyword
mobile device
natural language
automatically
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/948,689
Other versions
US20190036866A1 (en)
Inventor
David ISEMINGER
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Upheaval LLC
Original Assignee
Upheaval LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Upheaval LLC
Priority to US 15/948,689 (US 10,382,383 B2)
Publication of US 2019/0036866 A1
Priority claimed in US 16/538,707 (US 10,972,254 B2)
Application granted
Publication of US 10,382,383 B2
Legal status: Active
Anticipated expiration

Abstract

Methods and systems are provided in which an improved interface implements a synergistic hybrid of user interactions and automatic operations so that user input is elicited sparingly, making it possible to generate customized social media posts with unexpected speed relative to any art-known techniques. A draft post is pre-populated with a first keyword that identifies a machine-recognized aspect of a photograph, for example, and an event descriptor partly based on the capture location. After adding user text, a complete post is then ready for broadcast.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims the benefit of priority to Provisional Patent Application No. 62/538,445 (titled “SOCIAL MEDIA POST FACILITATION SYSTEMS AND METHODS”), filed 28 Jul. 2017, which is incorporated by reference in its entirety.
BRIEF DESCRIPTION OF FIGURES
FIG. 1 illustrates transistor-based circuitry that looks for patterns in sensor data.
FIG. 2 illustrates a client device displaying a screen image with indicators and controls.
FIG. 3 illustrates another client device displaying a ranked list of candidates.
FIG. 4 illustrates one or more nonvolatile data storage media upon which output data usable as post pre-population elements may reside.
FIG. 5 illustrates a system in which a client device presents several real-world objects of potential interest.
FIG. 6 illustrates components of an exemplary server.
FIG. 7 illustrates components of an exemplary client device.
FIG. 8 illustrates a data flow with an exemplary series of events suitable for use with at least one embodiment.
FIG. 9 illustrates a data flow with an exemplary series of events suitable for use with at least one embodiment.
FIG. 10 illustrates a client device displaying a screen image usable as a home screen.
FIG. 11 depicts an exemplary operational flow incorporating one or more technologies.
FIG. 12 depicts a pre-population module incorporating one or more technologies.
FIG. 13 depicts a client device displaying a screen image usable in an observation-scanning mode.
FIG. 14 depicts another state of the client device of FIG. 13.
FIG. 15 depicts an exemplary operational flow incorporating one or more technologies.
DESCRIPTION
The detailed description that follows is represented largely in terms of processes and symbolic representations of operations by conventional computer components, including a processor, memory storage devices for the processor, connected display devices and input devices. Furthermore, some of these processes and operations may utilize conventional computer components in a heterogeneous distributed computing environment, including remote file servers, computer servers and memory storage devices.
The phrases “in one embodiment,” “in various embodiments,” “in some embodiments,” and the like are used repeatedly. Such phrases do not necessarily refer to the same embodiment. The terms “comprising,” “having,” and “including” are synonymous, unless the context dictates otherwise.
“Associated,” “at least,” “based,” “before,” “corroborated,” “distinct,” “invoked,” “likewise,” “local,” “mobile,” “natural,” “second,” “observational,” “on the order of,” “raw,” “semantic,” “single,” “tentative,” “thereafter,” “using,” “wearable,” “within,” or other such descriptors herein are used in their normal yes-or-no sense, not merely as terms of degree, unless context dictates otherwise. In light of the present disclosure those skilled in the art will understand from context what is meant by “remote” and by other such positional descriptors used herein. Terms like “processor,” “center,” “unit,” “computer,” or other such descriptors herein are used in their normal sense, in reference to an inanimate structure. Such terms do not include any people, irrespective of their location or employment or other association with the thing described, unless context dictates otherwise. “For” is not used to articulate a mere intended purpose in phrases like “circuitry for” or “instruction for,” moreover, but is used normally, in descriptively identifying special purpose software or structures. “On the order of” or “within an order of magnitude of” refer to values that differ by at most a factor of ten. A “period-of-day identifier” does not merely identify a moment in time but also an interval having a generally accepted meaning and shorter than 24 hours (“night” or “lunchtime,” e.g.). A “photograph” as used herein includes a watermarked or otherwise modified (with timestamps or other annotations, e.g.) digital expression of shape at least partly based on one or more cameras as well as raw data manifesting other types of optical-sensor-based images (from a charge-coupled device, e.g.) in a still frame or streaming video.
Reference is now made in detail to the description of the embodiments as illustrated in the drawings. While embodiments are described in connection with the drawings and related descriptions, there is no intent to limit the scope to the embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications and equivalents. In alternate embodiments, additional devices, or combinations of illustrated devices, may be added or combined without limiting the scope to the embodiments disclosed herein.
FIG. 1 illustrates transistor-based circuitry 100 (implementing event-sequencing digital logic, e.g.) configured as one or more (instances of) recognition modules 110 that receive signals from one or more instances of sensors (such as accelerometers 133, radio frequency identification (RFID) readers 134, cameras 135, or microphones 136, e.g.) and look for patterns therein. Such “raw” expressions 140 of informational data may include one or more instances of device-readable labels 150 (bar codes 151 or QR codes 152, e.g.), wirelessly transmitted RFID codes 155, biometrics 164 (such as faces, fingerprints, retinal configurations, or utterances 163, e.g.), or other elements 126A-B that can be processed for any machine-recognizable digital patterns 144 (i.e., “scanned” for matches or absences thereof) as described below. As used herein a “scan” refers to an observation in which one or more subjects are recognized successfully.
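By way of illustration only (the specification supplies no source code), a scan of a raw expression 140 for one kind of device-readable label 150 might be sketched as follows in Python; the OpenCV QR detector here is merely a stand-in, an assumption, for whatever recognition module 110 a given implementation actually uses:

    import cv2  # assumes the opencv-python package; an illustrative choice

    def scan_for_qr_code(photograph_path):
        """Return a decoded QR payload if the raw expression 140 contains one.

        Per the definition above, a 'scan' succeeds only when a subject
        is recognized; an empty payload means no recognized pattern.
        """
        raw_expression = cv2.imread(photograph_path)
        if raw_expression is None:
            return None  # unreadable file: nothing to scan
        detector = cv2.QRCodeDetector()
        payload, _, _ = detector.detectAndDecode(raw_expression)
        return payload or None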
FIG. 2 illustrates a client device 700A (a tablet computer, wearable, or other mobile device, e.g.) displaying a screen image 220 that includes one or more of an Upheaval™ indicator 241 (signaling a recognition of a searchable subject with movement or bright colors, e.g.), a “cancel scan” control 242 (triggering a return to a home screen, e.g.), and a “settings” control 243 (modifying user preferences, e.g.). In the particular arrangement shown, a left page of a photographic image (photograph 225A or a video clip frame, e.g.) is not highlighted and a right page is highlighted (as a selected “subject” 207 of the scan, e.g.). If the “scan” control 241 is actuated with this highlighting configuration, the symbolic content of the right page (including its textual or other symbols or sequences thereof, e.g.) may be processed (through optical character recognition, e.g.) and can then become the content of or basis for a pre-population protocol as described below (or both). If the camera pans leftward relative to subject 207 (by moving the book rightward, e.g.), a centermost position of the photographic frame will fall upon the left page instead, and an ensuing “scan” control actuation may scan/process the content 208 thereof instead. Alternatively or additionally, in some variants a non-visual aspect of the subject (a wireless signal or audible utterance 163 at the same location at about the same time, e.g.) may modulate the search (with a visual element as a search expression, e.g.) to obtain various permutations of pre-population elements as exemplified below.
FIG. 3 illustrates another (instance of a configuration of a) client device 700B displaying a screen image that includes a ranked list of pre-population candidates 342, each including one or more pre-population elements each detected within the raw expression 140 or associated with an element detected within the raw expression 140 (or both). In some variants an app as described below can scroll from a likeliest one of the candidates 342 (as ranked by a neural network of an Upheaval™ infrastructure, e.g.) down to less likely candidates 342. The user can scroll through the list by dragging a scroll button 341 thereof up or down, for example. Alternatively or additionally, one or more other controls (one or more translate controls 343, accept controls 344, pause controls 345, or index controls 346, e.g.) may be used for interacting with even a very large number of permutations of candidates (dozens or hundreds, e.g.) sorted according to up-to-date criteria based upon apparent user preferences (in profiles like those described below with reference to FIG. 12, e.g.).
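A minimal sketch of how such candidates 342 might be ordered against apparent user preferences follows; the additive weighting scheme and field names are assumptions for illustration, not the neural-network ranking the specification contemplates:

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        elements: list        # pre-population elements detected or associated
        score: float = 0.0

    def rank_candidates(candidates, preference_weights):
        """Order candidates 342 from likeliest to least likely."""
        for candidate in candidates:
            candidate.score = sum(preference_weights.get(e, 0.0)
                                  for e in candidate.elements)
        return sorted(candidates, key=lambda c: c.score, reverse=True)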
FIG. 4 illustrates one or more nonvolatile data storage media 400 upon which output data usable as post pre-population elements may reside. Such output data may include one or more instances of semantic descriptors 495 (prepositional phrases that contextualize a period-of-day identifier 491 or location identifier 492, e.g.), of primary images 496 (comprising a photograph 225, a selected portion thereof, or a canonic image of a subject of interest thereof, e.g.), of supplemental metadata 497 (derived from related raw data or arising from structured dialogs described herein, e.g.), or of combinations thereof. Such components are a sufficiently complete set, for example, to permit a substantially automatic generation of any or all of the pre-population candidates 342 of FIG. 3 as described herein that may then be validated or otherwise corroborated by one or more crowdworkers or other users as described herein. The compound semantic descriptors there, for example, may be derived automatically from past posts, calendar data, or other suitable resources as described below.
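One plausible record layout for such output data is sketched below; the field names track the reference numerals above, but the layout itself is a hypothetical reading, not a disclosed schema:

    from dataclasses import dataclass, field

    @dataclass
    class PrePopulationRecord:
        period_of_day: str                # period-of-day identifier 491 ("lunchtime", e.g.)
        location: str                     # location identifier 492
        semantic_descriptors: list = field(default_factory=list)   # 495
        primary_image: bytes = b""        # 496: photograph, portion, or canonic image
        supplemental_metadata: dict = field(default_factory=dict)  # 497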
FIG. 5 illustrates a system 500 suitable for use in at least one embodiment. A field of view 508 of a client device 700C depicts various real-world subjects 507A-B of potential interest (to user 580, e.g.), human or otherwise. Several such indications are device-detectable at least as respective portions 7A-7H (each delimited by a closed color boundary, e.g.) of a photograph 225B, any or all of which may be automatically highlighted, manually corroborated, and otherwise annotated as described herein during a given occasion at a single location (a meeting, e.g.). By a suitable coordination with a suitable infrastructure (resident in one or more cloud-based servers 600, e.g.) via one or more data networks 550 comprising a remote processing facility as described herein, for example, such annotated indications may in many contexts provide a sufficiently selective filtering protocol so as to allow a generation of suitable pre-population elements, most of which a user may simply validate without any editing. In some contexts such pre-population elements may be automatically combined so as to generate entire draft posts (including one or more complete sentences, e.g.), a significant fraction of which (more than 10%, in some contexts) may be of sufficient quality that one or more users 580 will validate them with no editing whatsoever.
As shown, for example, device 700C may simultaneously display an entire photograph 225B in which a single subject is selected (by user 580 subtly repositioning device 700C, e.g.) so that portion 7B, depicting a face, is highlighted. User 580 may then activate a suitable command (a voice command or actuation of control 541 as shown, e.g.) to initiate a scan screen by which the app detects a subject and upon that detection triggers a lookup of that subject (from storage resources of server 600 or from available online information, e.g.). A collection of information about the subject, along with other relevant information, is thereby responsively returned to device 700C (over an encrypted connection, e.g.).
Pattern recognition circuitry as described herein may comprise an event-sequencing structure generally as described in U.S. Pat. Pub. No. 2015/0094046 but configured as described herein. Such circuitry may include one or more instances of modules configured for local processing, for example, each including an electrical node set upon which informational data is represented digitally as a corresponding voltage configuration. In some variants, moreover, an instance of such modules may be configured for invoking such local processing modules remotely in a distributed implementation. Event detection circuitry as described herein may likewise include one or more instances of modules configured for programmatic response as described below, for example, each including an electrical node set upon which informational data is represented digitally as a corresponding voltage configuration. In some variants, an instance of modules may be configured for invoking such programmatic response modules remotely in a distributed implementation.
In the interest of concision and according to standard usage in information management technologies, the functional attributes of modules described herein are set forth in natural language expressions. It will be understood by those skilled in the art that such expressions (functions or acts recited in English, e.g.) adequately describe structures identified below so that no undue experimentation will be required for their implementation. For example, any raw expressions 140 or other informational data identified herein may easily be represented digitally as a voltage configuration on one or more electrical nodes (conductive pads of an integrated circuit, e.g.) of an event-sequencing structure without any undue experimentation. Each electrical node is highly conductive, having a corresponding nominal voltage level that is spatially uniform generally throughout the node (within a device or local system as described herein, e.g.) at relevant times (at clock transitions, e.g.). Such nodes (lines on an integrated circuit or circuit board, e.g.) may each comprise a forked or other signal path adjacent one or more transistors. Moreover many Boolean values (yes-or-no decisions, e.g.) may each be manifested as either a “low” or “high” voltage, for example, according to a complementary metal-oxide-semiconductor (CMOS), emitter-coupled logic (ECL), or other common semiconductor configuration protocol. In some contexts, for example, one skilled in the art will recognize an “electrical node set” as used herein in reference to one or more electrically conductive nodes upon which a voltage configuration (of one voltage at each node, for example, with each voltage characterized as either high or low) manifests a yes/no decision or other digital data.
FIG. 6 illustrates several components of an exemplary server 600. In some embodiments, server 600 may include many more components than those shown in FIG. 6. However, it is not necessary that all of these generally conventional components be shown in order to disclose an illustrative embodiment. As shown in FIG. 6, server 600 includes a data network interface 606 for connecting via a data network 550 (to one or more client devices 700 as described herein, e.g.).
Server 600 may also include one or more instances of a processing unit 602, a memory 604, and display hardware 612, all interconnected along with the network interface 606 via a bus 616. Memory 604 generally comprises a random access memory (“RAM”), a read only memory (“ROM”), and a permanent mass storage device, such as a disk drive.
Memory 604 may likewise contain an operating system 610, hosting application 614, and download service 624 (for downloading apps, e.g.). These and other software components may be loaded from a non-transitory computer-readable storage medium 618 into memory 604 of the server 600 using a drive mechanism (not shown) associated with a non-transitory computer-readable storage medium 618, such as a floppy disc, tape, DVD/CD-ROM drive, flash card, memory card, or the like. In some embodiments, software components may also be loaded via the network interface 606, rather than via a computer-readable storage medium 618. Special-purpose circuitry 622 may, in some variants, include some or all of the event-sequencing logic described herein.
FIG. 7 illustrates several components of an exemplary client device 700. In some embodiments, client device 700 may include many more components than those shown in FIG. 7. However, it is not necessary that all of these generally conventional components be shown in order to disclose an illustrative embodiment. As shown in FIG. 7, client device 700 includes a data network interface 706 for connecting via one or more data networks 550 (with social media platforms 590 via server 600 or other infrastructure described herein, e.g.).
Client device 700 may also include one or more instances of a processing unit 702, a memory 704, user input 708, and display hardware 712, all interconnected along with the network interface 706 via a bus 716. Memory 704 generally comprises a random access memory (“RAM”), a read only memory (“ROM”), and a permanent mass storage device, such as a disk drive.
Memory 704 may likewise contain an operating system 710, web browser 714, and local app 724 (obtained via download service 624, e.g.). These and other software components may be loaded from a non-transitory computer-readable storage medium 718 into memory 704 of the client device 700 using a drive mechanism (not shown) associated with a non-transitory computer-readable storage medium 718, such as a floppy disc, tape, DVD/CD-ROM drive, flash card, memory card, or the like. In some embodiments, software components may also be loaded via the network interface 706, rather than via a computer-readable storage medium 718. Special-purpose circuitry 722 may, in some variants, include some or all of the event-sequencing logic described herein.
FIG. 8 illustrates a data flow 800 with an exemplary series of events (communications or other processes, e.g.) suitable for use with at least one embodiment. In some variants of the above-described methods, for example, an app install 806 (of app 724, e.g.) is downloaded from an Upheaval™ infrastructure 830 and implemented so as to create a unique device-app pairing 818 identified (by a model and serial number of the device with a unique identifier of the app/version, e.g.) within the infrastructure (in association with a user profile, e.g.). Thereafter when an indication 823 of a real-world subject 807 of interest (an RFID code 155 or photographic image thereof, e.g.) is obtained and scanned, in some contexts one or more parts of the scan (portions 7B or 7F having a facelike shape, e.g.) of inferred particular interest to the user 580 are identified at targeting 826.
Such portions may then be highlighted for the user 580 (via blinking or artificial outlines/coloring, e.g.) in near-real-time so as to facilitate a meaningful selection of which specific one or more indications 823 are likeliest to be the intended subject. This can occur, for example, in a context in which only portions 7A-H primarily within a center half of the photographic image (comprising photograph 225B, e.g.) are highlighted as candidates and in which such targeting includes a structured dialog with context-responsive prompts like “Please tap a portion of the image that is of greatest interest.”
Responses to these queries (or a lack thereof) may constitute an annotated indication 828 usable as a search parameter in a content search (of searchable content within infrastructure 830, e.g.). In the event of no hits (an unsuccessful search 832, e.g.), a clarifying query 836 like “What is this a picture of?” may be sent to the client device (handheld 700, e.g.) and a user's reply 838 may be provided as a semantic supplement 846 upon which a broader search 851A is initiated. If more than one hit ensues, a result 852A is presented as a ranking 856 of hits in a concise listing like that of FIG. 3. In some variants an Upheaval™ app 724 will repeatedly autoscroll from the likeliest candidate through the qualifying hits and then repeat. The overall length of the growing list may be signaled to a user indirectly (by a height of a scroll button 341 thereof, e.g.), for example, or directly by a visible numbering of the candidates 342 (or both). Alternatively or additionally, a “translate” control 343 may be presented by which a user may initiate a machine translation of the list of candidates 342 into another language. Alternatively or additionally, an “accept” control 344 may be presented by which a user may accept a single candidate of the list of candidates 342. Alternatively or additionally, a “pause” control 345 may be presented by which a user may cause an auto-scroll state of the list of candidates 342 to toggle into a pause state. Alternatively or additionally, a “reverse” control 346 may be presented by which a user may cause an auto-scroll direction of the list of candidates 342 to toggle into an opposite direction-of-movement state (from upward to downward, e.g.).
In response to a selection 858 from a user to whom the ranking 856 has been presented, a selection supplement 866 manifesting that user input (as an actuation of “accept” control 344, e.g.) becomes a basis for a modified search 851B (one that terminates concurrent searching by which a list of candidates 342 grows, e.g.). An ensuing search result 852B may include one or more pre-population elements 876 (announced by a ringing sound or other suitable audible signal when presented, e.g.) upon which editing 878 (under the user's control, e.g.) may occur. A validated post 886 may then be uploaded (in response to an actuation of a “post” or “accept” control, e.g.), resulting in publication 895 to user-selected presentation venues (social media sites, e.g.) as well as private storage within infrastructure 830.
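The tail of data flow 800 might be glued together as in the following sketch; the function names and the simple string composition are illustrative assumptions rather than a disclosed implementation:

    def build_draft_post(pre_population_elements, photograph_ref):
        """Combine pre-population elements 876 into an editable draft."""
        return " ".join(pre_population_elements) + " [" + photograph_ref + "]"

    def publish(draft, user_edit, venues):
        """Editing 878 followed by publication 895 to each selected venue."""
        validated_post = (draft + " " + user_edit).strip()
        return {venue: validated_post for venue in venues}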
In some variants, as a result of the infrastructure 830 process, semantic associations are identified and online content (patterns and user activities, e.g.) is created and refined. In some variants such refinement may take the form of content feeds, web pages, or mobile feeds. Alternatively or additionally, such online content may be based on posts and metadata provided by user scans, by additional user posts, or by users engaging directly with online content (rather than going through an Upheaval™ app 724 and scanning).
Multiple feeds may be created by an Upheaval™ infrastructure 830, and such feeds may use content that overlaps. In some variants online content feeds may be based on an individual subject, a category of subject, events, locations, timelines, “thumbs up” votes by users, or trending scans.
Users can interact with online content feeds, including by posting comments or adding content in other ways. Users who engage with posts from other users can use similar interests (or observed scans) to initiate “connection requests” that establish social connections between Upheaval™ users. In some variants online content may include canonic subject definitions, images, or metadata. In some variants member users may contribute to such canonic subject definitions, images, and metadata, and earn “reputation points” for contributions that the community finds valuable, useful, or correct.
In some variants users may also “Follow” online content feeds, “connected users”, or subjects, to get Upheaval™ app notifications or other communications about updates to online content feeds.
In some variants such infrastructure 830 protocols may include collecting user ratings of subjects, online content, and posts. When interacting with subjects, either in an Upheaval™ app or with online content, users may assign “Ups” (up ratings) to subjects to register a positive rating. In some variants an infrastructure 830 logs, associates, manages, and analyzes Ups/up ratings for subjects to create a dynamic assessment of favorability, popularity, or usefulness of subjects. The infrastructure 830 may then assign an “Up-ness” rating for the subject, which is dynamically adjusted over time based on activity, additional ratings, or a lack of recent ratings.
In some variants two or more factors may influence the weight a given Up rating has on a given subject. For example, the first Up rating an Upheaval™ user gives a product may have more weight than subsequent Ups that same user gives to the same subject. In some variants the weighting of Up ratings may also depend upon the frequency of Ups, the time between Ups, or how many of a user's “connected users” have given the subject Up ratings.
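One hedged sketch of such a weighting follows; the specification names the factors but no formula, so the decay constants and the additive social boost below are invented purely for illustration:

    import math

    def up_weight(nth_up_by_user, days_since_users_last_up, connected_user_ups):
        """Weight one Up rating: first Up counts most, stale Ups fade,
        and Ups from connected users add a small boost."""
        first_up_bonus = 1.0 if nth_up_by_user == 1 else 0.25
        recency_factor = math.exp(-days_since_users_last_up / 30.0)
        social_boost = 0.05 * connected_user_ups
        return first_up_bonus * recency_factor + social_boost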
In some variants users who have an Upheaval™ account may engage in an online community of app users and online content visitors. The infrastructure 830 may use a point-based system to award users points based on their scan activity, online content engagement, or their number of connected users.
In some variants users who are active contributors on online content pages may gain points through a community recognition process in response to moving through levels identifying engagement, frequency of scans, or online posts. Alternatively or additionally they may gain badges for contributions from scans, posts, online contributions, or activities. Alternatively or additionally they may gain badges for being recognized by other Upheaval™ users as having expertise about one or more categories of subjects through a community recognition process, which may include Ups and “expert rating” recognition. Alternatively or additionally they may be recommended by users based on expertise, user connections, or online contributions, one or more of which may be presented to users interacting with the online content based on geography, time, or ephemeral events.
An Upheaval™ community recognition process may (in some variants) use scans, profiles, online content contributions, analytics, semantic connections, and other inputs to determine the “Up-ness” of subjects, users, or subject categories (an aggregated or semantic-based subject hierarchy).
In some variants Upheaval™ processes (in an app 724 or infrastructure 830, e.g.) may be used to determine semantic associations with similar subjects that might be of interest to a user or users. Such processes may use this information to provide purchase links for one or more scanned subjects that may also be of interest to a user, based on an affiliate purchasing process. In some variants an affiliate purchasing process allows users to purchase “subject” products/services (if applicable and appropriate) through retailers, outlets, or other sellers with which/whom Upheaval™ has an affiliate purchasing agreement.
FIG. 9 illustrates another data flow 900 with an exemplary series of events (communications or other processes, e.g.) suitable for use with at least one embodiment. In some variants of the above-described methods, for example, an app install 906 (of app 724, e.g.) is downloaded from an (instance of an) Upheaval™ infrastructure 830 and implemented so as to create a unique user account pairing 916 identified (by a unique username, e.g.) within the infrastructure 830 (in association with a profile of a user 980 of a wearable 970 or other instance of client device 700, e.g.). Thereafter when an indication 823 of a real-world subject 207, 507, 807 (a person/thing/event or photographic image thereof, e.g.) is obtained and scanned, in some contexts one or more parts of the scan of (inferred) particular interest are identified at targeting 926 and resolved to a single target during a structured dialog or other enriched interaction 935 (as variously exemplified herein, e.g.).
In some variants a likeliest pre-populated post 977 is then presented via a client device (wearable 970, e.g.) one candidate at a time; the user may reject each as many times as appropriate until an acceptable one meets with user validation 979 and is then transmitted as one or more counterpart social media postings 986 (via infrastructure 830 with destination-specific filtering tailored to each social media destination, e.g.). This can occur, for example, in a context in which significant editing and menu navigation would be unduly burdensome but in which yes-or-no signals (i.e., Boolean decisions as raw user input) are viable in near-real-time (while the user is still on location and within an hour of the scan, e.g.).
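That yes-or-no loop might be sketched as follows; the accept callback is a hypothetical stand-in for whatever Boolean input (a tap or a spoken “yes,” e.g.) the wearable actually exposes:

    def present_until_validated(ranked_drafts, accept):
        """Offer likeliest pre-populated posts 977 one at a time until one
        meets with user validation 979; return None if all are rejected."""
        for draft in ranked_drafts:      # likeliest first
            if accept(draft):            # a single Boolean decision per draft
                return draft             # becomes the transmitted posting 986
        return None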
In light of teachings herein, numerous existing techniques may be applied for configuring special-purpose circuitry or other structures effective for pattern recognition, estimation, or other tasks as described herein without undue experimentation. See, e.g., U.S. Pat. No. 9,606,363 (“Head mounted device (HMD) system having interface with mobile computing device for rendering virtual reality content”); U.S. Pat. No. 9,603,569 (“Positioning a wearable device for data collection”); U.S. Pat. No. 9,603,123 (“Sending smart alerts on a device at opportune moments using sensors”); U.S. Pat. No. 9,603,090 (“Management of near field communications using low power modes of an electronic device”); U.S. Pat. No. 9,602,956 (“System and method for device positioning with Bluetooth”); U.S. Pat. No. 9,576,213 (“Method, system and processor for instantly recognizing and positioning an object”); U.S. Pat. No. 9,569,439 (“Context-sensitive query enrichment”); U.S. Pat. No. 9,466,014 (“Systems and methods for recognizing information in objects using a mobile device”); U.S. Pat. No. 9,288,450 (“Methods for detecting and recognizing a moving object in video and devices thereof”); U.S. Pat. No. 9,275,299 (“System and method for identifying image locations showing the same person in different images”); U.S. Pat. No. 9,173,567 (“Triggering user queries based on sensor inputs”); U.S. Pat. No. 9,152,860 (“Methods and apparatus for capturing, processing, training, and detecting patterns using pattern recognition classifiers”); U.S. Pat. No. 9,074,906 (“Road shape recognition device”); U.S. Pat. No. 9,025,022 (“Method and apparatus for gesture recognition using a two dimensional imaging device”); U.S. Pat. No. 9,020,252 (“Image recognition method and image recognition system”); U.S. Pat. No. 8,781,995 (“Range queries in binary decision diagrams”); U.S. Pat. No. 8,774,504 (“System for three-dimensional object recognition and foreground extraction”); U.S. Pat. No. 8,763,038 (“Capture of stylized TV table data via OCR”); U.S. Pat. No. 8,635,015 (“Enhanced visual landmark for localization”); U.S. Pat. No. 8,289,390 (“Method and apparatus for total situational awareness and monitoring”); U.S. Pat. No. 7,733,223 (“Effectively documenting irregularities in a responsive user's environment”); U.S. Pat. No. 7,077,323 (“Bar code recognizing method and decoding apparatus for bar code recognition”); U.S. Pub. No. 2013/0173305 (“Evidence-based healthcare information management protocols”); and U.S. Pub. No. 2012/0221687 (“Systems, Methods and Apparatus for Providing a Geotagged Media Experience”). These documents are incorporated herein by reference to the extent not inconsistent herewith.
FIG. 10 illustrates another client device 1000 (as an instance of device 700, e.g.) displaying a screen image 1020 usable as a home screen (displayed upon app initiation or in response to a social media posting 986, e.g.). In some variants, for example, such a screen image 1020 may be displayed as a conditional response to a period of idleness that exceeds a given threshold (on the order of 1 minute or of 5 minutes, e.g.). Alternatively or additionally, screen image 1020 may include one or more of a daily performance metric 1081 (a current count of successful Ups/scans on a given day, e.g.) or a weekly performance metric 1082 (a current count of successful Ups/scans in a given week, e.g.) for a given pairing.
A section of a home screen (as depicted in screen image 1020, e.g.) may display buttons 1035A-C that identify the most recent scans performed by the user as well as one or more buttons 1035D to make other such buttons become visible (by scrolling through them, e.g.). Alternatively or additionally, the screen may provide buttons 1036A-C that identify the most recent content (stories, e.g.) as well as one or more buttons 1036D to make other such buttons become visible (by scrolling through them, e.g.). Alternatively or additionally, the screen may provide one or more buttons 1037A to implement a scan, one or more buttons 1037B to share information about the app (or to get codes that could subsequently be scanned by the app), one or more buttons 1037C to post content (to other social media platforms 590, e.g.), or one or more buttons 1037D to make other such buttons become visible (by scrolling through them, e.g.).
FIG. 11 depicts an exemplary operational flow 1100 incorporating one or more technologies. Operation 1110 describes discerning a set of one or more distinct physical items or real-world events (or both) from one or more observations (of an RFID code or in a photograph or video, e.g.) obtained by a mobile device at a single location. The single location may be identified by street address or by a facility name (a name of a park or restaurant, e.g.) at which the subjects (items or people, e.g.) were depicted, for example, or at which the real-world event (a festival or an accident, e.g.) occurred.
Operation 1115 describes expressing a period-of-day during which the one or more observations were obtained and a natural language identifier of the single location. Operation 1125 describes associating a tentative semantic descriptor (a place name or product name, e.g.) with some or all of the one or more observations. Operation 1130 describes presenting content matching the tentative semantic descriptor of the one or more distinct physical items. Operation 1135 describes receiving from a user a corroborated semantic descriptor of the one or more distinct physical items. Operation 1145 describes pre-populating a draft post with the corroborated semantic descriptor of the one or more distinct physical items, with the period of the day and the natural language identifier of the single location at which the mobile device obtained the one or more observations, all together with a graphical depiction of the one or more distinct physical items (a photograph or video clip, e.g.). Operation 1150 describes providing the user an opportunity to edit the draft post. Operation 1160 describes transmitting the draft post to one or more social media sites/platforms selected by the user.
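Operation 1145 might compose its pre-populated draft along the lines sketched below; the phrasing template is an assumption, chosen only to show the three pre-population inputs being combined with the graphical depiction:

    def pre_populate_draft(corroborated_descriptor, period_of_day,
                           location_name, media_ref):
        """Operations 1145-1150: assemble an editable draft post."""
        caption = (corroborated_descriptor + " at " + location_name
                   + " in the " + period_of_day)
        return {"caption": caption, "media": media_ref}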
FIG. 12 depicts a pre-population module 1200 incorporating one or more technologies.
FIG. 13 depicts another client device 1300 (as an instance of device 700, e.g.) displaying a screen image 1320A usable in an observation-scanning mode (displayed upon activation of a “SCAN” button 1037A, e.g.). In some variants, for example, such a screen image 1320A may be displayed when client device 1300 arrives at a facility of particular interest and as a prelude to a recognized-element-selection mode like that described herein (with reference to FIG. 14, e.g.). In an observation-scanning mode, output from one or more sensors (including a camera, e.g.) of client device 1300 may be presented (as a photograph 1325, e.g.) so that a user 580, 980 can confirm that one or more machine-recognizable object depictions 1393A-D (respectively depicting water bottles, a wine, and two sandwiches in this instance) are detectable by (one or more sensors of) client device 1300.
Within screen image 1320A is shown a “discover/explore” icon 1398A (depicting a telescope, e.g.) and a corresponding label 1397A. Also screen image 1320A may depict a “share app” icon 1398B (depicting an upward-swooping arrow, e.g.) and a corresponding label 1397B. Also within screen image 1320A is shown a “how-to” icon 1398C (depicting a video clip control, e.g.) and a corresponding label 1397C. Also within screen image 1320A is shown a “following” icon 1398D (depicting a rightward-pointing shape grouping, e.g.) and a corresponding label 1397D. As depicted and described below, one or more of these labels 1397A-D respectively identify user-actuated controls.
Such screen images may likewise include one or more (instances of) textual instruction labels 1397E (displaying “previous scans” with an up-pointing triangle 1396, e.g.) signaling that the symbols to which label 1397E is adjacent are user-actuated controls 1395A configured to trigger a retrieval of a prior scan result. Such screen images may likewise include one or more textual instruction labels 1397F (displaying “tap to begin,” e.g.) signaling that the symbols to which label 1397F is adjacent (including a physical-object-indicative icon 1398E, e.g.) are user-actuated controls 1395A configured to trigger a scan of a corresponding type.
Such screen images may likewise include one or more location-specifying labels 1397G (displaying a size-monotonic sequence of natural language place names, e.g.). Such labels may progress from “Lake Stevens Park” and “Lake Stevens City” to “Western Washington” and “USA” as a left-to-right expanding sequence, for example, or vice versa. This can occur, for example, in a context in which a label 1397G that refers to a largest geographic region that does not contain a home area 1278 of the user 580, 980 is used as a default (for generating an event descriptor 1282, e.g.). In some variants, for example, the largest natural language place names provided in the sequence identify a nation, state, county, prefecture, or other region having a centralized government. Alternatively or additionally, a currently-active selected location control 1395B may identify a particular large or small geographic region that contains the location associated with the observational data 1240 (identified by GPS coordinates 1243, e.g.) being processed (“scanned”), which may provide an enriched commercial context upon which a sponsor-identifying label 1397H (logo, e.g.) or ad copy label 1397I may depend. Moreover in some contexts such screen images may include one or more status-indicative labels 1397I (displaying “searching for codes” or the like) signaling that scanning is in progress. Also in some contexts, such screen images may include a scan initiation control 1395C containing an appropriate label 1397I (“detect from image,” e.g.) signaling that (if enabled) the control is configured to trigger processing as described herein.
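The default-region rule just described might be sketched as follows; the containment test is simplified to set membership for illustration:

    def default_event_region(places_small_to_large, regions_containing_home):
        """Pick the largest listed region that does not contain the home
        area 1278; fall back to the smallest if every region contains it."""
        for place in reversed(places_small_to_large):  # largest first
            if place not in regions_containing_home:
                return place
        return places_small_to_large[0]

    # With a home area in Western Washington:
    # default_event_region(
    #     ["Lake Stevens Park", "Lake Stevens City", "Western Washington", "USA"],
    #     {"Western Washington", "USA"})
    # returns "Lake Stevens City".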
FIG. 14 depicts another state of the client device 1300 of FIG. 13 displaying a screen image 1320B usable in a keyword-selection mode (after scanning to detect elements, and representing detected elements as keywords 1221, e.g.). In some variants, for example, such a screen image 1320B may be displayed when client device 1300 has processed observational data 1240 at a particular site enough to obtain a plurality of keywords 1221A-B associated with items of (nominally) apparent interest to a user. After image processing identifies several keywords 1221, a designation is made (by a machine learning module, e.g.) of which two or more keywords 1221 are likeliest to be used in an acceptable post (given a history 1276 of a user, a location-specifying label 1397G, the framing of the photograph or video, or other such available indicia, e.g.). In a context in which a detected element represented by keyword 1221A of “water bottle” is prioritized somewhat below a detected element represented by keyword 1221B of “wine” as an initial (default) ranking of likely elements of interest, for example, the highest-ranking keyword 1221B may be identified by one or more selection-indicative referent symbols (brackets, e.g.). This may occur, for example, in a context in which a frame of the photograph 1325 is cropped (as a subset or superset of photograph 1325, e.g.) to include or magnify the object depictions 1393 associated with the particular keywords 1221 simultaneously displayed in image 1320B. Zooming out as shown from photograph 1325 to photograph 1425, for example, will be appropriate if the displayed keywords 1221 include “blanket.” Or if the keywords identify an event like “picnic,” for example, a photograph 1425 with appropriate framing may preferably be selected to depict all physical objects (including the blanket) with which the event is apparently associated.
Within screen image 1320B is shown a “discover/explore” icon 1498A (depicting a telescope, e.g.) and a corresponding label 1497A. Also screen image 1320B may depict an “ad content cycle” icon 1498B and a corresponding label 1497B. Also within screen image 1320B is shown a “how-to” icon 1498C (depicting a video clip control, e.g.) and a corresponding label 1497C. Also within screen image 1320B is shown a “share app” icon 1498D (depicting a rightward-pointing shape grouping, e.g.) and a corresponding label 1497D. As depicted and described below, one or more of these labels 1497A-D respectively identify user-actuated controls.
Such screen images may likewise include one or more (instances of) textual instruction labels 1497E (displaying “move and zoom to your subject” with one or more up-pointing triangles 1396, e.g.) signaling that the symbols to which label 1497E refers are user-actuated controls 1395 configured to trigger panning and zooming (via conventional touchscreen controls, e.g.). Such screen images may likewise include a selected-location-identifying label 1497F confirming the now-active natural language location identifier (a location label 1397G, e.g.).
Such screen images may likewise include one or more sponsored content zones 1494A-D (containing ads selected in response to a category of the event or location, e.g.). In some contexts, for example, such a zone 1494A may include a canonic image of an automatically recognized element (a stock photo of a wine bottle, e.g.). Alternatively or additionally such screen images may include one or more provider-selection controls 1395D by which a user may order products (in response to a category of the event or location, e.g.).
Also in some contexts, such screen images may include a draft post generation control 1495 containing an appropriate label 1497H (“select & go,” e.g.) signaling that (if enabled) the control is configured to trigger generating one or more (instances of) posts as described herein.
FIG. 15 depicts an exemplary operational flow 1500 incorporating one or more technologies. Operation 1515 describes obtaining a profile of a user of a mobile device (one or more modules of special-purpose circuitry 622, 722 obtaining a profile 1275 of a user 580, 980 of a mobile device 700, 1000, 1300, e.g.).
Operation 1525 describes obtaining observational data associated with capture information that includes a geographic capture location (one or more modules of special-purpose circuitry 622, 722 capturing via the mobile device observational data 1240 in association with capture information 1248, e.g.).
Operation 1535 describes automatically presenting two or more keywords that each identify a recognized aspect of the observational data, wherein the keywords are ranked by default partly based on the capture location and partly based on the profile of the user (one or more modules of special-purpose circuitry 622, 722 causing the observational data 1240 to be sent to a remote processing facility that extracts the recognized aspects and returns the keywords 1221, e.g.).
Operation 1545 describes prioritizing a first one of the keywords over one or more others in response to user input (one or more modules of special-purpose circuitry 622, 722 obtaining user input 1273 identifying which among several simultaneously-displayed ranked keywords is preferred over the default value, e.g.).
Operation 1560 describes automatically obtaining an event descriptor (“trip to Georgia,” e.g.) partly based on the capture location and partly based on the user profile (one or more modules of special-purpose circuitry 622, 722 selecting a descriptor expressly relating to “Georgia” and not “Athens” even though the observational data 1240 was captured in Athens, e.g.). This can occur, for example, in a context in which the profile identifies a “home area” outside Georgia and in which system 500 is configured to generate an event descriptor that refers to a largest geographic region that does not contain a home area 1278 of the user 580, 980.
Operation 1570 describes automatically generating a draft post (one or more modules of special-purpose circuitry 622, 722 generating a post 1220 that includes a selected keyword 1221 and a default event descriptor 1282, e.g.).
Operation 1580 describes completing the post in an editing mode (one or more modules of special-purpose circuitry 622, 722 generating a complete validated post 886 by adding content from an utterance 163 or other user text 1224 to the draft post 1220, e.g.).
Operation 1590 describes transmitting a complete post to one or more social media platforms (one or more modules of special-purpose circuitry 622, 722 broadcasting the complete validated post 1220 to multiple social media platforms 590 selected by the user 580, 980, e.g.). This can occur, for example, in a context in which the complete post 1220 contains the first automatically recognized shape element 126A of the first photograph, the first representative element keyword 1221A, the first natural language event descriptor 1282, and the user text added into an editing field 1229 in response to one or more data input actions by the user 580, 980; in which the initial inclusion of such simultaneously-displayed keywords is likely to include an acceptable subject for a post but in which no acceptable subject for a post would likely be immediately visible otherwise insofar as each keyword is individually unlikely to be acceptable; and in which numerous postings having such pre-populated contextual element combinations would not be suitable for automation without synergistic hybrids of manual and automatic operations as described herein.
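Taken together, operational flow 1500 might be glued together as sketched below; every helper name and the ranking heuristic are illustrative assumptions, not a disclosed API:

    def facilitate_post(profile, recognized_keywords, capture_location,
                        home_area, user_text, platforms):
        # 1535: rank keywords, preferring ones the user's history 1276 favors.
        history = set(profile.get("history", []))
        ranked = sorted(recognized_keywords, key=lambda k: k in history,
                        reverse=True)
        first_keyword = ranked[0]                        # 1545: confirmed choice
        # 1560: event descriptor 1282 from capture location vs. home area 1278.
        event_descriptor = ("trip to " + capture_location
                            if capture_location != home_area else "at home")
        draft = first_keyword + ", " + event_descriptor  # 1570: draft post 1220
        complete_post = (draft + ". " + user_text).strip()  # 1580: editing mode
        return {p: complete_post for p in platforms}     # 1590: broadcast

    # facilitate_post({"history": ["wine"]}, ["water bottle", "wine"],
    #                 "Georgia", "Washington", "Great afternoon!",
    #                 ["PlatformA", "PlatformB"])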
All of the patents and other publications referred to above are incorporated herein by reference generally, including those identified in relation to particular new applications of existing techniques, to the extent not inconsistent herewith. While various system, method, article-of-manufacture, or other embodiments or aspects have been disclosed above, other combinations of embodiments or aspects will be apparent to those skilled in the art in view of the above disclosure. The various embodiments and aspects disclosed above are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated in the final claim set that follows.
In the numbered clauses below, specific combinations of aspects and embodiments are articulated in a shorthand form such that (1) according to respective embodiments, for each instance in which a “component” or other such identifiers appear to be introduced (with “a” or “an,” e.g.) more than once in a given chain of clauses, such designations may either identify the same entity or distinct entities; and (2) what might be called “dependent” clauses below may or may not incorporate, in respective embodiments, the features of “independent” clauses to which they refer or other features described above.
With respect to the numbered claims expressed below, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like “responsive to,” “related to,” or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.
Clauses
1. A social media post creation system comprising:
transistor-based circuitry configured to obtain a profile 1275 of a user 580, 980 of a mobile device 700, 1000, 1300;
transistor-based circuitry configured to obtain at the mobile device first observational data 1240 (a photograph 225, 1325 or other raw expression 140, e.g.) in association with capture information 1248, wherein the capture information includes a geographic capture location (specified with a facility identifier like “Lake Stevens Park” or with GPS coordinates 1243 therein, e.g.);
transistor-based circuitry configured to present automatically via the mobile device (at least) first and second (instances of) keywords 1221A-B; wherein the first keyword 1221A identifies a first automatically recognized element 126A (a shape or other machine-recognizable digital pattern 144, e.g.) of the observational data 1240; wherein the second keyword 1221B identifies a second automatically recognized element 126B of the observational data 1240; wherein a presentation layout 1320 ranks the second keyword (in priority, e.g.) above the first keyword 1221A (by displaying the second keyword 1221B at or among one or more selection-indicative referent symbols 1496 and the first keyword 1221A being shown in a non-selected position or otherwise in a position that signifies a lower prioritization, e.g.);
transistor-based circuitry configured to obtain automatically at the mobile device at least a first natural language event descriptor 1282 (“picnic” or “vacation in Greece,” e.g.) partly based on the geographic capture location and partly based on the profile 1275 of the user 580, 980 of the mobile device (selecting “Greece” and not “Athens” based on the profile 1275 identifying a home area 1278 outside Greece in association with the user 580, 980, e.g.);
transistor-based circuitry configured to generate automatically a first draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor; and
transistor-based circuitry configured to present (at least partly) in response to a pre-populated-post selection gesture (an indexing gesture or “edit now” button referring to the second draft post 1220, e.g.) at the mobile device the editing field 1229 containing a second draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor in an editing mode.
2. The system of System Clause 1, wherein all of the transistor-based circuitry is implemented on a single application-specific integrated circuit (ASIC).
3. The system of System Clause 1, wherein the transistor-based circuitry is distributed across two or more mutually remote facilities.
4. The system of ANY of the above System Clauses, comprising:
transistor-based circuitry configured to present the first draft post 1220 in an editing mode, wherein the first draft post 1220 also includes an entirety of the observational data 1240.
5. The system of ANY of the above System Clauses, wherein the transistor-based circuitry configured to obtain at the mobile device first observational data 1240 in association with the capture information 1248 comprises:
an accelerometer 133 in the mobile device; and
transistor-based circuitry configured to obtain at the mobile device one or more measurements 1241 in association with Global Positioning System (GPS) coordinates 1243, wherein the one or more measurements 1241 are a component of the first observational data 1240, wherein the GPS coordinates are a component of the capture information 1248, and wherein the first natural language event descriptor (“a quick run through Marymoor Dog Park,” e.g.) is based on at least one measurement 1241 from the accelerometer 133 (a step frequency, e.g.) in the mobile device.
6. The system of ANY of the above System Clauses, comprising:
transistor-based circuitry configured to generate a complete post 1220 by adding user text 1224 into the editing field 1229 in response to one or more data input actions by the user 580, 980; and
transistor-based circuitry configured to broadcast the complete post 1220 to one or more social media platforms 590 selected by the user 580, 980, wherein the complete post 1220 contains at least the first automatically recognized element 126A of the observational data 1240, the first keyword 1221A that identifies the first automatically recognized element 126A, the first natural language event descriptor 1282 partly based on the geographic capture location and partly based on the profile 1275 of the user 580, 980 of the mobile device, and the user text 1224 added into the editing field 1229 in response to one or more data input actions by the user 580, 980 in the editing mode.
7. The system of ANY of the above System Clauses, wherein the second draft post 1220 also includes a graphical component depicting at least the first automatically recognized element of the observational data 1240.
8. The system of ANY of the above System Clauses, wherein the editing mode comprises an insert mode.
9. The system of ANY of the above System Clauses, wherein the editing mode comprises an insert mode comprising speech recognition (performed by a speech recognition module, e.g.) by which an utterance 163 is converted into user text 1224.
10. The system of ANY of the above System Clauses, wherein at least the first and second keywords 1221A-B are presented simultaneously to the user 580, 980 and wherein the first keyword 1221A is thereafter prioritized above the second keyword in response to user input 1273 via the mobile device.
11. The system of ANY of the above System Clauses, comprising:
transistor-based circuitry configured to transmit the first observational data 1240 to a remote processing facility (comprising server 600, e.g.), wherein the first and second keywords 1221A-B are received at the mobile device (directly or indirectly) from the remote processing facility.
12. A social media post creation method comprising:
invoking transistor-based circuitry configured to obtain a profile 1275 of a user 580, 980 of a mobile device 700, 1000, 1300;
invoking transistor-based circuitry configured to obtain at the mobile device first observational data 1240 (captured via a camera 135, microphone 136, or other sensor, e.g.) in association with capture information 1248, wherein the capture information includes a geographic capture location;
invoking transistor-based circuitry configured to present automatically via the mobile device first and second keywords 1221A-B; wherein the first keyword 1221A identifies a first automatically recognized element 126A of the observational data 1240; wherein the second keyword 1221B identifies a second automatically recognized element 126B of the observational data 1240; wherein a presentation layout 1320 ranks the second keyword above the first keyword 1221A partly based on the geographic capture location and partly based on the profile 1275 of the user 580, 980;
invoking transistor-based circuitry configured to obtain automatically at the mobile device at least a first natural language event descriptor 1282 partly based on the geographic capture location and partly based on the profile 1275 of the user 580, 980 of the mobile device;
invoking transistor-based circuitry configured to generate automatically a first draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor; and
invoking transistor-based circuitry configured to present, in response to a pre-populated-post selection gesture at the mobile device, the editing field 1229 containing a second draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor in an editing mode.
13. The method of ANY of the above Method Clauses, comprising:
adding information (preferences 1272, e.g.) elicited from the user at the mobile device in a structured dialog of an app resident in the mobile device (an UPHEAVAL′ app 724, e.g.) to a history 1276 associated with the user 580, 980.
14. The method of ANY of the above Method Clauses, comprising:
recognizing a first subject by detecting a machine-recognizable item worn by the first subject (a human subject 507 identified by a badge or the like, e.g.) while presenting a scan screen (a screen image 1320, e.g.) of a resident app (an UPHEAVAL′ app 724, e.g.);
in response to a successful detection of the machine-recognizable item worn by the first subject, automatically triggering a lookup of the first subject (obtaining a history 1276 of that subject from a remote server 600, e.g.); and
including one or more elements from the lookup of the first subject in the first draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor.
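By way of illustration only, one way to implement the badge-triggered lookup of clause 14, assuming the machine-recognizable item is a QR badge and assuming a hypothetical /subjects/&lt;id&gt;/history endpoint on the remote server, is:
```python
# Illustrative sketch of clause 14; the server URL and endpoint are assumptions.
import cv2
import requests

def lookup_subject_from_badge(frame_bgr, server_url="https://example.invalid"):
    detector = cv2.QRCodeDetector()
    payload, _, _ = detector.detectAndDecode(frame_bgr)  # payload == badge ID, or ""
    if not payload:
        return None                                      # no badge detected in frame
    resp = requests.get(f"{server_url}/subjects/{payload}/history", timeout=5)
    resp.raise_for_status()
    return resp.json()                                   # history 1276 for pre-population
```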
15. The method of ANY of the above Method Clauses, wherein the invoking transistor-based circuitry configured to obtain the profile 1275 of the user 580, 980 of the mobile device is triggered by one or more processors (instances of processing unit 702, e.g.) of the mobile device locally executing a resident app (an UPHEAVAL′ app 724, e.g.).
16. The method of ANY of the above Method Clauses, wherein the invoking transistor-based circuitry configured to obtain the first observational data 1240 in association with capture information 1248 is performed by one or more processors of the mobile device locally capturing the first observational data 1240 by executing a resident app (an UPHEAVAL′ app 724, e.g.).
17. The method of ANY of the above Method Clauses, wherein the first natural language event descriptor 1282 comprises a prepositional phrase (“in the afternoon,” e.g.) that contextualizes a period-of-day identifier 491.
18. The method of ANY of the above Method Clauses, wherein the first natural language event descriptor 1282 comprises a phrase (“hosting a trade show,” e.g.) that contextualizes a vocational activity.
19. The method of ANY of the above Method Clauses, wherein the first natural language event descriptor 1282 comprises a prepositional phrase (“in Virginia,” e.g.) that contextualizes a natural language location identifier (a location label 1397G selected by the user among options presented by the system or provided as a default value by the system, e.g.).
20. The method of ANY of the above Method Clauses, wherein the observational data 1240 comprises a photograph depicting the first and second automatically recognized elements 126A-B and wherein the second draft post 1220 is pre-populated with a canonic image of the first automatically recognized element 126A (a manufacturer-provided photograph or other stock photo of a subject 507 in lieu of any portion of the photograph depicting the first and second automatically recognized elements 126A-B).
21. The method of ANY of the above Method Clauses, wherein the invoking the transistor-based circuitry configured to obtain at the mobile device first observational data 1240 in association with the capture information 1248 comprises:
invoking transistor-based circuitry configured to obtain at the mobile device one or more audio clips in association with Global Positioning System (GPS) coordinates 1243, wherein the one or more audio clips 1242 are a component of the first observational data 1240, wherein the GPS coordinates are a component of the capture information 1248, and wherein the first natural language event descriptor is (at least partly) based on the one or more audio clips 1242.
22. The method of ANY of the above Method Clauses, wherein the invoking the transistor-based circuitry configured to obtain at the mobile device first observational data 1240 in association with the capture information 1248 comprises:
invoking transistor-based circuitry configured to obtain at the mobile device one or more photographs (as a video clip 1242, e.g.) in association with Global Positioning System (GPS) coordinates 1243, wherein the one or more photographs 225 are a component of the first observational data 1240, wherein the GPS coordinates are a component of the capture information 1248, and wherein the first natural language event descriptor is (at least partly) based on the one or more photographs.
23. The method of ANY of the above Method Clauses, wherein the invoking the transistor-based circuitry configured to obtain at the mobile device first observational data 1240 in association with the capture information 1248 comprises:
invoking transistor-based circuitry configured to receive at the mobile device one or more location-specific historical indicia (measurements 1241 from a published weather report, e.g.) in association with Global Positioning System (GPS) coordinates 1243, wherein the one or more measurements 1241 are a component of the first observational data 1240, wherein the GPS coordinates are a component of the capture information 1248, and wherein the first natural language event descriptor (“a chilly rainy afternoon,” e.g.) is based on the one or more location-specific historical indicia (of weather, e.g.).
24. The method of ANY of the above Method Clauses, wherein the invoking the transistor-based circuitry configured to obtain at the mobile device first observational data 1240 in association with the capture information 1248 comprises:
invoking transistor-based circuitry configured to receive at the mobile device one or more location-specific historical indicia (measurements 1241 from a published weather report, e.g.) in association with Global Positioning System (GPS) coordinates 1243, wherein the one or more measurements 1241 are a component of the first observational data 1240, wherein the GPS coordinates are a component of the capture information 1248, and wherein the first natural language event descriptor (“a chilly rainy afternoon in Richmond,” e.g.) is partly based on the one or more location-specific historical indicia and partly based on a home area 1278 of the user 580, 980 (presenting “Richmond” rather than “Virginia” because the user lives near Richmond, e.g.).
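By way of illustration only, clauses 23-24 may be realized roughly as follows; fetch_weather_report and place_names_for are hypothetical helpers (the latter returning place names from narrow to broad), and the temperature threshold is arbitrary:
```python
# Minimal sketch of clauses 23-24; helper names and thresholds are assumptions.
def build_event_descriptor(gps, profile, fetch_weather_report, place_names_for):
    report = fetch_weather_report(gps)        # e.g., {"temp_c": 8, "rain": True}
    mood = []
    if report["temp_c"] < 10:
        mood.append("chilly")
    if report["rain"]:
        mood.append("rainy")
    names = place_names_for(gps)              # e.g., ["Richmond", "Virginia"]
    # Clause 24: prefer the finer-grained place name when the user lives nearby.
    place = names[0] if profile.get("home_area") in names else names[-1]
    return "a " + " ".join(mood + ["afternoon"]) + " in " + place
    # -> "a chilly rainy afternoon in Richmond" for a Richmond-area user
```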
25. The method of ANY of the above Method Clauses, wherein the invoking the transistor-based circuitry configured to present automatically via the mobile device the first and second keywords 1221A-B comprises:
automatically recognizing a shape element 126A as the first automatically recognized element of the observational data 1240.
26. The method of ANY of the above Method Clauses, wherein the invoking the transistor-based circuitry configured to present automatically via the mobile device the first and second keywords 1221A-B comprises:
automatically recognizing a first portion of the first photograph delimited by a first closed color boundary of the first photograph as a first shape element 126A and a second portion of the first photograph delimited by a second closed color boundary of the first photograph as a second shape element 126B, wherein the first and second shape elements 126A-B respectively comprise the first and second automatically recognized elements of the observational data 1240.
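By way of illustration only, one plausible reading of the closed-color-boundary recognition of clause 26 can be sketched with OpenCV contours; the Canny thresholds and minimum area below are illustrative choices, not values from the specification:
```python
# Sketch of clause 26 under one plausible reading: each sufficiently large
# closed contour of an edge map delimits one candidate shape element portion.
import cv2
import numpy as np

def find_shape_elements(photo_bgr, min_area=1000):
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                   # color boundaries -> edges
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # seal small gaps
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Return the bounding box of each closed boundary's interior region.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```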
27. The method of ANY of the above Method Clauses, wherein the event is a leisure activity.
28. The method of ANY of the above Method Clauses, wherein the event is at least one of a vacation or a festival attended by the user.
29. The method of ANY of the above Method Clauses, wherein one or more processors (instances of processing unit 702, e.g.) generate the first and second draft posts within the mobile device by executing post-generation code received from another client device (of a post-generation crowdworker or other content-generation professional, e.g.).
30. The method of ANY of the above Method Clauses, wherein a label 1397G that refers to a largest geographic region that contains the location associated with the observational data 1240 (identified by GPS coordinates 1243, e.g.) but does not contain a home area 1278 of the user 580, 980 is used as a default (for generating an event descriptor 1282, e.g.).
31. The method of ANY of the above Method Clauses, wherein a label 1397G that refers to a largest geographic region that contains the location associated with the observational data 1240 (identified by GPS coordinates 1243, e.g.) but does not contain a home area 1278 of the user 580, 980 is used as a default (for generating an event descriptor 1282, e.g.) and wherein a name of the largest geographic region among those simultaneously presented (among two or more labels 1397G that identify natural language place names, e.g.) identifies a region having a sovereign central government (being a unified nation like “South Africa” rather than a heterogeneous region like “Africa,” e.g.).
32. The method of ANY of the above Method Clauses, comprising:
automatically ranking the second keyword 1221B at a higher priority than that of the first keyword 1221A at least partly based on the geographic capture location (the second keyword being more popular than the first among posts associated with a vicinity of that location or with other facilities of a general facility type that includes a facility containing the geographic capture location, e.g.).
33. The method of ANY of the above Method Clauses, comprising:
automatically ranking the second keyword 1221B at a higher priority than that of the first keyword 1221A at least partly based on a history 1276 of the user 580, 980; wherein the profile 1275 of the user 580, 980 includes the history 1276 of the user 580, 980 and wherein the history 1276 identifies one or more prior posts incorporating one or more elements of the second keyword 1221B (mentioning “wine,” e.g.).
34. The method of ANY of the above Method Clauses, comprising:
automatically ranking the second keyword 1221B at a higher priority than that of the first keyword 1221A at least partly based on a schedule 1277 of the user 580, 980; wherein the profile 1275 of the user 580, 980 includes the schedule 1277 of the user 580, 980 and wherein the schedule 1277 contains one or more tasks incorporating one or more elements of the second keyword 1221B (a task of “buy wine” or a calendar entry of “wine and cheese party,” e.g.).
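By way of illustration only, the ranking criteria of clauses 32-34 could be combined in a simple additive score, so that, e.g., “wine” outranks “glass” for a user whose history and schedule both mention wine; the equal weighting and the local_popularity helper are assumptions, not patent text:
```python
# Sketch of clauses 32-34: additive scoring for keyword priority.
def rank_keywords(keywords, gps, profile, local_popularity):
    def score(kw):
        s = local_popularity(kw, gps)                         # clause 32: vicinity popularity
        s += sum(kw in post for post in profile["history"])   # clause 33: prior posts
        s += sum(kw in task for task in profile["schedule"])  # clause 34: tasks/calendar
        return s
    return sorted(keywords, key=score, reverse=True)
```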
35. The method of ANY of the above Method Clauses, wherein the first natural language location descriptor 1282 has been automatically selected as a default value in preference over second and third natural language location descriptors partly based on the second natural language location descriptor 1282 being smaller than the first natural language descriptor 1282 (a subset thereof, e.g.) and partly based on the third natural language descriptor 1282 including a home area 1278 associated with the user 580, 980 (with a user account or current location of the user, e.g.). This can occur, for example, in a context in which the user is from or in Oregon and in which the first descriptor 1282 of “Georgia” is therefore (presumptively) preferred over a narrower second descriptor of “Athens” and an overbroad descriptor 1282 of “United States” or “North America.”
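By way of illustration only, the default-label logic of clauses 30-31 and 35 can be sketched as a walk from the narrowest to the broadest candidate label; the precomputed contains-home flags are an assumption about how the candidates arrive:
```python
# Sketch of the clauses 30/35 default rule: keep the broadest label that still
# excludes the user's home area 1278. Candidates are ordered narrow to broad.
def default_location_label(candidates):
    chosen = None
    for label, contains_home_area in candidates:
        if contains_home_area:
            break          # any broader label would also cover the home area
        chosen = label
    return chosen

# Clause 35 example: an Oregon-based user posting from Athens, Georgia.
labels = [("Athens", False), ("Georgia", False),
          ("United States", True), ("North America", True)]
assert default_location_label(labels) == "Georgia"
```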
36. The method of ANY of the above Method Clauses, wherein the automatically generating the first draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor comprises:
    • generating the first draft post 1220 pre-populated with both the first keyword 1221A and the first natural language event descriptor 1282 as an automatic and conditional response (in lieu of a draft post 1220 generation control 1495A being activated, e.g.) to the user actuating a control 1395 associated with the first keyword 1221A (displaying or adjacent the first keyword 1221A, e.g.).
37. The method of ANY of the above Method Clauses, comprising:
presenting the first draft post 1220 in an editing mode.
38. The method of ANY of the above Method Clauses, comprising:
presenting the first draft post 1220 in an editing mode, wherein the first draft post 1220 also includes a graphical component depicting at least the first automatically recognized element 126A of the observational data 1240.
39. The method of ANY of the above Method Clauses, comprising:
presenting the first draft post 1220 in an editing mode, wherein the first draft post 1220 also includes a graphical component depicting at least the first automatically recognized element 126A of the observational data 1240 but does not include an entirety of the observational data 1240.
40. The method of ANY of the above Method Clauses, wherein the second draft post 1220 also includes a graphical component depicting at least the first automatically recognized element of the observational data 1240.
41. The method of ANY of the above Method Clauses, wherein the editing mode comprises an overwrite mode.
42. The method of ANY of the above Method Clauses, wherein (at least) the first and second keywords 1221A-B are presented simultaneously to the user 580, 980 and wherein the first keyword 1221A is thereafter prioritized above the second keyword in response to user input 1273 via the mobile device.
43. The method of ANY of the above Method Clauses, comprising:
invoking transistor-based circuitry configured to transmit the first observational data 1240 to a remote processing facility (comprising server 600, e.g.), wherein the first and second keywords 1221A-B are received at the mobile device (directly or indirectly) from the remote processing facility.
44. The method of ANY of the above Method Clauses, comprising:
after generating a complete post 1220 by adding user text 1224 into the editing field 1229 in response to one or more data input actions (one or more utterances 163 or keystrokes, e.g.) by the user 580, 980, broadcasting the complete post 1220 (at least) to first and second social media platforms 590 selected by the user 580, 980 (in the profile 1275 of the user 580, 980, e.g.); wherein the complete post 1220 contains at least the first automatically recognized element 126A of the observational data 1240, the first keyword 1221A that identifies the first automatically recognized element 126A, the first natural language event descriptor 1282 partly based on the geographic capture location and partly based on the profile 1275 of the user 580, 980 of the mobile device, and the user text 1224 added into the editing field 1229 in response to one or more data input actions by the user 580, 980 in the editing mode.
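By way of illustration only, the broadcast step of clause 44 might assemble and fan out the complete post as follows, reusing the hypothetical DraftPost sketch above; the publishers mapping (platform name to publish callable) is an assumption, since real social media APIs and their authentication flows are platform-specific:
```python
# Sketch of clause 44: assembling the complete post 1220 and broadcasting it
# to the platforms 590 the user selected in their profile 1275.
def broadcast_complete_post(draft, user_text, element_image, publishers, selected):
    complete = {
        "image": element_image,                      # recognized element 126A
        "keyword": draft.keywords[0],                # first keyword 1221A
        "event_descriptor": draft.event_descriptor,  # descriptor 1282
        "text": user_text,                           # user text 1224
    }
    # One publish call per selected platform; receipts keyed by platform name.
    return {name: publishers[name](complete) for name in selected}
```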
With respect to the numbered claims expressed below, those skilled in the art will appreciate that recited operations therein may generally be performed in any order. Also, although various operational flows are presented in sequence(s), it should be understood that the various operations may be performed in other orders than those which are illustrated, or may be performed concurrently. Examples of such alternate orderings may include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like “responsive to,” “related to,” or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.

Claims (19)

What is claimed is:
1. A social media post creation method comprising:
obtaining a profile of a user of a mobile device;
at said mobile device capturing first observational data including a first photograph in association with capture information, wherein said capture information includes a geographic capture location;
transmitting said first photograph to a remote processing facility;
automatically presenting via said mobile device at least first and second keywords;
wherein said first keyword identifies a first automatically recognized shape element of said first photograph; wherein said second keyword identifies a second automatically recognized shape element of said first photograph; wherein said first and second keywords are received at said mobile device from said remote processing facility; wherein a presentation layout ranks said second keyword above said first keyword partly based on said geographic capture location and partly based on said profile of said user; and wherein at least said first and second keywords are presented simultaneously to said user;
obtaining a user preference update via said mobile device, wherein said first keyword is thereafter prioritized above said second keyword in response to said user preference update via said mobile device;
automatically obtaining at said mobile device at least a first natural language event descriptor partly based on said geographic capture location and partly based on said profile of said user of said mobile device;
automatically generating a first draft post pre-populated with both said first keyword and said first natural language event descriptor;
in response to a pre-populated-post selection gesture at said mobile device presenting an editing field containing a second draft post pre-populated with both said first keyword and said first natural language event descriptor in an editing mode; wherein said second draft post also includes a graphical component depicting at least said first automatically recognized shape element of said first photograph; and
after generating a complete post by adding user text into said editing field in response to one or more data input actions by said user, broadcasting said complete post at least to a first social media platform selected by said user; wherein said complete post contains at least said first automatically recognized shape element of said first photograph, said first keyword that identifies said first automatically recognized shape element, said first natural language event descriptor partly based on said geographic capture location and partly based on said profile of said user of said mobile device, and said user text added into said editing field in response to said one or more data input actions by said user.
2. The method of claim 1, comprising:
automatically ranking said second keyword at a higher priority by default than that of said first keyword at least partly based on said geographic capture location.
3. The method of claim 1, comprising:
automatically ranking said second keyword at a higher priority by default than that of said first keyword at least partly based on a history of said user; wherein said profile of said user includes said history of said user and wherein said history identifies one or more prior posts incorporating one or more elements of said second keyword.
4. The method of claim 1, comprising:
automatically ranking said second keyword at a higher priority by default than that of said first keyword at least partly based on a schedule of said user; wherein said profile of said user includes said schedule of said user and wherein said schedule contains one or more tasks incorporating one or more elements of said second keyword.
5. The method of claim 1, wherein said first natural language location descriptor has been automatically selected as a default value in preference over second and third natural language location descriptors partly based on said second natural language location descriptor being smaller than said first natural language descriptor and partly based on said third natural language descriptor including a home area associated with said user.
6. The method of claim 1, wherein said automatically generating said first draft post pre-populated with both said first keyword and said first natural language event descriptor comprises:
generating said first draft post pre-populated with both said first keyword and said first natural language event descriptor as an automatic and conditional response to said user actuating a control associated with said first keyword.
7. A social media post creation method comprising:
obtaining a profile of a user of a mobile device;
obtaining at said mobile device first observational data including a first photograph in association with capture information wherein said capture information includes a geographic capture location;
automatically presenting via said mobile device at least first and second keywords; wherein said first keyword identifies a first automatically recognized shape element of said first photograph; wherein said second keyword identifies a second automatically recognized shape element of said first photograph; and wherein a presentation layout ranks said second keyword above said first keyword partly based on said geographic capture location and partly based on said profile of said user;
obtaining a user preference update via said mobile device, wherein said first keyword is thereafter prioritized above said second keyword in response to said user preference update via said mobile device;
automatically obtaining at said mobile device at least a first natural language event descriptor partly based on said geographic capture location and partly based on said profile of said user of said mobile device;
automatically generating a first draft post pre-populated with both said first keyword and said first natural language event descriptor;
presenting in response to a pre-populated-post selection gesture at said mobile device an editing field containing a second draft post pre-populated with both said first keyword and said first natural language event descriptor in an editing mode; wherein said second draft post also includes a graphical component depicting at least said first automatically recognized shape element of said first photograph; and
transmitting a complete post at least to a first social media platform selected by said user; wherein said complete post contains at least said first automatically recognized shape element of said first photograph, said first keyword that identifies said first automatically recognized shape element, said first natural language event descriptor partly based on said geographic capture location and partly based on said profile of said user of said mobile device, and said user text added into said editing field in response to one or more data input actions by said user.
8. The method of claim 7, comprising:
transmitting said first observational data to a remote processing facility; and
receiving said first and second keywords at said mobile device from said remote processing facility.
9. The method of claim 7, comprising:
recognizing a first subject by detecting a machine-recognizable item worn by said first subject while presenting a scan screen of a resident app;
in response to a successful detection of said machine-recognizable item worn by said first subject automatically triggering a lookup of said first subject; and
including one or more elements from said lookup of said first subject in said first draft post pre-populated with both said first keyword and said first natural language event descriptor.
10. The method of claim 7, wherein said first natural language event descriptor comprises a prepositional phrase that contextualizes a natural language location identifier.
11. The method of claim 7, wherein said capturing said first observational data including said first photograph in association with said capture information comprises:
receiving at said mobile device one or more location-specific historical indicia in association with Global Positioning System (GPS) coordinates, wherein said one or more measurements are a component of said first observational data including said first photograph, wherein said GPS coordinates are a component of said capture information describing said geographic capture location, and wherein said first natural language event descriptor is based on said one or more location-specific historical indicia.
12. The method of claim 7, wherein said capturing said first observational data including said first photograph in association with said capture information comprises:
receiving at said mobile device one or more location-specific historical indicia in association with Global Positioning System (GPS) coordinates, wherein said one or more measurements are a component of said first observational data including said first photograph, wherein said GPS coordinates are a component of said capture information describing said geographic capture location, and wherein said first natural language event descriptor is partly based on said one or more location-specific historical indicia and partly based on a home area of said user.
13. The method of claim 7, wherein one or more processors generate said first and second draft posts within said mobile device by executing post-generation code received from another client device.
14. The method of claim 7, wherein a label that refers to a largest geographic region that contains said location associated with said observational data but does not contain a home area of said user is used as a default, and wherein a name of said largest geographic region among those simultaneously presented identifies a region having a sovereign central government.
15. The method of claim 7, comprising:
automatically ranking said second keyword at a higher priority than that of said first keyword at least partly based on said geographic capture location.
16. The method of claim 7, comprising:
automatically ranking said second keyword at a higher priority than that of said first keyword at least partly based on a history of said user, wherein said profile of said user includes said history of said user, and wherein said history identifies one or more prior posts incorporating one or more elements of said second keyword.
17. A social media post creation system comprising:
one or more modules of special purpose transistor-based circuitry that include a user input configured to obtain a profile of a user of a mobile device;
one or more modules of special purpose transistor-based circuitry that include a camera configured to obtain at said mobile device first observational data including a first photograph in association with capture information wherein said capture information includes a geographic capture location;
one or more modules of special purpose transistor-based circuitry that include display hardware configured to present automatically via said mobile device at least first and second keywords; wherein said first keyword identifies a first automatically recognized shape element of said first photograph; wherein said second keyword identifies a second automatically recognized shape element of said first photograph; and wherein a presentation layout ranks said second keyword above said first keyword partly based on said geographic capture location and partly based on said profile of said user;
one or more modules of special purpose transistor-based circuitry that include a user input configured to obtain a user preference update via said mobile device, wherein said first keyword is thereafter prioritized above said second keyword in response to said user preference update via said mobile device;
one or more modules of special purpose transistor-based circuitry that include a descriptor-handling pre-population module configured to obtain automatically at said mobile device at least a first natural language event descriptor partly based on said geographic capture location and partly based on said profile of said user of said mobile device;
one or more modules of special purpose transistor-based circuitry that include a post-handling pre-population module configured to generate automatically a first draft post pre-populated with both said first keyword and said first natural language event descriptor;
one or more modules of special purpose transistor-based circuitry that include an editing-mode module configured to present in response to a pre-populated-post selection gesture at said mobile device an editing field containing a second draft post pre-populated with both said first keyword and said first natural language event descriptor in an editing mode; wherein said second draft post also includes a graphical component depicting at least said first automatically recognized shape element of said first photograph; and
one or more modules of special purpose transistor-based circuitry that include a network interface configured to transmit a complete post at least to a first social media platform selected by said user; wherein said complete post contains at least said first automatically recognized shape element of said first photograph, said first keyword that identifies said first automatically recognized shape element, said first natural language event descriptor partly based on said geographic capture location and partly based on said profile of said user of said mobile device, and said user text added into said editing field in response to one or more data input actions by said user.
18. The system of claim 17, comprising:
an accelerometer in said mobile device; and
means for obtaining at said mobile device one or more measurements in association with Global Positioning System (GPS) coordinates, wherein said one or more measurements are a component of said first observational data, wherein said GPS coordinates are a component of said capture information, and wherein said first natural language event descriptor is based on at least one measurement from said accelerometer in said mobile device.
19. The method of claim 1, wherein said transmitting said first photograph includes transmitting said capture information that includes said geographic capture location to said remote processing facility, wherein said remote processing facility comprises a neural network, and wherein said automatically presenting via said mobile device at least first and second keywords comprises receiving from said neural network of said remote processing facility an automatic ranking of said second keyword above said first keyword.
US15/948,689 | 2017-07-28 | 2018-04-09 | Social media post facilitation systems and methods | Active | US10382383B2 (en)

Priority Applications (2)

Application Number | Publication | Priority Date | Filing Date | Title
US15/948,689 | US10382383B2 (en) | 2017-07-28 | 2018-04-09 | Social media post facilitation systems and methods
US16/538,707 | US10972254B2 (en) | 2017-07-28 | 2019-08-12 | Blockchain content reconstitution facilitation systems and methods

Applications Claiming Priority (2)

Application Number | Publication | Priority Date | Filing Date | Title
US201762538445P | 2017-07-28 | 2017-07-28
US15/948,689 | US10382383B2 (en) | 2017-07-28 | 2018-04-09 | Social media post facilitation systems and methods

Related Child Applications (1)

Application Number | Relation | Publication | Priority Date | Filing Date | Title
US16/538,707 | Continuation-In-Part | US10972254B2 (en) | 2017-07-28 | 2019-08-12 | Blockchain content reconstitution facilitation systems and methods

Publications (2)

Publication Number | Publication Date
US20190036866A1 (en) | 2019-01-31
US10382383B2 (en) | 2019-08-13

Family

ID=65038389

Family Applications (1)

Application Number | Status | Publication | Title
US15/948,689 | Active | US10382383B2 (en) | Social media post facilitation systems and methods

Country Status (1)

Country | Link
US (1) | US10382383B2 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8595317B1 (en) | 2012-09-14 | 2013-11-26 | Geofeedr, Inc. | System and method for generating, accessing, and updating geofeeds
US8655983B1 (en)* | 2012-12-07 | 2014-02-18 | Geofeedr, Inc. | System and method for location monitoring based on organized geofeeds
US10614426B2 (en)* | 2017-11-27 | 2020-04-07 | International Business Machines Corporation | Smarter event planning using cognitive learning


Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7077323B2 (en) | 2002-10-10 | 2006-07-18 | Fujitsu Limited | Bar code recognizing method and decoding apparatus for bar code recognition
US8289390B2 (en) | 2004-07-28 | 2012-10-16 | Sri International | Method and apparatus for total situational awareness and monitoring
US7733223B2 (en) | 2007-08-17 | 2010-06-08 | The Invention Science Fund I, Llc | Effectively documenting irregularities in a responsive user's environment
US8763038B2 (en) | 2009-01-26 | 2014-06-24 | Sony Corporation | Capture of stylized TV table data via OCR
US9074906B2 (en) | 2009-07-29 | 2015-07-07 | Hitachi Automotive Systems, Ltd. | Road shape recognition device
US8635015B2 (en) | 2009-12-17 | 2014-01-21 | Deere & Company | Enhanced visual landmark for localization
US9275299B2 (en) | 2010-07-23 | 2016-03-01 | Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno | System and method for identifying image locations showing the same person in different images
US20120221687A1 (en) | 2011-02-27 | 2012-08-30 | Broadcastr, Inc. | Systems, Methods and Apparatus for Providing a Geotagged Media Experience
US9173567B2 (en) | 2011-05-13 | 2015-11-03 | Fujitsu Limited | Triggering user queries based on sensor inputs
US9288450B2 (en) | 2011-08-18 | 2016-03-15 | Infosys Limited | Methods for detecting and recognizing a moving object in video and devices thereof
US8781995B2 (en) | 2011-09-23 | 2014-07-15 | Fujitsu Limited | Range queries in binary decision diagrams
US8774504B1 (en) | 2011-10-26 | 2014-07-08 | Hrl Laboratories, Llc | System for three-dimensional object recognition and foreground extraction
US9569439B2 (en) | 2011-10-31 | 2017-02-14 | Elwha Llc | Context-sensitive query enrichment
US20130173305A1 (en) | 2011-12-30 | 2013-07-04 | Elwha Llc | Evidence-based healthcare information management protocols
US9262596B1 (en)* | 2012-04-06 | 2016-02-16 | Google Inc. | Controlling access to captured media content
US9466014B2 (en) | 2012-08-06 | 2016-10-11 | A2iA S.A. | Systems and methods for recognizing information in objects using a mobile device
US20140059040A1 (en)* | 2012-08-24 | 2014-02-27 | Samsung Electronics Co., Ltd. | Method of recommending friends, and server and terminal therefor
US9791995B2 (en) | 2012-09-28 | 2017-10-17 | Pfu Limited | Form input/output apparatus, form input/output method, and program
US9020252B2 (en) | 2012-10-19 | 2015-04-28 | National Taiwan University Of Science And Technology | Image recognition method and image recognition system
US9025022B2 (en) | 2012-10-25 | 2015-05-05 | Sony Corporation | Method and apparatus for gesture recognition using a two dimensional imaging device
US9576213B2 (en) | 2013-02-08 | 2017-02-21 | Chuck Fung | Method, system and processor for instantly recognizing and positioning an object
US9152860B2 (en) | 2013-05-10 | 2015-10-06 | Tantrum Street LLC | Methods and apparatus for capturing, processing, training, and detecting patterns using pattern recognition classifiers
US9603090B2 (en) | 2013-08-08 | 2017-03-21 | Apple Inc. | Management of near field communications using low power modes of an electronic device
US9606363B2 (en) | 2014-05-30 | 2017-03-28 | Sony Interactive Entertainment America Llc | Head mounted device (HMD) system having interface with mobile computing device for rendering virtual reality content
US9603569B2 (en) | 2014-07-11 | 2017-03-28 | Verily Life Sciences Llc | Positioning a wearable device for data collection
US9603123B1 (en) | 2015-06-04 | 2017-03-21 | Apple Inc. | Sending smart alerts on a device at opportune moments using sensors
US9602956B1 (en) | 2015-08-25 | 2017-03-21 | Yahoo! Inc. | System and method for device positioning with bluetooth low energy distributions

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11416126B2 (en)* | 2017-12-20 | 2022-08-16 | Huawei Technologies Co., Ltd. | Control method and apparatus
US20230004267A1 (en)* | 2017-12-20 | 2023-01-05 | Huawei Technologies Co., Ltd. | Control Method and Apparatus
US20220358323A1 (en)* | 2020-01-23 | 2022-11-10 | Rebls, Inc. | Machine learning systems and methods for facilitating parcel combination
US11676228B2 | 2020-01-23 | 2023-06-13 | Rebls, Inc. | Systems, methods, and program products for facilitating parcel combination
US12406144B2 (en)* | 2023-05-19 | 2025-09-02 | Raj Abhyanker | Linguistic analysis to automatically generate a hypothetical likelihood of confusion office action using Dupont factors

Also Published As

Publication number | Publication date
US20190036866A1 (en) | 2019-01-31

Similar Documents

Publication | Title
US10382383B2 (en) | Social media post facilitation systems and methods
US11039053B2 (en) | Remotely identifying a location of a wearable apparatus
US10887486B2 (en) | Wearable device and methods for transmitting information based on physical distance
US11164213B2 (en) | Systems and methods for remembering held items and finding lost items using wearable camera systems
WO2018012924A1 (en) | Augmented reality device and operation thereof
US10972254B2 (en) | Blockchain content reconstitution facilitation systems and methods
EP2040185B1 (en) | User Interface for Selecting a Photo Tag
JP2017174339A (en) | Information presentation device and information processing system
US20250024135A1 (en) | Multi shutter camera app that selectively sends images to different artificial intelligence and innovative platforms that allow for fast sharing and informational purposes
US20240184972A1 (en) | Electronic device for providing calendar ui displaying image and control method thereof

Legal Events

Code | Title | Description
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | Free format text: PATENTED CASE
FEPP | Fee payment procedure | Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PTGR); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY | Year of fee payment: 4

