INCORPORATION BY REFERENCE/CROSS REFERENCE TO RELATED APPLICATIONS
The present patent application is a continuation of U.S. Ser. No. 14/071,372, which was filed on Nov. 4, 2013, the entire content of which is hereby incorporated herein by reference.
BACKGROUND
Communication applications, such as email, text messaging applications, social networking applications, and the like, are widely available on mobile devices and other computer systems. These communication applications are largely text based, allowing a user to type a text, message, or letter for transmission to another user in order to communicate. An ideograph is a graphic character that represents a thing or idea rather than a word. Communication applications have been augmented to convey emotion in a number of ways. Emoticons are a type of ideograph in which a text based pictorial representation of a facial expression is used to convey tone or emotion within a text based message. Emojis are another type of ideograph used in electronic communication to convey tone or emotion, and appear inline in a communication. Digital “stickers” are yet another type of ideograph, but often appear outside the normal line of text in an electronic communication. Emoticons, emojis, and stickers may be animated or non-animated. In some applications, “packs” of multiple ideographs are offered to users and presented as long strings of characters which, once selected by a user, are inserted as the desired ideograph. However, the character strings may be difficult to remember and prone to confusion, as there are no standards among chat platforms and communication applications for the character strings used to insert particular ideographs. Ideographs may also be placed in menus of these communication applications or may be ported into a communication application from another application on the computing system. In order to include ideographs in text communications, the user may be forced to search a large library of emojis/emoticons/stickers, choose the ideograph that best represents the emotion or feeling the user wishes to convey, and then enter the ideograph into the text communication.
In any communication application, whether mobile, tablet, desktop, or web, it is difficult for users to quickly locate ideographs, such as emojis, emoticons, or stickers, to include in a text or chat while remaining within the application, in order to quickly and accurately represent and convey a specified emotion or concept. Therefore, it would be beneficial to have a software technology system which can recognize words and automatically suggest ideographs suitable to replace a word or suitable for insertion into a message containing the word.
SUMMARY
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used as an aid in limiting the scope of the claimed subject matter.
In one embodiment, a non-transitory processor readable medium is described having processor executable instructions that, when executed by a processor, cause the processor to monitor application data from a user device via a communications network, analyze the application data for one or more indicators, retrieve one or more ideographs indicative of a graphical representation of one or more meanings of the one or more indicators, and transmit the one or more ideographs to the user device via the communications network. The application data is indicative of text entered into an application by a user via the user device. The one or more indicators are at least one word or a portion of the text entered into the application and have one or more meanings. For example, the application data may include the word “happy.” This word is an indicator that the user of the user device is conveying that the user, or someone the user references, is in a good mood. One or more ideographs, such as a smiley face, may be automatically retrieved and entered into the application data. The one or more ideographs are retrieved without immediate user action beyond entering the descriptive text into the application.
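For illustration only, the monitor/analyze/retrieve flow described above may be sketched in Python as follows. The word-to-ideograph mapping and function names are hypothetical examples chosen for clarity, not part of the disclosed system:

```python
# Hypothetical indicator-to-ideograph mapping: each indicator word is linked
# to one or more ideographs representing its meaning.
IDEOGRAPH_MAP = {
    "happy": ["\U0001F600"],   # grinning face
    "sad": ["\U0001F622"],     # crying face
    "coffee": ["\u2615"],      # hot beverage
}

def analyze(application_data: str) -> list[str]:
    """Analyze entered text and return any indicator words found in it."""
    return [w.strip(".,!?") for w in application_data.lower().split()
            if w.strip(".,!?") in IDEOGRAPH_MAP]

def retrieve(indicators: list[str]) -> list[str]:
    """Retrieve the ideographs graphically representing each indicator."""
    return [glyph for word in indicators for glyph in IDEOGRAPH_MAP[word]]

indicators = analyze("I am so happy today!")
suggestions = retrieve(indicators)
```

In this sketch, the text “I am so happy today!” yields the indicator “happy,” and the corresponding smiley-face ideograph is retrieved with no user action beyond typing the text.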
In another embodiment, a mobile device is described with a processor, an input device, and a non-transitory processor readable medium storing processor executable instructions that, when executed, cause the processor to receive data indicative of text entered into an application by the input device, analyze the data for one or more indicators, retrieve one or more ideographs graphically representing one or more meanings of the one or more indicators, and display at least a portion of the text and the one or more ideographs to the user in the application. The application is stored on the non-transitory processor readable medium. The one or more indicators are at least a portion of the text entered into the application and have one or more meanings. The one or more ideographs are retrieved without requiring immediate user action beyond entering the text into the application.
In another version, a mobile device is described as having a processor configured to access a database, an input device coupled to the processor, an output device coupled to the processor, a communications device coupled to the processor, and a non-transitory processor readable medium coupled to the processor. The input device is configured to receive data from a user and pass the data to the processor. The output device is configured to transmit data received from the processor to a user in a user perceivable format. The communications device is configured to transmit information from a communications network to the processor and transmit information from the processor to the communications network. The non-transitory processor readable medium stores processor executable instructions that, when executed, cause the processor to receive data indicative of text entered into an application by the input device, analyze the data for one or more indicators, retrieve one or more ideographs graphically representing one or more meanings of the one or more indicators, and display at least a portion of the text and the one or more ideographs to the user via the output device. The application is stored on the non-transitory processor readable medium. The one or more indicators are at least a portion of the text entered into the application and have one or more meanings. The one or more ideographs are retrieved without immediate user action beyond entering the text into the application.
In another embodiment, a non-transitory processor readable medium is described having processor executable instructions that, when executed by a processor, cause the processor to receive one or more items of branded visual content from one or more advertisers, monitor application data received from a user device via a communications network, analyze the application data for one or more indicators, retrieve one or more of the items of branded visual content representative of the one or more meanings of the one or more indicators, and transmit the branded visual content to the user device via the communications network. The application data is indicative of text entered into an application by a user via the user device. The one or more indicators are at least a portion of the text entered into the application and have one or more meanings.
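For illustration only, the advertising embodiment described above may be sketched in Python as follows. The advertiser name, file name, and function names are hypothetical examples, not the actual data or interfaces of the disclosed system:

```python
# Hypothetical store of branded visual content, keyed by indicator word.
# Each entry holds (advertiser, content) pairs supplied by advertisers.
BRANDED_CONTENT: dict[str, list[tuple[str, str]]] = {}

def receive_branded_content(advertiser: str, indicator: str, content: str) -> None:
    """Receive branded visual content from an advertiser for an indicator."""
    BRANDED_CONTENT.setdefault(indicator, []).append((advertiser, content))

def match_branded_content(application_data: str) -> list[tuple[str, str]]:
    """Return branded content whose indicator appears in the entered text."""
    words = {w.strip(".,!?") for w in application_data.lower().split()}
    return [pair for word in sorted(words & BRANDED_CONTENT.keys())
            for pair in BRANDED_CONTENT[word]]

receive_branded_content("Acme Coffee Co.", "coffee", "acme_latte_sticker.png")
matches = match_branded_content("Anyone want to grab coffee?")
```

Here the indicator “coffee” in the monitored text retrieves the advertiser's sticker for transmission to the user device.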
However, many modifications are possible without materially departing from the teachings of this disclosure. Accordingly, such modifications are intended to be included within the scope of this disclosure as defined in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Certain embodiments of the disclosure will hereafter be described with reference to the accompanying drawings, wherein like reference numerals denote like elements. It should be understood, however, that the accompanying figures illustrate the various implementations described herein and are not meant to limit the scope of various technologies described herein.
FIG. 1 is a schematic depiction of a word recognition and ideograph insertion system constructed in accordance with the present disclosure.
FIG. 2 is a block diagram of one embodiment of the execution of processor executable instructions stored on a non-transitory processor readable medium forming a word recognition and ideograph insertion program in accordance with the inventive concepts disclosed herein.
FIG. 3 is a block diagram of another embodiment of the execution of processor executable instructions stored on a non-transitory processor readable medium forming the word recognition and ideograph insertion software program in accordance with the inventive concepts disclosed herein.
FIGS. 4-1 through 4-8 depict exemplary screenshots generated by a user device executing one embodiment of the word recognition and ideograph insertion program in accordance with the inventive concepts disclosed herein:
FIG. 4-1 is an embodiment of a mobile device having a processor executing a word recognition and ideograph insertion program, a messaging application, and a pop-up window of the word recognition and ideograph insertion program having ideographs to be inserted into application data entered into the messaging application;
FIG. 4-2 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 4-1 with a selected ideograph inserted into the application data;
FIG. 4-3 is an embodiment of the mobile device having a processor executing another version of a word recognition and ideograph insertion program, a messaging application, and a pop-up window of the word recognition and ideograph insertion program having ideographs to be inserted into application data and an icon for selecting a position of a selected ideograph within the application data entered into the messaging application;
FIG. 4-4 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 4-3 with the selected ideograph inserted into the application data in a desired location;
FIG. 4-5 is another embodiment of a mobile device having a processor executing a word recognition and ideograph insertion program, a messaging application, and a pop-up window of the word recognition and ideograph insertion program having ideographs to be inserted into application data and an icon for selecting a position of the selected ideograph within the application data entered into the messaging application;
FIG. 4-6 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 4-5 with the selected ideograph inserted into the application data in a desired location;
FIG. 4-7 is an embodiment of a mobile device having a processor executing a word recognition and ideograph insertion program, a messaging application, and a pop-up window of the word recognition and ideograph insertion program having ideographs to be inserted into application data entered into the messaging application; and
FIG. 4-8 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 4-7 with the ideograph replacing the application data.
FIG. 5 is a block diagram of some embodiments of the execution of processor executable instructions stored on a non-transitory processor readable medium forming a word recognition and ideograph insertion program in accordance with the inventive concepts disclosed herein.
FIGS. 6-1 through 6-8 depict exemplary screenshots generated by a user device executing one embodiment of a word recognition and advertising insertion program in accordance with the inventive concepts disclosed herein:
FIG. 6-1 is an embodiment of a mobile device having a processor executing an embodiment of a word recognition and advertising insertion program, a messaging application, and a display of the word recognition and advertising insertion program having a single advertiser and single branded visual content to be inserted into application data entered into the messaging application;
FIG. 6-2 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 6-1 with a selected branded visual content from a single advertiser appended to the application data;
FIG. 6-3 is an embodiment of the mobile device having a processor executing another version of a word recognition and advertising insertion program, a messaging application, and the word recognition and advertising insertion program presenting a digital sticker icon indicative of a digital sticker set from a single advertiser;
FIG. 6-4 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 6-3 with branded visual content of the digital sticker set of the single advertiser displayed;
FIG. 6-5 is another embodiment of a mobile device having a processor executing an embodiment of a word recognition and advertising insertion program, a messaging application, and the word recognition and advertising insertion program presenting a digital sticker icon indicative of a plurality of sponsored digital sticker sets from a plurality of advertisers;
FIG. 6-6 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 6-5 with branded visual content of the plurality of sponsored digital sticker sets from the plurality of advertisers displayed;
FIG. 6-7 is an embodiment of a mobile device having a processor executing a word recognition and advertising insertion program, a messaging application, and the word recognition and advertising insertion program identifying an indicator within application data entered into the messaging application; and
FIG. 6-8 is an embodiment of the mobile device having a processor executing the messaging application of FIG. 6-7 with branded visual content graphically representative of the indicator appended to the application data without user interaction.
DETAILED DESCRIPTION
In the following description, numerous details are set forth to provide an understanding of some embodiments of the present disclosure. However, it will be understood by those of ordinary skill in the art that the system and/or methodology may be practiced without these details and that numerous variations or modifications from the described embodiments may be possible.
Unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, the words “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the inventive concept. This description should be read to include one or at least one, and the singular also includes the plural unless otherwise stated.
The terminology and phraseology used herein is for descriptive purposes and should not be construed as limiting in scope. Language such as “including,” “comprising,” “having,” “containing,” or “involving,” and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited.
Finally, as used herein, any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily referring to the same embodiment.
The inventive concepts disclosed herein are generally directed to a word recognition and ideograph insertion system, which may allow insertion of specific emojis, emoticons, digital stickers, or other media, in place of corresponding words or messages of application data as a user is typing the application data into an application in the form of chat, text, email, or the like. The words may correspond to human emotions (e.g., happy, sad, angry), commercial items (e.g., coffee, food), commercial services, or other words able to be represented by ideographs, such as digital images, emojis, emoticons, or stickers. As the words are typed, a selection of corresponding ideographs (e.g., smiley faces, frowny faces, animated faces, characters, licensed images, product icons, advertising icons) may be offered, automatically or otherwise, to the user as a replacement therefor or in addition thereto. This may enable chat, text, or email to be more interactive, more emotive, or to better carry tone and context. This may also enable communications between two users to be used as an advertising medium for businesses and as a revenue stream for message platforms, telecommunications providers, and the like from advertisers offering branded visual content for use in these communications. In one embodiment, the present disclosure may describe a computer based system configured to receive or monitor application data from one or more user devices, analyze the data for one or more indicators, retrieve one or more ideographs indicative of a graphical representation of one or more meanings of the one or more indicators, and transmit the one or more ideographs to the user device.
The word recognition and ideograph insertion system may also cause the user device to display at least a portion of the data and the one or more ideographs, including, but not limited to, one or more items of branded visual content, and provide for selection of the one or more ideographs for insertion into the application data. In another embodiment, the present disclosure may describe an ideograph insertion system capable of inserting advertising images or material into user communications and evaluating user engagement with the advertising material, the user's acceptance of or exposure to the advertising material, overall advertising campaign performance, and other metrics described below.
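For illustration only, the two insertion modes described above (an ideograph replacing a matched word, or offered in addition to it) may be sketched in Python as follows. The function name and messages are hypothetical examples, not those of the disclosed programs:

```python
def insert_ideograph(text: str, word: str, ideograph: str,
                     replace: bool = False) -> str:
    """Insert an ideograph for a matched word in the application data.

    When replace is True the ideograph takes the place of the word;
    otherwise it is inserted alongside the word.
    """
    if replace:
        return text.replace(word, ideograph)
    return text.replace(word, f"{word} {ideograph}")

message = "feeling happy today"
augmented = insert_ideograph(message, "happy", "\U0001F600")
replaced = insert_ideograph(message, "happy", "\U0001F600", replace=True)
```

In the augmented mode the word “happy” is retained with a grinning-face ideograph beside it; in the replacement mode the ideograph stands in for the word.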
Referring now to FIG. 1, one embodiment of a word recognition and ideograph or in-app advertising system 10 is shown according to the inventive concepts disclosed herein. In one embodiment, the word recognition and ideograph or in-app advertising system 10 may comprise one or more computer system 12 and one or more mobile device 14 (also referred to hereinafter as a user device). The one or more computer system 12 and the one or more mobile device 14 may communicate/interface via a communications network 16. In general terms, the one or more computer system 12 may be implemented as any desired computer system and may include a host system, one or more web server hosting one or more web site, or a stand-alone computer system, for example. The user device 14 may be any mobile or stationary computing device, such as a smartphone, a laptop, a desktop computer, a netbook, a tablet computer (such as an iPad, for example), a personal digital assistant (PDA), or other computing device, for example. The network 16 may be any desired network, such as the Internet, a local area network, a wide area network, an optical network, a wireless network, a cellular network, a public telephone network, a circuit switched network, a packet switched network, a datagram network, a virtual circuit network, a satellite network, a 3G network, a 4G network, an LTE network, a metropolitan network, a Bluetooth network, or combinations thereof, for example.
The one or more computer system 12 may comprise one or more personal computers, mainframe computers, servers, web servers, local servers, internet servers, virtual servers, distributed servers, distributed computer networks, and combinations thereof. In one embodiment, the one or more computer system 12 may have at least one processor 18 capable of executing processor executable instructions, a non-transitory processor readable medium 20 capable of storing processor executable instructions, an input device 22, an output device 24, and a communications device 26, all of which may be partially or completely network-based or cloud-based, and may not necessarily be located in a single physical location. The one or more computer system 12 may be hosted on the network 16, in some embodiments, or may be operably coupled with the network 16.
The processor 18 can be implemented as a single processor or multiple processors working together to execute processor executable instructions including the logic described herein. Exemplary embodiments of the processor 18 may include a digital signal processor (DSP), a central processing unit (CPU), a field programmable gate array (FPGA), a microprocessor, a multi-core processor, a quantum processor, an application-specific integrated circuit (ASIC), a graphics processing unit (GPU), a visual processing unit (VPU), and combinations thereof. The processor 18 is operably coupled with the non-transitory processor readable medium 20 via a path 28, which can be implemented as a data bus allowing bi-directional communication between the processor 18 and the non-transitory processor readable medium 20, for example. The processor 18 is capable of communicating with the input device 22 and with the output device 24 via paths 30 and 32, which may be one or more data busses, for example. The processor 18 may be further capable of interfacing and/or bi-directionally communicating with the user device 14 via the network 16 using the communications device 26, such as by exchanging electronic, digital, analog, and/or optical signals via one or more physical, virtual, or logical ports using any desired network protocol such as TCP/IP, for example. It is to be understood that in certain embodiments using more than one processor 18, the multiple processors may be located remotely from one another, may be located in the same location, or may comprise a unitary multi-core processor (not shown). The processor 18 is capable of reading and/or executing processor executable code stored in the one or more non-transitory processor readable medium 20 and/or of creating, manipulating, altering, and storing computer data structures in the one or more non-transitory processor readable medium 20.
The non-transitory processor readable medium 20 may store a word recognition and ideograph insertion program having processor executable instructions. The non-transitory processor readable medium 20 may be implemented as any type of memory, such as random access memory (RAM), a CD-ROM, a hard drive, a solid state drive, a flash drive, a memory card, a DVD-ROM, a floppy disk, an optical drive, and combinations thereof, for example. While the non-transitory processor readable medium 20 may be located in the same physical location as the processor 18, the non-transitory processor readable medium 20 may also be located remotely from the processor 18 and may communicate with the processor 18 via the network 16. Additionally, when more than one non-transitory processor readable medium 20 is used, one or more non-transitory processor readable medium 20 may be located in the same physical location as the processor 18, and one or more non-transitory processor readable medium 20 may be located in a remote physical location from the processor 18. The physical location of the non-transitory processor readable medium 20 can be varied, and the non-transitory processor readable medium 20 may be implemented as a “cloud memory,” i.e., one or more non-transitory processor readable medium 20 which is partially or completely based on or accessed using the network 16, for example. Further, the one or more processor 18 may not communicate directly with the non-transitory processor readable medium 20, but may communicate with another processor 18 communicating with the non-transitory processor readable medium 20 over the network 16, for example. In some exemplary embodiments, the processor 18 may include a first processor communicating with a second processor executing processor executable instructions including the word recognition and ideograph insertion program over the network 16.
The second processor 18 may be part of a computer station (not shown), or may be a part of a separate computer system or server configured to communicate with the computer system 12 over the network 16 or otherwise operably coupled with the computer system 12, for example.
The input device 22 may pass data to the processor 18, and may be implemented as a keyboard, a mouse, a touch-screen, a camera, a cellular phone, a tablet, a smart phone, a personal digital assistant (PDA), a microphone, a network adapter, and combinations thereof, for example. The input device 22 may also be implemented as a stylus, a trackball, and combinations thereof, for example. The input device 22 may be located in the same physical location as the processor 18, or may be remotely located and/or partially or completely network-based.
The output device 24 passes information from the processor 18 to a user in a user perceivable format. For example, the output device 24 can be implemented as a server, a computer monitor, a cell phone, a smartphone, a tablet, a speaker, a website, a PDA, a fax, a printer, a projector, a laptop monitor, and combinations thereof. The term “pass” as used herein may refer to either push technology, or to pull technology, and to combinations thereof. The output device 24 can be physically co-located with the processor 18, or can be located remotely from the processor 18, and may be partially or completely network based (e.g., a website). The output device 24 communicates with the processor 18. As used herein, the term “user” is not limited to a human, and may comprise a human, a computer, a host system, a smart phone, a tablet, and combinations thereof, for example.
The user device 14 may be connected or otherwise operably coupled with the network 16, and can be implemented as a smartphone, a tablet, a laptop computer, a personal computer, a desktop computer, a computer terminal, a computer workstation, an e-book reader, a wireless network-capable handheld device, a digital video recorder, a personal digital assistant, or combinations thereof, for example. Similar to the computer system 12, described above, in one embodiment, the user device 14 may be provided with one or more processor 34, one or more non-transitory processor readable medium 36, an input device 38, an output device 40, and a communications device 42. The processor 34, the one or more non-transitory processor readable medium 36, the input device 38, the output device 40, and the communications device 42 of the user device 14 may be implemented similarly to or the same as the processor 18, the one or more non-transitory processor readable medium 20, the input device 22, the output device 24, and the communications device 26, respectively. The user device 14 may be capable of interfacing with the network 16 via a wireless or wired interface. The one or more non-transitory processor readable medium of the user device may store processor executable instructions or software applications, and a web browser or a smartphone application (e.g., a native software application running on a user device and configured to communicate with a web server over a computer network) running on the user device 14 and capable of accessing a website and/or communicating information and/or data with a web server such as the computer system 12 over a network, such as the network 16.
The network 16 may be implemented as a wireless and/or wired network 16 (e.g., the world wide web or Internet, or any other computer network configured to allow bidirectional exchange of data and/or signals between computer processors), and may permit bi-directional communication of information and/or data between the computer system 12 and/or one or more user device 14 operably coupled with the network 16, for example.
The network 16 may interface with the computer system 12 and with the user device 14 operably coupled with the network 16 in a variety of ways, such as by optical, wireless, wired, and/or electronic interfaces, and may use a plurality of network topologies and protocols, such as Ethernet, TCP/IP, circuit switched paths, and combinations thereof, for example. The network 16 may use a variety of network protocols to permit bi-directional interface and communication of data and/or information between the computer system 12 and the user device 14 operably coupled with the network 16. In some embodiments, the network 16 may be secured using any desired secured networking protocol, such as a gateway server, a firewall, data encryption, public or private key cryptography infrastructure, secure socket layer protocol, hypertext transfer protocol secure, a virtual private network, a tunnel, secure shell, and any combinations thereof, for example.
The one or more computer system 12 may include processor executable instructions 44 and the user device 14 may also include processor executable instructions 46. The processor executable instructions 44 may comprise one or more computer programs having processor executable instructions written in any suitable programming language, such as C++, C#, Java, Python, Perl, binary, assembly language, or other high or low level programming language, for example. A computer program for the word recognition and ideograph or in-app advertising system 10 on the computer system 12 or the user device 14 may be implemented as processor executable instructions, software, firmware, or a combination of software and firmware, for example, and may be at least partially stored in the non-transitory processor readable medium 20 or 36, respectively. The computer system 12 or user device 14 may access and execute the processor executable instructions 44 or 46, respectively, to execute at least a portion of the functions of the word recognition and ideograph or in-app advertising system 10. For example, the computer system 12 may execute the processor executable instructions 44 for a portion of the functions of the word recognition and ideograph or in-app advertising system 10 and the one or more user device 14 may execute the processor executable instructions 46 to perform a remaining portion of the functions of the word recognition and ideograph or in-app advertising system 10. In some embodiments, the entirety of the processor executable instructions which perform the functions of the word recognition and ideograph or in-app advertising system 10 may be stored on either the non-transitory processor readable medium 20 of the computer system 12 or the non-transitory processor readable medium 36 of the user device 14.
In one embodiment, the processor executable instructions 44 include a word recognition and ideograph insertion program 44-1 and other processor executable instructions 44-2, such as an operating system and application programs such as a word processor, spreadsheet program, email program, or text communication program, for example. In some embodiments, the other processor executable instructions 44-2 may include a word recognition and advertising insertion program 44-2, as will be described in more detail below. The processor executable instructions for the word recognition and ideograph insertion program 44-1 and the other processor executable instructions 44-2 may be written in any suitable high level or low level programming language as described above. Similarly, in one embodiment, the processor executable instructions 46 stored on the user device 14 may include a word recognition and ideograph insertion program 46-1 and other processor executable instructions 46-2. The other processor executable instructions 46-2 may be implemented similarly to the other processor executable instructions 44-2. The word recognition and ideograph insertion programs 44-1 and 46-1 may work in concert or independently to perform the functions of the word recognition and ideograph or in-app advertising system 10 described below in more detail. For example, the word recognition and ideograph insertion programs 44-1 and 46-1 may be implemented as a single application, which may be stored on either the computer system 12, the user device 14, or both, such that the word recognition and ideograph insertion programs 44-1 and 46-1 perform the functions of the word recognition and ideograph or in-app advertising system 10 on either or both of the computer system 12 and the user device 14.
As an additional example, the word recognition and ideograph insertion programs 44-1 and 46-1 may be implemented as separate applications, where the word recognition and ideograph insertion program 44-1 is stored on the computer system 12 and the word recognition and ideograph insertion program 46-1 is stored on the user device 14. Similarly, the word recognition and advertising insertion program 44-2 may work in concert with and/or independently from the word recognition and ideograph insertion programs 44-1 and 46-1, as well as in concert with and/or independently from the other processor executable instructions 46-2. The word recognition and advertising insertion program 44-2 may also be implemented, similarly to the word recognition and ideograph insertion program 44-1, as a single application with the word recognition and ideograph insertion programs 44-1 and/or 46-1, such that the word recognition and advertising insertion program 44-2 may perform the functions of another embodiment of a word recognition and ideograph or in-app advertising system 10-1, as will be described in more detail below.
In one embodiment, the computer system 12 and the user device 14 may further include or have access to a database 48 having a plurality of indicators and a plurality of ideographs linked to form relationships between certain of the plurality of indicators and certain of the plurality of ideographs, as will be explained in more detail below. The database 48 may be implemented as any desired non-transitory processor readable medium configured to store processor executable instructions and/or data, for example. The database 48 may be operably coupled with the processor 18 and/or the processor 34 (e.g., directly or via the network 16) so that the database 48 and the processor 18 or 34 may exchange data and/or signals bi-directionally with one another. It is to be understood that in some exemplary embodiments, the database 48 may be implemented as a part of the non-transitory processor readable medium 20 where implemented within the computer system 12, or as part of the non-transitory processor readable medium 36 where implemented within the user device 14. In some other embodiments, the database 48 may be implemented and located at a separate physical location from the word recognition and ideograph or in-app advertising system 10 and may be operably coupled to the processor 18 or 34 via the network 16.
Referring now to FIG. 2, therein shown is a block diagram of one embodiment of the word recognition and ideograph insertion program 44-1. As previously discussed, at least a portion of the word recognition and ideograph insertion program 44-1 may be stored on the one or more non-transitory processor readable medium 20 of the computer system 12. In one embodiment, when executed, the word recognition and ideograph insertion program 44-1 may cause the one or more processor 18 to monitor application data 50 from the user device 14 via the communications network 16, as indicated by block 52. The word recognition and ideograph insertion program 44-1 may further cause the processor 18 to analyze the application data for one or more indicator 54 within the application data 50, as indicated by block 56. The processor 18 may then retrieve one or more ideograph 58 indicative of a graphical representation of the one or more meanings of the one or more indicator 54, as indicated by block 60. The processor 18 may then transmit the one or more ideograph 58 to the user device 14 via the communications network 16, as indicated by block 62.
The application data 50 may be indicative of text entered into an application by a user via the user device 14. For example, the application data 50 may be a text string entered into a smartphone in a text messaging application or email application stored on the non-transitory processor readable medium 36 of the user device 14. The one or more indicator 54, identified within the application data 50 by the processor 18, may be at least a portion of the text entered into the application and may have one or more meanings. In some embodiments, the one or more meanings may be an emotional denotation or connotation. In some embodiments, the processor 18 may determine the one or more meanings of the one or more indicator 54 from the database 48, from the relationships between the plurality of indicators and the plurality of ideographs stored on the database 48. The processor 18 may also determine meaning from internal or external dictionaries or other suitable methods.
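Purely as a non-limiting illustration of the indicator-to-ideograph relationship described above, the lookup might be sketched as follows. The table names, entries, and function are hypothetical and are not part of the disclosed embodiments, which may store the relationships in any suitable database.

```python
# Illustrative sketch only: indicators map to meanings, and ideographs
# are stored with the meanings they represent.
INDICATOR_MEANINGS = {
    "coffee": ["coffee"],
    "smile": ["happy", "pleased", "grin"],
}

IDEOGRAPHS = {
    "coffee_cup.png": {"coffee"},
    "grinning_face.png": {"happy", "grin"},
}

def lookup_ideographs(indicator: str) -> list[str]:
    """Return every stored ideograph whose meanings overlap the indicator's."""
    meanings = set(INDICATOR_MEANINGS.get(indicator.lower(), []))
    return [name for name, tags in IDEOGRAPHS.items() if tags & meanings]
```

For example, `lookup_ideographs("smile")` would return the grinning-face graphic because its stored meanings overlap the meanings of the indicator, while an unrecognized word returns nothing.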
The one or more ideograph 58 may be a pictorial representation of the one or more indicator 54 or the one or more meaning of the one or more indicator 54 for which the one or more ideograph 58 is indicative. In some embodiments, the one or more ideograph 58 is a non-textual graphic, such as an emoticon, emoji, sticker, image, video, series of images, .gif, animations, sounds, sound recordings, video with sound, other media, or combination thereof, for example. In some embodiments, the one or more ideograph 58 may include one or more branded visual content such as product icons, licensed images, advertising icons, slogans, trademarks, and/or other images, emoticons, emoji, stickers, video, .gif, animations, sounds, sound recordings, video with sound, jingles, other media, and/or combinations thereof, for example. However, it should be understood that the one or more ideograph 58 may be any ideograph conveying at least one of the one or more meanings of the one or more indicator 54 and/or providing context or tone clues to the application data 50. For example, when the one or more indicator 54 in the application data 50 is the textual word "coffee," the one or more ideograph 58 may be presented as a coffee cup, a mug with coffee, a coffee bean, or other similar images indicative of the word and/or contextual meaning of coffee within the application data 50. It should also be understood that, in addition to general meanings, the one or more indicator 54 may be replaced by branded visual content where the one or more indicator 54 specifically references, refers to, or implicates the content. For example, where the one or more indicator 54 is the name of a company, a company's product, or other similar word, the one or more ideograph 58 acting as a pictorial representation of that one or more indicator 54 may be a graphic of a trademark for the company or company product.
The one or more ideograph 58 may be retrieved from the database 48, from the communications network 16, from the one or more non-transitory processor readable medium 36 of the user device 14, from the one or more non-transitory processor readable medium 20 of the computer system 12, from combinations thereof, or from any other suitable source of ideographs accessible to the computer system 12. The one or more ideograph 58 may be retrieved without immediate user action beyond entering the text into the application. For example, upon entry of the application data 50 into the application, the processor 18 may analyze the application data 50 for the one or more indicator 54 and retrieve one or more ideograph 58 indicative of the meaning of the one or more indicator 54 automatically in response to entry of the one or more indicator 54 into the application. By way of another example, the processor 18 may be prompted to retrieve one or more ideograph 58 for the one or more indicator 54 entered into the application where the application determines that the one or more indicator 54 has one or more meanings representable by a graphic. The term "immediate" as used herein refers to a time period in which the user is entering the application data 50 into the application. Non-immediate user action may be used to set up or enable the word recognition and ideograph insertion program 44-1, configure custom associations between the one or more ideograph 58 and the one or more indicator 54, or initiate the application into which the application data 50 is entered, for example.
Referring now to FIG. 3 and generally to FIGS. 4-1-4-8, therein shown is another representation of an embodiment of the word recognition and ideograph insertion program 46-1. When executed, the word recognition and ideograph insertion program 46-1 may cause the processor 34 to receive data 70 indicative of text 72 entered into an application 74 by the input device 38, as indicated by block 76. The text 72 may be a text stream entered into the application 74. The application 74 may be stored on the one or more non-transitory processor readable medium 36. The processor 34 may then analyze the data 70 for one or more indicator 78, as indicated by block 80. The processor 34 may then retrieve one or more ideograph 82 indicative of a graphical representation of the one or more meanings of the one or more indicator 78, as indicated by block 84. The processor 34 may then display at least a portion of the text 72 and the one or more ideograph 82 to the user in the application 74, as indicated by block 86.
The one or more indicator 78 may be at least a portion of the text 72 entered into the application 74. The one or more indicator 78 may have one or more meanings, which in some embodiments may be one or more emotional denotations or connotations such as happy, sad, angry, or the like, for example. In some embodiments, the processor 34 may determine the one or more meanings of the one or more indicator 78 from the database 48, from the relationships between the plurality of indicators and the plurality of ideographs stored on the database 48. The processor 34 may also determine the one or more meanings from internal or external dictionaries, or other suitable methods such as a data table forming links from particular words or phrases to particular ideographs, for example.
As previously discussed, the one or more ideograph 82 may be a pictorial representation of the one or more indicator 78 or the one or more meaning of the one or more indicator 78 and may be associated with the one or more indicator 78 and one or more meaning by the relational tags. In some embodiments, the one or more ideograph 82 is a non-textual graphic, such as an emoticon, emoji, sticker, image, video, series of images, .gif, other media, or combination thereof, for example. In some embodiments, the one or more ideograph 82 may include one or more branded visual content such as product icons, licensed images, advertising icons, slogans, trademarks, and/or other images, emoticons, emoji, stickers, video, .gif, other media, and/or combinations thereof, for example. However, it should be understood that the one or more ideograph 82 may be any media conveying at least one of the one or more meanings of the one or more indicator 78 and/or providing context or tone clues to the data 70.
In analyzing the data 70, the processor 34 may pass the data 70 to an application programming interface (API), a software development kit (SDK), or other interface. The interface may be included in the database 48, stored on the non-transitory processor readable medium 20, stored on the non-transitory processor readable medium 36, or accessible via the communications network 16. The interface may search for the one or more indicator 78, for example by searching the data 70 for keywords such as the one or more indicator 78. Once one or more indicator 78 has been found within the data 70, the interface may further search for any corresponding words or emotions related to the one or more indicator 78 located within the data 70 which may be suitable for graphical representation by the one or more ideograph. For example, if a user inputs a first indicator 78-1, "smile", the first indicator 78-1 would be passed to the interface, and associated indicators 78-2 having a relationship to the first indicator 78-1, such as "happy," "pleased," "grin," and "delighted," would also be collected. The processor 34 may then search for the one or more corresponding ideograph 82 based on one or more relational tags associated with the one or more ideograph 82 and indicative of a relationship between certain of the one or more indicator 78 and the one or more ideograph 82.
The one or more relational tags may be words associated with the one or more ideograph 82 within the database 48, the non-transitory processor readable medium 20, or the non-transitory processor readable medium 36 where the one or more ideograph 82 are stored. For example, where the one or more ideograph 82 is a heart, the relational tags may include love, like, heart, affection, amity, ardor, cherish, crush, lust, passion, and other words related to or depicted by the heart. It should be understood that the word recognition and ideograph insertion program 46-1 may be able to weight certain factors in analyzing and selecting the one or more ideograph 82 to be displayed. For example, the word recognition and ideograph insertion program 46-1 may weight tense, context, word placement, alternative spellings, and the like to appropriately select the one or more ideograph 82. Further, the word recognition and ideograph insertion program 46-1 may enable the user to add additional relational tags, remove relational tags, or include additional ideograph(s) to existing relational tags. In certain embodiments, the relational tags may also include a brand, a character, a trademark, onomatopoeia, or other suitable words or phrases. In this case, the user may enter a desired relational tag, such as the name of an athlete or a sports franchise, into the application 74 and receive one or more ideograph 82 indicative of the desired relational tag. In this embodiment, the relational tags may be used similarly to an in-application image search function to automatically populate messages, email, and the like with multimedia content without leaving the application to search for the content.
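Purely as a non-limiting illustration, a relational-tag search with weighting might be sketched as a scored match over stored tags. The tag table, numeric weights, and function below are invented for the sketch; the disclosed embodiments may weight tense, context, and other factors in any suitable manner.

```python
# Illustrative sketch only: each ideograph carries weighted relational tags.
RELATIONAL_TAGS = {
    "heart.png": {"love": 1.0, "like": 0.6, "affection": 0.8, "crush": 0.5},
    "thumbs_up.png": {"like": 1.0, "approve": 0.9},
}

def rank_ideographs(indicators: list[str]) -> list[tuple[str, float]]:
    """Score each ideograph by the summed weight of matching relational tags,
    returning the best matches first."""
    scores = {}
    for name, tags in RELATIONAL_TAGS.items():
        score = sum(tags.get(word.lower(), 0.0) for word in indicators)
        if score > 0:
            scores[name] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With this sketch, the indicator "like" matches both graphics, and the thumbs-up is ranked first because its tag weight is higher.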
The one or more ideograph 82 may be retrieved without immediate user action beyond entering the text 72 into the application 74, as discussed above. For example, upon entry of the data 70 into the application, the processor 34 may analyze the data 70 for the one or more indicator 78 and retrieve one or more ideograph 82 indicative of the meaning of the one or more indicator 78 automatically in response to entry of the one or more indicator 78 into the application 74. By way of another example, in some embodiments, where the processor 34 determines that the one or more indicator 78 has one or more meanings representable by a graphic, the processor 34 may prompt the user to select the one or more ideograph 82 for the one or more indicator 78 once multiple ideographs 82 have been retrieved, with the selected ideograph 82 then entered into the application 74. In one embodiment, the processor 34 may retrieve the one or more ideograph 82 from the database 48, where the one or more indicator 78 is included in the plurality of indicators within the database 48 and the one or more ideograph 82 is included in the plurality of ideographs 82. Retrieving the one or more ideograph 82 from the database 48 may be performed, as described above, where the processor 34 or the interface searches the database 48 for the one or more ideograph 82 linked to the one or more indicator 78 by the relational tags and retrieves the one or more ideograph 82 resulting from the search of the database 48. The one or more ideograph 82 may also be retrieved via the communications network 16, from the one or more non-transitory processor readable medium 36 of the user device 14, from the one or more non-transitory processor readable medium 20 of the computer system 12, from combinations thereof, or from any other suitable source of media accessible to the user device 14.
Referring now to FIGS. 4-1-4-8, the processor 34 may display the one or more ideograph 82 within the application 74, in a pop-up window 92 (as shown in FIGS. 4-1, 4-3, 4-5, and 4-7) overlaying the application 74, in a separate application, or by any other suitable method. In the embodiment shown in FIGS. 4-1, 4-3, and 4-5, a plurality of ideographs 82 have been retrieved for the indicator 78 "love". The plurality of ideographs 82 is displayed in the pop-up window 92 overlaying at least a portion of a displayed portion of the application 74. The user may close the pop-up window 92 by selecting an icon 94, shown as an "X". In other embodiments, the user may close the pop-up window 92 by touching, clicking with a mouse or trackball, continuing to type in the application, or otherwise selecting or indicating another area of the display of the application 74. In still other embodiments, the user may close the pop-up window 92 via a keyboard, mouse, or other input device, as described in more detail above in relation to the input device 22.
In some embodiments, after displaying the one or more ideograph 82, the processor 34 may enable selection of the one or more ideograph 82 via the input device 38. Selection of the one or more ideograph 82 may then insert the one or more ideograph 82 into the application data 70, for example the text 72 or the text stream. In some embodiments, as shown in FIGS. 4-1-4-2 and 4-7-4-8, insertion of the one or more ideograph 82 into the text stream may replace the one or more indicator 78 with the one or more ideograph 82 within the text 72 or text stream. In other embodiments, as shown in FIGS. 4-3-4-4, insertion of the one or more ideograph 82 into the text stream may insert the one or more ideograph 82 into the text stream in sequence after the one or more indicator 78. In this embodiment, the user may be presented with the option to select an insertion icon 96 indicating desired placement of the one or more ideograph 82 after the one or more indicator 78. In yet another embodiment, shown in FIGS. 4-5-4-6, inserting the one or more ideograph 82 may append the one or more ideograph 82 at the end of the text stream. In this embodiment, the insertion icon 96 may be selected in an alternative manner to the above embodiment to indicate a desired position at the end of the text stream for the one or more ideograph 82. For example, the user may select the shift key 98, causing the insertion icon 96 to be highlighted to indicate that the one or more ideograph 82, when selected, will be appended to the end of the text stream.
In some embodiments, the processor 34 may enable insertion of the one or more ideograph 82 by multiple of the methods discussed above and vary the method of insertion of the one or more ideograph 82 based on input from the user via the input device 38. For example, in some embodiments, the one or more ideograph 82 may replace the one or more indicator 78 via a gesture on a touch screen, a double click on the one or more indicator 78, or a key press. In other embodiments, the one or more ideograph 82 may be inserted after the one or more indicator 78 or appended to the end of the text stream via user input, such as one or more gestures on a touch screen, one or more key strokes on a keyboard, or interaction with the insertion icon shown in FIGS. 4-3 and 4-5. For example, the user may be presented with the pop-up window 92 having the insertion icon 96. When the user selects one of the one or more ideograph 82 presented in the pop-up window 92 without selecting the insertion icon 96, the processor 34 may replace the one or more indicator 78 with the selected one or more ideograph 82. When the user selects the insertion icon 96, the processor 34 may insert the selected one or more ideograph 82 after the one or more indicator 78, as described above. Finally, when the user selects the insertion icon 96 in an alternative manner, for example by a double tap or additional selection of the shift key 98, the processor 34 may append the selected one or more ideograph 82 to the end of the text stream.
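Purely as a non-limiting illustration, the three insertion modes described above (replacing the indicator, inserting after it, and appending to the end of the text stream) can be sketched as simple string operations. The mode names and function are hypothetical, not the specification's actual implementation.

```python
def insert_ideograph(text: str, indicator: str, ideograph: str, mode: str) -> str:
    """Insert an ideograph into a text stream using one of three modes."""
    if mode == "replace":           # ideograph replaces the indicator
        return text.replace(indicator, ideograph, 1)
    if mode == "after":             # ideograph follows the indicator in sequence
        return text.replace(indicator, indicator + " " + ideograph, 1)
    if mode == "append":            # ideograph is appended to the end of the stream
        return text + " " + ideograph
    raise ValueError(f"unknown insertion mode: {mode}")
```

In this sketch, selecting an ideograph without the insertion icon would correspond to the "replace" mode, selecting the insertion icon to "after", and the alternative selection (e.g., with the shift key) to "append".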
As shown in FIGS. 4-7 and 4-8, and similar to the ideograph insertion described above, where the one or more ideograph 82 is a sticker, animation, or other media not traditionally included inline in text streams, the processor 34 may, upon selection by the user, insert the sticker, animation, or other media into the conversation without inserting the one or more ideograph 82 into the application data 70 currently being entered into the application. For example, as shown, the processor 34 may display the sticker and replace the application data 70 with the sticker to be transmitted to another user device, thereby inserting the sticker into an ongoing conversation in the application 74 between the user device 14 and another user device.
Referring now to FIG. 5 and FIGS. 6-1-6-8, shown in FIG. 5 is a block diagram of some embodiments of the word recognition and advertising insertion program 44-2. Similar to the word recognition and ideograph insertion program 44-1 and the word recognition and ideograph insertion program 46-1, the word recognition and advertising insertion program 44-2 may be implemented in whole or in part on the non-transitory processor readable medium 20 of the computer system 12 or in whole or in part on the non-transitory processor readable medium 36 of the user device 14. In one embodiment, when executed, the word recognition and advertising insertion program 44-2 may cause the one or more processor 18 to receive one or more branded visual content 100 from one or more advertiser 102, as indicated by block 104. The word recognition and advertising insertion program 44-2 may further cause the processor 18 to monitor application data 106 from the user device 14 via the communications network 16, as indicated by block 108. The processor 18 may then analyze the application data 106 for one or more indicator 110 within the application data 106, as indicated by block 112. The processor 18 may then select and retrieve one or more selected branded visual content 114, of the one or more branded visual content 100, indicative of a graphical representation of the one or more indicator 110 or one or more meaning of the one or more indicator 110, as indicated by block 116. The processor 18 may then transmit the one or more selected branded visual content 114 to the user device 14 via the communications network 16, as indicated by block 118.
The one or more branded visual content 100 may be advertising branded iconic visual content such as licensed images, product icons, advertising icons, slogans, trademarks, and/or other branded content, for example. In some embodiments, for example, the one or more branded visual content 100 may be representative of a word, meaning, company, product, idea, slogan, political candidate, or the like which an entity may wish to advertise or disseminate to the public. Further, the branded visual content 100 may be implemented as emoticons, emoji, stickers, video, .gif, other media, and/or combinations thereof. For example, as shown in FIGS. 6-1-6-8, the branded visual content 100 may be digital stickers containing trademarked images, names, and phrases.
The one or more advertiser 102 may be organizations, companies, candidates seeking political office, non-profit organizations, partnerships, individuals, governmental entities, and/or other groups or individuals seeking to bring consumer product messages and other messages into social media and mobile apps in a seemingly natural and user-friendly manner. For example, advertisers may be individuals or companies seeking to reach users in an engaging and cooperative way so that their brand or product stands out or becomes connected to a given activity, idea, or conveyed message.
As indicated by block 104, the word recognition and ideograph or in-app advertising system 10-1, and thereby the word recognition and advertising insertion program 44-2, receives the one or more branded visual content 100 from the one or more advertiser 102. The one or more advertiser 102 may provide the word recognition and ideograph or in-app advertising system 10-1 with the one or more branded visual content 100 by signing up for the word recognition and ideograph or in-app advertising system 10-1 through a subscription platform, for example one hosted by or stored on the non-transitory processor readable medium 20 of the computer system 12.
In one embodiment, the one or more advertiser 102 may register with the word recognition and ideograph or in-app advertising system 10-1 via a website or other registration function or mechanism in order to provide the one or more branded visual content 100 to the word recognition and ideograph or in-app advertising system 10-1. After registration with the word recognition and ideograph or in-app advertising system 10-1, the one or more advertiser 102 may provide the word recognition and ideograph or in-app advertising system 10-1 with a list of targeted keywords, phrases, and/or targeted uses, for example through a website or other registration function or mechanism. The one or more advertiser 102 may then provide the one or more branded visual content 100 or allow the word recognition and ideograph or in-app advertising system 10-1 to provide the one or more branded visual content 100. In some embodiments, the word recognition and ideograph or in-app advertising system 10-1 may provide the one or more branded visual content 100 for a fee after an agreement has been reached by the one or more advertiser 102 and a representative of the word recognition and ideograph or in-app advertising system 10-1. In some embodiments, so long as the one or more advertiser 102 is registered with the word recognition and ideograph or in-app advertising system 10-1, the one or more branded visual content 100 of the one or more advertiser 102 may continue to be selected and transmitted to the user devices 14 through selected messaging platforms or programs stored on the user device 14. In some embodiments, the selected messaging platforms or programs may have opted in or made an agreement to participate in the word recognition and ideograph or in-app advertising system 10-1 to provide users of user devices 14 with the one or more branded visual content 100.
The one or more advertiser 102, in registering with the word recognition and ideograph or in-app advertising system 10-1, may enter into an advertising purchasing agreement for the word recognition and ideograph or in-app advertising system 10-1 based on a bidding function, an exclusive advertising position, or another advertisement purchasing agreement. For example, as will be explained in more detail below, the one or more branded visual content 100 may be presented to the user as a single image or sticker, or may be presented to the user in a group of images or stickers. In this embodiment, when the one or more advertiser 102 pays for an exclusive advertising position, the one or more branded visual content 100 of the one or more advertiser 102 may be presented as the single image or sticker. Where the one or more advertiser 102 pays for non-exclusive presentation, such as through a bidding function, the one or more branded visual content 100 of the one or more advertiser 102 may be presented within a group of images or stickers, with the placement of the one or more branded visual content 100 in the group based on a bid placed by the one or more advertiser 102; the highest bid places the one or more branded visual content 100 as the first choice of the group of images presented to the user.
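Purely as a non-limiting illustration, the placement rule described above (an exclusive position yields a single sticker; otherwise the group is ordered by bid, highest first) might be sketched as follows. The record shapes and function name are assumptions for the sketch only.

```python
def order_branded_content(entries: list[dict]) -> list[str]:
    """Return content names in presentation order: a lone exclusive entry is
    presented as the single image or sticker; otherwise all entries are
    sorted by bid, highest bid first."""
    exclusive = [e for e in entries if e.get("exclusive")]
    if exclusive:
        return [exclusive[0]["name"]]          # single image or sticker
    ranked = sorted(entries, key=lambda e: e["bid"], reverse=True)
    return [e["name"] for e in ranked]         # group, highest bid first
```

Under this sketch, an advertiser who paid for an exclusive position suppresses the group entirely, while competing bidders are simply ranked by their bids.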
After registering with the word recognition and ideograph or in-app advertising system 10-1, the one or more advertiser 102 may be required to pay for display and insertion of the one or more branded visual content 100, as referenced above. The word recognition and ideograph or in-app advertising system 10-1 may include an accounting module which is configured to read the data, receive and process credit card information and other forms of payment, send invoices and bills, and perform other accounting and billing functions. In some embodiments, when a messaging platform implements the functionality of the word recognition and advertising insertion program 44-2, the one or more advertiser 102 may be charged a one-time advertising/sponsorship fee. In this embodiment, the messaging platform may receive the one-time fee from each of the one or more advertiser 102 using the word recognition and ideograph or in-app advertising system 10-1 and sponsoring branded visual content 100. The one or more advertisers 102, in some embodiments, may also be charged a processing fee in relation to continued registration and/or use of the word recognition and ideograph or in-app advertising system 10-1 and/or access to the messaging platforms using the word recognition and ideograph or in-app advertising system 10-1. The one or more advertiser 102 may also be charged based on a cost per use, cost per thousand, or other advertising billing method, by the messaging platform and/or the word recognition and ideograph or in-app advertising system 10-1. For example, the one or more advertiser 102 may be charged, under the cost per use model, each time a user selects one of the one or more branded visual content 100 associated with that advertiser 102 and transmits that branded visual content 100 to another user.
Under the cost per thousand model, the one or more advertiser 102 may be charged based on the number of times the one or more branded visual content 100 associated with that advertiser 102 is displayed to the user, for example a fee for each thousand times that one or any of the one or more branded visual content 100 is displayed. In some embodiments, the revenue generated from the one or more advertiser 102 may be divided between the word recognition and ideograph or in-app advertising system 10-1 and the messaging platform implementing the word recognition and ideograph or in-app advertising system 10-1.
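Purely as a non-limiting illustration, the two billing models described above reduce to simple arithmetic; the rates used here are invented placeholders, not figures from the disclosure.

```python
def advertiser_charge(model: str, events: int, rate: float) -> float:
    """Compute an advertiser's charge under the billing models described above."""
    if model == "cost_per_use":        # charged each time content is selected and sent
        return events * rate
    if model == "cost_per_thousand":   # charged per 1,000 displays (CPM)
        return (events / 1000) * rate
    raise ValueError(f"unknown billing model: {model}")
```

For instance, 5,000 displays at a hypothetical $2.00-per-thousand rate would yield a $10.00 charge, while each selection-and-send event under cost per use is billed at the per-use rate.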
The processor 18 may monitor the application data 106 from the user device 14 in a manner similar to that described above with reference to the application data 50. Further, the processor 18 may analyze the application data 106 for the one or more indicator 110 in a manner similar to that described in regard to the application data 50 and the one or more indicator 54. In some embodiments, the messaging platform used by the user device 14 for the application data 106 may utilize an API and/or SDK of the word recognition and ideograph or in-app advertising system 10-1. As such, the word recognition and advertising insertion program 44-2 may filter the application data 106 to decipher and identify the one or more indicator 110, such as keywords, key phrases, contextual meanings, and other indicators 110. As shown in FIGS. 6-1, 6-3, 6-5, and 6-7, in some embodiments, the processor 18 may highlight the one or more indicator 110 within the application data 106 to indicate to the user that the one or more indicator 110 has one or more branded visual content 100 which may graphically represent the one or more indicator 110.
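Purely as a non-limiting illustration, identifying and highlighting indicators within a stream of application data might look like the following; the keyword set and the bracket markup convention are assumptions for the sketch (an actual implementation might style the word in the user interface instead).

```python
import re

# Hypothetical keyword set registered by advertisers.
KEYWORDS = {"coffee", "pizza"}

def highlight_indicators(text: str) -> str:
    """Wrap any recognized indicator in [[...]] to signal to the user that
    branded visual content is available for it."""
    def mark(match: re.Match) -> str:
        word = match.group(0)
        return f"[[{word}]]" if word.lower() in KEYWORDS else word
    return re.sub(r"[A-Za-z']+", mark, text)
```

For example, the message "Want some coffee later?" would be returned with only the word "coffee" marked, mirroring the in-line highlighting shown in the figures.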
The processor 18 may select and retrieve the one or more selected branded visual content 114 of the one or more branded visual content 100. The one or more selected branded visual content 114 may be indicative of a graphical representation of the one or more indicator 110 or one or more meaning of the one or more indicator 110 as described above. In some embodiments, the processor 18 may select the one or more selected branded visual content 114 with selection criteria based, in part, on the agreement under which the one or more advertiser 102 uses the word recognition and ideograph or in-app advertising system 10-1. As referenced above, where the messaging platform on the user device 14 implements the API and/or SDK of the word recognition and ideograph or in-app advertising system 10-1, the word recognition and advertising insertion program 44-2 may automatically search the one or more branded visual content 100 of the one or more advertisers 102 stored in the word recognition and ideograph or in-app advertising system 10-1, for example stored on the non-transitory processor readable medium 20. By way of illustration, if a match between the one or more indicator 110, such as user-generated keywords, and keywords associated with one or more of the branded visual content 100 is found, the processor 18 may select the branded visual content 100 associated with those keywords and retrieve that branded visual content 100 for distribution to the user device 14. For example, when a user types the word "coffee" into the application data 106, as shown in FIGS. 6-1, 6-3, 6-5, and 6-7, the processor 18, running the word recognition and advertising insertion program 44-2, may identify "coffee" as the one or more indicator 110 and return selected branded visual content 114 including graphics of a cup of coffee branded with the STARBUCKS® logo and a cup of coffee branded with the DUNKIN DONUTS® logo, as shown in FIG. 6-6. In some embodiments, as shown in FIGS. 6-4 and 6-6, as will be discussed in more detail below, where the one or more indicator 110 are related to a plurality of the branded visual content 100, the user device 14 may receive a set of choices, including all or some of the related plurality of selected branded visual content 114. The user may then select a desired one or more of the one or more selected branded visual content 114 for insertion into the application data 106.
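The keyword match described above can be sketched in a few lines. This is a hypothetical illustration only, not the disclosed implementation; the dictionary layout, content identifiers, and function name are assumptions introduced for the example.

```python
# Hypothetical sketch of matching user-generated keywords (the one or
# more indicator 110) against keywords that advertisers have associated
# with their branded visual content 100. All names are illustrative.
BRANDED_CONTENT = {
    "starbucks_cup": {"advertiser": "Starbucks", "keywords": {"coffee", "latte"}},
    "dunkin_cup": {"advertiser": "Dunkin", "keywords": {"coffee", "donut"}},
}

def find_matching_content(message, content=BRANDED_CONTENT):
    """Return the IDs of branded visual content whose keywords appear
    in the user's message text."""
    words = set(message.lower().split())
    return [cid for cid, entry in content.items() if entry["keywords"] & words]

# Typing "coffee" matches both advertisers' branded coffee-cup graphics,
# so the user would be offered a set of choices.
matches = find_matching_content("Want to grab coffee later?")
```

When more than one entry matches, as here, the result corresponds to the set of choices delivered to the user device 14.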
In some embodiments, with regard to presenting the selected branded visual content 114 to the user on the user device 14, as shown in FIGS. 6-1 and 6-2, the one or more selected branded visual content 114 may be delivered in a single digital sticker implementation 120. In these embodiments, the digital sticker, a single selected branded visual content 122, may be displayed on the user device 14 peeking out from behind a chat bar as a selectable icon, such that a portion of the digital sticker may be obscured by the chat bar. In other embodiments, a plurality of selected branded visual content 114 may be presented in a group of selectable icons, a portion of which may be hidden behind the chat bar, similar to that shown in FIG. 4-1. In some embodiments, as previously described, where the plurality of selected branded visual content 114 is presented as peeking from the chat bar, the selected branded visual content 114 associated with the advertiser 102 with the highest bid may be placed first in the group. In the embodiments with the single digital sticker implementation 120, the word recognition and advertising insertion program 44-2 may enable the user to select one of the selected branded visual content 122, at which time the selected branded visual content 122 may be inserted into the user's application data 106, such as a text message conversation. In some embodiments, as shown in FIG. 6-2, when the user taps the selected branded visual content 122, the selected branded visual content 122 may be appended to the user's application data 106, appearing after or below the application data 106. The user may also swipe the one or more selected branded visual content 114, removing it from the output device 40, such as a display, and thereby rejecting the one or more selected branded visual content 114. In some embodiments, the user may use other gestures or methods, such as an "x" icon on the sticker, to reject the one or more selected branded visual content 114.
In some embodiments, the one or more selected branded visual content 114 may be delivered in a digital sticker set implementation 124, as shown in FIGS. 6-3 and 6-4. In these embodiments, the word recognition and advertising insertion program 44-2 may present the one or more branded visual content 114 as one or more digital sticker icon 126 appearing proximate to the chat bar, without a portion hidden behind the chat bar. The digital sticker icon 126 may be the selected branded visual content 114, or may be a different icon indicating the presence of an option to select branded visual content 100 that can be enabled upon user selection. The presentation of the one or more digital sticker icon 126 may inform the user that there are branded visual content 114 selections available, from a sponsored digital sticker set 128, matching and/or representing one or more of the indicators 110. A single digital sticker icon 126 may be shown for each sponsored digital sticker set 128. Upon selection of the digital sticker icon 126, in some embodiments, the output 40 of the user device 14 may display a popover view showing the entire digital sticker set 128, all of the branded visual content 100, from the advertiser 102 to which the digital sticker set 128 is ascribed. In some embodiments, the digital sticker set 128 may be shown within the messaging platform proximate to the application data 106, or in any other suitable manner. For example, as shown in FIG. 6-4, a plurality of branded visual content 100-1-100-8 may be displayed within a chat body within the messaging platform on the user device 14, where each of the plurality of branded visual content 100-1-100-8 is a STARBUCKS® branded icon. The user may then choose one or more of the branded visual content 100, without regard to its specific selection by the word recognition and advertising insertion program 44-2. Selecting one of the one or more branded visual content 100 from the digital sticker set 128 may insert the selected branded visual content 114 into the application data 106.
If multiple digital sticker sets 128 from a plurality of the one or more advertisers 102 are representative of the one or more indicator 110 of the application data 106, the advertiser 102 with the highest bid may receive the highest priority for insertion, with that advertiser's 102 digital sticker set 128 displayed first.
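The highest-bid-first ordering described above amounts to a simple sort over competing sticker sets. The following is a minimal sketch; the bid amounts and record structure are assumptions for illustration, not disclosed details of the system.

```python
# Illustrative: when several advertisers' sticker sets match the same
# indicator, order them so the highest bidder's set is displayed first.
sticker_sets = [
    {"advertiser": "AdvertiserA", "bid": 0.50},  # assumed bid, dollars
    {"advertiser": "AdvertiserB", "bid": 1.25},
    {"advertiser": "AdvertiserC", "bid": 0.75},
]

# Sort descending by bid; the first element gets the prominent position.
ordered = sorted(sticker_sets, key=lambda s: s["bid"], reverse=True)
display_order = [s["advertiser"] for s in ordered]
```

With these sample bids, AdvertiserB's set would appear first, followed by AdvertiserC and AdvertiserA.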
In some embodiments, the one or more selected branded visual content 114 may be delivered in a multiple advertiser implementation 130, as shown in FIGS. 6-5 and 6-6. Similar to the digital sticker set implementation 124, the multiple advertiser implementation 130 may enable the processor 18 to identify the one or more indicator 110, keywords and/or phrases, within the application data 106 which relate to the branded visual content 100 in a plurality of sponsored digital sticker sets 132 for a plurality of advertisers 102. The word recognition and advertising insertion program 44-2 may display to the user a single digital sticker icon 134 representative of the digital sticker sets 132 with branded visual content 100 relating to the one or more indicator 110. In some embodiments, the plurality of advertisers 102, the digital sticker sets 132, and/or the branded visual content 100 within the digital sticker sets 132 may be ordered based on bidding of the plurality of advertisers 102 or other methods. In embodiments where the order of the digital sticker sets 132 is based on bidding, the digital sticker sets 132 may be organized such that the digital sticker set 132 of a highest bidding advertiser 102 of the plurality of advertisers 102 appears first or in an otherwise prominent location. Upon selection of the digital sticker icon 134, the word recognition and advertising insertion program 44-2 may cause the user device 14 to display a menu with all of the digital sticker sets 132, with branded visual content 100 related to the one or more indicator 110, from the plurality of advertisers 102. The user may then select the branded visual content 100 for insertion into the application data 106 from the collected portfolio of the digital sticker sets 132 displayed. In some embodiments, the advertiser 102 associated with the branded visual content 100, selected from the portfolio of digital sticker sets 132, may be billed at a cost per use rate.
The other advertisers 102 whose branded visual content 100 was displayed within the portfolio of digital sticker sets may be billed at a cost per thousand rate. In this manner, a monetization model may apply a premium charge for branded visual content 100 delivered to and ultimately selected by the user. In this embodiment, a conventional charge, cost per thousand, may be charged to advertisers 102 based on the rate at which the branded visual content 100 of the advertisers 102 is displayed for the user, but not selected.
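The two-tier monetization model above, premium cost-per-use for the selected sticker and conventional cost-per-thousand for stickers merely displayed, can be sketched as follows. The specific rates and function name are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical billing sketch for one portfolio display: the advertiser
# whose sticker the user selected pays a cost-per-use (CPU) premium;
# every other advertiser whose sticker was shown pays its share of a
# cost-per-thousand (CPM) impression rate. Rates are assumed values.
CPU_RATE = 0.10   # assumed dollars per selection
CPM_RATE = 2.00   # assumed dollars per thousand impressions

def bill_portfolio(displayed_advertisers, selected_advertiser):
    """Return the charge incurred by each advertiser for this display."""
    charges = {}
    for advertiser in displayed_advertisers:
        if advertiser == selected_advertiser:
            charges[advertiser] = CPU_RATE            # premium: user selected it
        else:
            charges[advertiser] = CPM_RATE / 1000     # one impression's share
    return charges

charges = bill_portfolio(["Starbucks", "Dunkin"], selected_advertiser="Starbucks")
```

Here the selecting user triggers the premium charge for one advertiser while the other is billed only for the impression.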
In some embodiments, as shown in FIGS. 6-7 and 6-8, the one or more selected branded visual content 114 may be delivered in a pushed digital sticker implementation 136. In embodiments with a pushed digital sticker implementation 136, the word recognition and advertising insertion program 44-2 may enable the user device 14 to automatically insert, or push, the selected branded visual content 114 into the application data 106 on a keyword or key phrase contextual basis, by accessing and using targeted profile information about a user to display relevant selected branded visual content 114, or by other methods, for example. In these embodiments, the user device 14 may insert or push the selected branded visual content 114 into the application data 106 without user interaction. The selected branded visual content 114 may also be appended to the application data 106, appearing after or below the application data 106 as shown in FIG. 6-8. The word recognition and advertising insertion program 44-2 may allow for the user device 14 or messaging platform running on the user device 14 to have predetermined or selectable controls for when and/or how often selected branded visual content 114 may be pushed or automatically inserted into or appended to the application data 106. For example, each time there is a match for one or more indicators 110 within the application data 106, the selected branded visual content 114 may be inserted, as shown in FIG. 6-8. In other embodiments, after a certain number of messages have been sent using a messaging platform or at predetermined time intervals, the word recognition and advertising insertion program 44-2 may enable a selected branded visual content 114 to be pushed into the application data 106. In other embodiments, the word recognition and advertising insertion program 44-2 may enable pushing an unrelated branded visual content 100 into the application data 106 of a communication between two users or to the messaging platform of the user based on user criteria.
For example, user criteria may include age, location, gender, device or operating system, purchasing habits, prior selected branded visual content 114, or other user-specific criteria. The frequency of pushing unrelated branded visual content 100 to the user or into the user's application data 106 may be controlled by the word recognition and advertising insertion program 44-2, by settings on the user device 14, or by the user, for example.
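The every-N-messages frequency control described for pushed stickers can be sketched as a small counter. This is an illustrative reading of the disclosure, not its implementation; the class name and default threshold are assumptions.

```python
# Illustrative frequency control for pushed digital stickers: allow an
# automatic insertion only once every N messages. The threshold stands
# in for the "predetermined or selectable controls" described above.
class PushFrequencyController:
    def __init__(self, every_n_messages=3):
        self.every_n_messages = every_n_messages  # assumed user-adjustable setting
        self.message_count = 0

    def on_message_sent(self):
        """Record one sent message; return True when a push is permitted."""
        self.message_count += 1
        if self.message_count >= self.every_n_messages:
            self.message_count = 0  # reset after each permitted push
            return True
        return False

controller = PushFrequencyController(every_n_messages=3)
push_decisions = [controller.on_message_sent() for _ in range(6)]
```

With a threshold of three, a sticker would be eligible for pushing on every third message sent. A time-interval control could be built the same way, substituting a timestamp comparison for the counter.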
In any event, once the selected branded visual content 114 is delivered to the user, the user device 14 or the messaging platform from which the user chose the selected branded visual content 114 may send the user's selection to the word recognition and advertising insertion program 44-2, such that the word recognition and advertising insertion program 44-2 may create a record of the selection, create a record of billing reflecting that selection, and bill the advertiser on a cost per use, cost per thousand, or other agreed billing interval.
During and after transmission of the selected branded visual content 114 to the user device 14 and the selection or rejection of the selected branded visual content 114 by the user, the word recognition and ideograph or in-app advertising system 10-1 may also track information indicating how many times the branded visual content 100 of the one or more advertisers 102 are presented, used, and/or cycled through. The processor 18, running the word recognition and advertising insertion program 44-2, may store various values in the non-transitory processor readable medium 20 or other electronic storage media indicative of the number of times the one or more branded visual content 100 is transmitted and/or displayed to the user. The processor 18 may also store the number of times the one or more branded visual content 100 is selected and/or inserted into the application data 106 by the user. The information indicative of the display and/or selection of the one or more branded visual content 100 may be provided to the one or more advertiser 102. The word recognition and ideograph or in-app advertising system 10-1 and/or the messaging platform may also provide the one or more advertiser 102 with information on the amount of a budget of the one or more advertiser 102 which has been spent in a predetermined time period and the total budgeted dollars remaining within the budget. Further, the word recognition and ideograph or in-app advertising system 10-1 and/or the messaging platform may provide the one or more advertiser 102 with information on the user profile of users who select and/or reject the selected branded visual content 114 of the one or more advertiser 102. For example, the one or more advertiser 102 may be provided with demographic information of users selecting the branded visual content 100 of the advertiser 102.
The information in the user profile may be provided by the user to the messaging platform, by the communications provider of the user, or by the user device 14, or may be other user information shared by the user or collected as part of routine business practices.
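The display and selection tracking described above reduces to per-content counters plus a derived selection rate. The sketch below is a hypothetical illustration of that bookkeeping; the class, method names, and report fields are assumptions, not disclosed elements of the system.

```python
from collections import defaultdict

# Hypothetical tracking sketch: count how often each branded visual
# content 100 is displayed versus selected, to support advertiser
# reporting and cost-per-use / cost-per-thousand billing records.
class UsageTracker:
    def __init__(self):
        self.displays = defaultdict(int)    # times content was shown
        self.selections = defaultdict(int)  # times content was inserted

    def record_display(self, content_id):
        self.displays[content_id] += 1

    def record_selection(self, content_id):
        self.selections[content_id] += 1

    def report(self, content_id):
        """Summary an advertiser might receive for one piece of content."""
        shown = self.displays[content_id]
        used = self.selections[content_id]
        return {
            "displayed": shown,
            "selected": used,
            "selection_rate": used / shown if shown else 0.0,
        }

tracker = UsageTracker()
for _ in range(4):
    tracker.record_display("starbucks_cup")
tracker.record_selection("starbucks_cup")
summary = tracker.report("starbucks_cup")
```

In a fuller system these counters would persist to the non-transitory processor readable medium 20 and feed the budget and demographic reporting described above.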
Although a few embodiments of the disclosure have been described in detail above, those of ordinary skill in the art will readily appreciate that many modifications are possible without materially departing from the teachings of this disclosure. Accordingly, such modifications are intended to be included within the scope of this disclosure as defined in the claims.