TECHNICAL FIELD

Example embodiments of the present application generally relate to image recognition and, more specifically, to a method and system for identifying items in video frames.
BACKGROUND

Mobile devices such as smart phones have become increasingly prevalent. Most smart phones include an optical lens for taking pictures. A user interested in an item, for example, at a friend's place or while walking on the street, may use the photo feature on the smart phone to take a picture of the item. Unfortunately, the user of the smart phone has to hold the mobile device steady, and the object being pictured needs to remain static; otherwise, the picture will come out blurry. As such, the user may opt to record a video of a dynamic scene instead of taking pictures.
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
FIG. 1 is a network diagram depicting a network system, according to one embodiment, having a client-server architecture configured for exchanging data over a network;
FIG. 2 is a block diagram illustrating an example embodiment of a video processor application;
FIG. 3 is a block diagram illustrating an example embodiment of a video frame selector module;
FIG. 4 is a block diagram illustrating an example embodiment of an item identification module;
FIG. 5 is a block diagram illustrating an example embodiment of a location-based incentive module;
FIG. 6 is a block diagram illustrating an example embodiment of a location identification module;
FIG. 7 is a block diagram illustrating an example embodiment of an incentive module;
FIG. 8 is a table illustrating an example embodiment of a data structure;
FIG. 9A is a block diagram illustrating an example of a tagged video frame;
FIG. 9B is a block diagram illustrating another example of a tagged video frame;
FIG. 10 is a flow diagram of an example method for tagging a video frame with items;
FIG. 11 is a flow diagram of an example method for selecting a video frame;
FIG. 12 is a flow diagram of an example method for tagging a video frame;
FIG. 13A is a flow diagram of an example method for identifying an item in a video frame;
FIG. 13B is a flow diagram of another example method for identifying an item in a video frame;
FIG. 14A is a flow diagram of an example method for providing information on an item in a tagged video frame;
FIG. 14B is a flow diagram of an example method for providing location-based information on an item in a tagged video frame;
FIG. 15A is a flow diagram of another example method for identifying a location-based incentive;
FIG. 15B is a flow diagram of another example method for identifying a targeted incentive;
FIG. 15C is a flow diagram of an example method for expanding a search of local incentives;
FIG. 16 shows a diagrammatic representation of a machine in the example form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein.
DETAILED DESCRIPTION

Although the present invention has been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
In various embodiments, a method and a system generate offers to a user of a mobile device based on items identified in a video frame from the mobile device. A video frame selector module determines a video frame to process from the mobile device. An item identification module identifies an item in the determined video frame using an image recognition algorithm and tags the determined video frame with an identification of the item. Tags identifying the item can also be placed in the video frame adjacent to the identified item. In one embodiment, the offers include only offers to buy the product through one or more merchants. In another embodiment, the offers include an incentive to a user of a mobile device based on a geographic location of the mobile device. Incentives include, but are not limited to, promotions, discounts, sales, rebates, coupons, and so forth. In another embodiment, the incentive may also include item recommendations.
FIG. 1 is a network diagram depicting a network system 100, according to one embodiment, having a client-server architecture configured for exchanging data over a network. For example, the network system 100 may be a publication/publisher system 102 where clients may communicate and exchange data within the network system 100. The data may pertain to various functions (e.g., online item purchases) and aspects (e.g., managing content and user reputation values) associated with the network system 100 and its users. Although illustrated herein as a client-server architecture as an example, other embodiments may include other network architectures, such as a peer-to-peer or distributed network environment.
A data exchange platform, in an example form of a network-based publisher 102, may provide server-side functionality, via a network 104 (e.g., the Internet), to one or more clients. The one or more clients may include users that utilize the network system 100 and, more specifically, the network-based publisher 102, to exchange data over the network 104. These transactions may include transmitting, receiving (communicating), and processing data to, from, and regarding content and users of the network system 100. The data may include, but are not limited to, content and user data such as feedback data; user reputation values; user profiles; user attributes; product and service reviews; product, service, manufacture, and vendor recommendations and identifiers; product and service listings associated with buyers and sellers; auction bids; and transaction data, among other things.
In various embodiments, the data exchanges within the network system 100 may be dependent upon user-selected functions available through one or more client or user interfaces (UIs). The UIs may be associated with a client machine, such as a client machine 106 using a web client 110. The web client 110 may be in communication with the network-based publisher 102 via a web server 120. The UIs may also be associated with a client machine 108 using a programmatic client 112, such as a client application, or a third party server 114 hosting a third party application 116. It can be appreciated that in various embodiments the client machine 106, 108, or third party server 114 may be associated with a buyer, a seller, a third party electronic commerce platform, a payment service provider, or a shipping service provider, each in communication with the network-based publisher 102 and optionally each other. The buyers and sellers may be any one of individuals, merchants, or service providers, among other things.
A mobile device 132 may also be in communication with the network-based publisher 102 via the web server 120. The mobile device 132 may include a portable electronic device providing at least some of the functionalities of the client machines 106 and 108. The mobile device 132 may include a third party application 116 (or a web client) configured to communicate with the application server 122. In one embodiment, the mobile device 132 includes a GPS module 134 and an optical lens 136. The GPS module 134 is configured to determine a location of the mobile device 132. The optical lens 136 enables the mobile device 132 to take pictures and videos.
Turning specifically to the network-based publisher 102, an application program interface (API) server 118 and a web server 120 are coupled to, and provide programmatic and web interfaces respectively to, one or more application servers 122. The application servers 122 host one or more publication application(s) 124. The application servers 122 are, in turn, shown to be coupled to one or more database server(s) 126 that facilitate access to one or more database(s) 128.
In one embodiment, the web server 120 and the API server 118 communicate and receive data pertaining to listings, transactions, and feedback, among other things, via various user input tools. For example, the web server 120 may send and receive data to and from a toolbar or webpage on a browser application (e.g., web client 110) operating on a client machine (e.g., client machine 106). The API server 118 may send and receive data to and from an application (e.g., client application 112 or third party application 116) running on another client machine (e.g., client machine 108 or third party server 114).
The publication application(s) 124 may provide a number of publisher functions and services (e.g., listing, payment, etc.) to users that access the network-based publisher 102. For example, the publication application(s) 124 may provide a number of services and functions to users for listing goods and/or services for sale, facilitating transactions, and reviewing and providing feedback about transactions and associated users. Additionally, the publication application(s) 124 may track and store data and metadata relating to listings, transactions, and user interaction with the network-based publisher 102.
FIG. 1 also illustrates a third party application 116 that may execute on a third party server 114 and may have programmatic access to the network-based publisher 102 via the programmatic interface provided by the API server 118. For example, the third party application 116 may use information retrieved from the network-based publisher 102 to support one or more features or functions on a website hosted by the third party. The third party website may, for example, provide one or more listing, feedback, publisher, or payment functions that are supported by the relevant applications of the network-based publisher 102.
The network-based publisher 102 may provide a multitude of feedback, reputation, aggregation, and listing and price-setting mechanisms whereby a user may be a seller or buyer who lists or buys goods and/or services (e.g., for sale) published on the network-based publisher 102.
The publication application(s) 124 are shown to include, among other things, one or more application(s) that support the network-based publisher 102 and, more specifically, the listing of goods and/or services for sale, the receipt of feedback in response to a transaction involving a listing, and the generation of reputation values for users based on transaction data between users.
The application server 122 may include a video processor application 130 that communicates with the publication application(s) 124. The video processor application 130 processes video frames sent from the mobile device 132 to identify items contained in a video frame, to provide item listings, and to generate offers or incentives for the mobile device, as further described below. As items are identified in a processed video frame, the video frame is tagged to allow for a “shopping pause,” where a user of the mobile device 132 can pause video content and learn more about or purchase the identified item shown in the video frame.
FIG. 2 is a block diagram illustrating an example embodiment of the video processor application 130. The video processor application 130 can include a video frame selector module 202, an item identification module 204, a market price module 206, and a location-based incentive application 208. Each module (or component or sub-module thereof) may be implemented in hardware, software, firmware, or any combination thereof. In an example embodiment, each of the foregoing modules may be implemented by at least one processor.
The video frame selector module 202 determines which video frame (from a video clip) to process from the mobile device 132. An embodiment and operation of the video frame selector module 202 is explained in more detail with respect to FIG. 3.
The item identification module 204 identifies an item in the selected video frame and tags the determined video frame with an identification of the item. An embodiment and operation of the item identification module 204 is explained in more detail with respect to FIG. 4.
The market price module 206 generates offers for the identified item from at least one merchant to the mobile device. For example, the market price module 206 determines a current market price of the identified item using online databases, online price comparison websites, and/or online retailer prices. In one embodiment, the market price module 206 can provide the latest bidding prices from an online auction website for the identified item. In another embodiment, the market price module 206 can provide the price of the identified item sold at retail stores (nearby or online).
The location-based incentive application 208 offers incentives from at least one local merchant based on the identified item and a geographic location of the mobile device 132. An embodiment and operation of the location-based incentive application 208 is explained in more detail with respect to FIG. 5.
FIG. 3 is a block diagram illustrating an example embodiment of the video frame selector module 202. The video frame selector module 202 comprises a video frame analyzer module 302 and a video frame tag module 304.
To process video frames efficiently, the video processor application 130 only processes video frames that exceed a predetermined amount of motion, indicating a change or movement in the video frame subject matter. As such, the video frame analyzer module 302 determines a difference in a scene between a first video frame and a second video frame in a video clip from the mobile device 132. For example, the video may include a subject walking down a street. As such, the subject will be moving relative to the street in the video clip. The video frame analyzer module 302 thus analyzes how much the subject matter has moved between the first video frame and the second video frame.
The video frame tag module 304 tags the first or second video frame for item identification when the difference exceeds a predetermined amount of motion. As such, not every video frame is processed for item identification, which preserves resources. Video frames that are to be processed for item identification are tagged for identification purposes. For example, a video frame that has been selected to be processed is tagged with a “shopping pause” tag. The tagged video frame is also referred to as the determined or selected video frame. In another embodiment, the video frame tag module 304 first determines whether a video frame contains an item to be identified before tagging the video frame for a shopping pause.
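The frame-selection step described above can be sketched as follows. This is a minimal illustration, not the described embodiment: the mean-absolute-difference metric and the threshold value are assumptions chosen for clarity.

```python
import numpy as np

# Assumed motion metric: mean absolute per-pixel intensity change.
# The threshold value is illustrative, not part of the described embodiment.
MOTION_THRESHOLD = 10.0

def frame_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two grayscale frames."""
    return float(np.mean(np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))))

def select_frames_for_tagging(frames: list) -> list:
    """Return indices of frames whose motion relative to the previous frame
    exceeds the threshold (candidates for a "shopping pause" tag)."""
    tagged = []
    for i in range(1, len(frames)):
        if frame_difference(frames[i - 1], frames[i]) > MOTION_THRESHOLD:
            tagged.append(i)
    return tagged
```

A static pair of frames would yield a difference of zero and be skipped, while a frame with substantial subject movement would be tagged for item identification.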
FIG. 4 is a block diagram illustrating an example embodiment of the item identification module 204. The item identification module 204 includes a scene deconstructor module 402, an image recognition module 404, an area selector module 406, and a user tag module 408.
The scene deconstructor module 402 deconstructs a scene in the determined video frame into several areas and analyzes each area for item identification. For example, the video frame may contain an image of a person with a hat, a handbag, and shoes. The scene deconstructor module 402 separately analyzes the hat in one area, the handbag in another area, and the shoes in a third area.
The image recognition module 404 identifies the item based on a comparison of an image of the item from the determined video frame with a library of item images using an image recognition algorithm. The image recognition module 404 further labels the image of the identified item in the determined video frame. In another embodiment, the image recognition module 404 identifies the item in the corresponding area of the determined video frame. In yet another embodiment, the image recognition module 404 identifies the item in the selected area of the determined video frame. In one embodiment, the image recognition module 404 determines a name and a price of the identified item, and labels the name and price adjacent to the image of the identified item in the determined video frame.
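As a rough sketch of the comparison against a library of item images, the snippet below matches a query feature vector to its nearest library entry. The feature representation, the library contents, and nearest-neighbor matching are all illustrative assumptions; a production image recognition algorithm would be considerably more involved.

```python
import numpy as np

def identify_item(query_vec: np.ndarray, library: dict) -> str:
    """Return the library label whose feature vector is closest to the query.

    `library` maps item labels (e.g., "hat") to feature vectors; both the
    labels and the feature extraction step are hypothetical.
    """
    best_label, best_dist = None, float("inf")
    for label, vec in library.items():
        dist = float(np.linalg.norm(query_vec - vec))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

In this sketch, a query vector close to the stored "hat" vector would be labeled as a hat, after which the module could attach the name and price labels described above.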
The area selector module 406 receives a user selection of an area in the determined video frame in which to identify the item. For example, a user may select an area of the video frame on which to focus. Using the previous example, the user may tap on the image of the hat in the video frame to identify the item of interest. In another example, the user may tap and drag a rectangular area in the video frame for the image recognition module 404 to focus on and analyze items in the selected rectangular area.
The user tag module 408 receives a user input tag to help identify the item in the determined video frame. For example, the user may tap on the image of a hat in the video frame and then enter the word “hat” for the image recognition module 404 to focus its search on hats. The word “hat” may be tagged to the identified item.
FIG. 5 is a block diagram illustrating an example embodiment of the location-based incentive application 208. The location-based incentive application 208 has a location identification module 502 and an incentive module 506.
The location identification module 502 determines a geographic location of the mobile device 132. The incentive module 506 communicates an incentive from one or more local merchants based on the identified item and the geographic location of the mobile device 132. The incentive can include a coupon, a discount, or a recommendation.
In one embodiment, the location-based incentive application 208 receives a communication from the mobile device 132. For example, the communication may include a location of the mobile device 132. Based on the location of the mobile device 132 and the item identified by the item identification module 204, the incentive module 506 consults the database server 126 and database 128 to determine and communicate incentives from local merchants to the mobile device 132.
In another embodiment, the incentive module 506 identifies local merchants in the area of the mobile device that have the identified item in stock for sale.
FIG. 6 is a block diagram illustrating an example embodiment of the location identification module 502. The location of the mobile device 132 can be determined in many ways. For example, the mobile device 132 may be equipped with a Global Positioning System (GPS) receiver that allows the device to communicate its coordinates or location to a GPS/triangulation module 602 of the location identification module 502. In another example, the location of the mobile device 132 may be determined by triangulation using wireless communication towers and/or wireless nodes (e.g., Wi-Fi hotspots) within wireless signal reach of the mobile device 132. Based on the geographic coordinates, the GPS/triangulation module 602 of the location identification module 502 can determine the geographic location of the mobile device 132 after consulting a mapping database (not shown). Furthermore, the general location of the mobile device 132 can be determined when the user of the mobile device 132 logs onto a local internet connection, for example, at a hotel or coffee shop.
The location identification module 502 may also include a location input module 606 configured to determine a geographic location of the mobile device 132 by requesting the user to input an address, city, zip code, or other location information. In one embodiment, the user can select a location from a list of locations or a map on the mobile device 132. For example, a user on the mobile device 132 inputs the location of the mobile device 132 via an application or a web browser on the mobile device 132.
The location identification module 502 may also include a location-dependent search term module 604. The location of the mobile device 132 can be inferred when the user of the mobile device 132 requests a search on the mobile device using location-dependent search terms. For example, a user inputs a search on his/her mobile device for “Best Japanese Restaurant San Jose.” The location-dependent search term module 604 consults a database (not shown) that can determine the geographic location of the best Japanese restaurant in San Jose. The location-dependent search term module 604 then infers that the user of the mobile device 132 is at that geographic location. In an example embodiment, the location-dependent search term module 604 may infer the location of the user based on the search terms submitted by the user, irrespective of the search results or whether the user actually conducts the search. Using the foregoing example, the location-dependent search term module 604 may parse the search query entered by the user and infer that the user is located in or around San Jose.
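The parsing step above might be sketched as a simple lookup of known place names in the query. The place list stands in for the database the module would consult, and the substring-matching rule is an assumption for illustration only.

```python
from typing import Optional

# Hypothetical stand-in for the place-name database the module consults.
KNOWN_PLACES = {"san jose", "san francisco", "palo alto"}

def infer_location(search_query: str) -> Optional[str]:
    """Infer a location from a location-dependent search term.

    Returns the first known place name found in the query, or None when the
    query carries no recognizable location (illustrative matching rule).
    """
    query = search_query.lower()
    for place in KNOWN_PLACES:
        if place in query:
            return place
    return None
```

Under this sketch, the query “Best Japanese Restaurant San Jose” would yield San Jose regardless of whether the search itself is ever executed, matching the inference described above.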
The location identification module 502 may also include a tag module 608 configured to determine the geographic location of the mobile device 132 based on a tag associated with a unique geographic location. The tag may include, for example, a barcode (such as a linear barcode, QR code, or other two-dimensional (2D) barcode) or a Radio Frequency Identification (RFID) tag that is associated with a unique geographic location. For example, a user of the mobile device 132 may use the mobile device to scan a tag placed at a landmark or store. The tag is uniquely associated with the geographic location of the landmark or store, and this relationship can be stored in a database. The tag module 608 can then determine the geographic location of the mobile device 132 based on the tag after consulting the database.
FIG. 7 is a block diagram illustrating an example embodiment of the incentive module 506 that may be used to execute the processes described herein. The incentive module 506 includes a local merchant module 702, an item category module 704, an incentive matching module 706, a user preference module 708, an incentive receiver module 710, an incentive code generator module 712, and a communication module 714.
The local merchant module 702 identifies at least one local merchant having at least one incentive based on the geographic location of the mobile device 132 as determined by the location identification module 502. A local merchant is a merchant or retailer that is located within a predefined distance from the geographic location of the mobile device 132. In one embodiment, the local merchant module 702 identifies at least one local merchant with at least one incentive based on an updated search distance preference as specified in the user preference module 708.
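The "within a predefined distance" test can be sketched with a great-circle distance check. The haversine formula, the `(name, lat, lon)` tuple layout, and the one-mile default radius are illustrative assumptions, not part of the described embodiment.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, used by the haversine formula

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def local_merchants(device, merchants, radius_km=1.6):
    """Filter hypothetical (name, lat, lon) merchant tuples to those within
    radius_km (about one mile by default) of the device's coordinates."""
    lat, lon = device
    return [m for m in merchants if haversine_km(lat, lon, m[1], m[2]) <= radius_km]
```

A merchant list could then be re-filtered with a larger or smaller `radius_km` when the user updates the search distance preference mentioned above.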
It should be noted that the incentive of the local merchant may or may not correspond to the item identified by the user. For example, a local merchant may feature a special sale on shoes while the identified item corresponds to a digital camera. Once all local merchants having incentives are identified based on the geographic location of the mobile device (using a database of incentives), the incentive match module 706 filters the local merchants based on the identified item. In the previous example, the local merchant featuring a sale on shoes may be filtered out of the search results.
The item category module 704 determines a category of the item specified by the user and identified by the item identification module 204. For example, the user may specify a particular digital camera. The item category module 704 determines that the item specified by the user falls into the category of electronics, subcategory of cameras.
The incentive match module 706 determines whether the identified item specified by the user corresponds to an item identified in at least one incentive of at least one local merchant as determined by the local merchant module 702. For example, a user specifies an item with his/her mobile device 132. The item is identified as a specific digital camera. The item identification module 204 generates the brand, model number, color, and other attributes of the specified digital camera. The local merchant module 702 identifies merchants with incentives local to the geographic location of the mobile device 132. The incentive match module 706 matches local merchants with incentives (e.g., a sale or discount) on the specific digital camera.
In another embodiment, the incentive match module 706 determines whether the category of the item identified by the user corresponds to a category of items, as determined by the item category module 704, identified in at least one incentive of at least one local merchant. For example, a user specifies an item with his/her mobile device. The item is identified as a specific digital camera. The item identification module 204 generates the brand, model number, color, and other attributes of the specified digital camera. The item category module 704 determines the category of the identified item: electronics. The local merchant module 702 identifies merchants with incentives local to the geographic location of the mobile device. The incentive match module 706 matches local merchants with incentives (e.g., a sale or discount) on electronics or categories related to the digital camera.
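The two matching modes above (exact-item match versus category-level match) can be sketched as a single filter. The dictionary keys are hypothetical names loosely mirroring the attributes described for the data structure of FIG. 8.

```python
def match_incentives(item, incentives, by_category=False):
    """Filter incentive records against an identified item.

    In the default mode, an incentive matches only when its item name equals
    the identified item's name; with by_category=True, any incentive in the
    same category matches. The "item_name"/"category" keys are illustrative.
    """
    if by_category:
        return [i for i in incentives if i["category"] == item["category"]]
    return [i for i in incentives if i["item_name"] == item["item_name"]]
```

In the digital-camera example above, the exact mode would keep only incentives on that specific camera, while the category mode would also surface incentives on other electronics.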
The user preference module 708 provides user-defined preferences used in the process of determining local merchants or the brands or categories of items. In one embodiment, the user preference module 708 allows a user to update a search distance preference for local merchants. For example, the user may wish to decrease the radius of the distance preference in a downtown area of a city. Conversely, the user may wish to increase the radius of the distance preference in a suburban or rural area. In another embodiment, the user preference module 708 may also allow the user to specify favorite brands of items or favorite merchants or retailers.
The incentive code generator module 712 generates a code associated with at least one incentive selected by the user at the mobile device. The code is valid for a predetermined period of time at the corresponding local merchant. For example, a user selects a coupon from a local merchant on his/her mobile device. The incentive code generator module 712 generates a code associated with the coupon, and the code is communicated to the mobile device of the user. The user takes the code to the corresponding local merchant to redeem the discount. The code can be redeemed at the local merchant by showing or telling the code to a cashier at the checkout register, who may then enter the code at the checkout register to determine the validity of the code and appropriately apply the discount or promotion. The code can also be redeemed by displaying a machine-readable code, such as a bar code, on a screen of the mobile device; the user displays the bar code to the cashier, who can scan it to determine the validity of the code and appropriately apply the discount or promotion.
In one embodiment, the code may be valid for a predetermined period of time (e.g., one day, one week). In another embodiment, the generated code may be uniquely associated with the user of the mobile device and may expire immediately upon usage.
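A minimal sketch of this code-generation step follows. The code format (incentive identifier plus random suffix), the helper names, and the one-week default validity are illustrative assumptions, not the described embodiment.

```python
import secrets
from datetime import datetime, timedelta, timezone

def generate_incentive_code(incentive_id: str, valid_days: int = 7):
    """Return a (code, expiry) pair for a selected incentive.

    The random suffix makes each issued code unique to the user; the expiry
    implements the predetermined validity period (assumed: valid_days days).
    """
    code = f"{incentive_id}-{secrets.token_hex(4).upper()}"
    expiry = datetime.now(timezone.utc) + timedelta(days=valid_days)
    return code, expiry

def is_code_valid(expiry: datetime) -> bool:
    """A code is redeemable until its expiry timestamp passes."""
    return datetime.now(timezone.utc) < expiry
```

For the immediate-expiry variant described above, a merchant's checkout system could additionally mark the code as consumed in a database on first redemption.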
The communication module 714 communicates one or more incentives for the identified item from at least one local merchant to the mobile device. For example, a list of local merchants within a preset distance radius (e.g., one mile) of the mobile device is displayed. The list of local merchants may include a sale or discount on the item identified by the user of the mobile device. The list may also include recommended merchants (having an incentive on the identified item) that are located beyond the preset distance radius.
In another embodiment, the communication module 714 communicates one or more incentives for the identified category of items from at least one local merchant to the mobile device. For example, a list of local merchants within a preset distance radius (e.g., a block) of the mobile device is displayed. The list of local merchants may include merchants having a sale or discount on items similar or related to the identified item specified by the user of the mobile device. The list may also include recommended merchants (having an incentive on items similar to the identified item) that are located beyond the preset distance radius.
The incentive receiver module 710 collects attributes of incentives from merchants and stores the attributes of the incentives in an incentive database. An example of a data structure of the incentive database is further described in FIG. 8.
FIG. 8 is a block diagram illustrating attributes of an example of a data structure. In one embodiment, the data structure includes attributes of the incentives for an item. For example, the attributes include a name attribute of the merchant 802, a name attribute of the item 804, a brand attribute of the item 806, a model attribute of the item 808, a category tag of the item 810, a sub-category tag of the item 812, a financial promotion attribute of the item 814, and a financial promotion term attribute of the item 816.
The merchant name attribute 802 includes the name of the local merchant (e.g., Joe's Electronic Shop). The item name attribute 804 includes the name of an item (e.g., digital camera XYZ D001). The brand attribute 806 includes the brand name of the item (e.g., brand XYZ). The model attribute 808 includes the model number of the item (e.g., D001). The category tag 810 includes category metadata associated with the item (e.g., personal electronics). The sub-category tag 812 includes sub-category metadata associated with the item (e.g., digital camera). The financial promotion attribute 814 includes the sale or discount associated with the item (e.g., 40% off all digital cameras, or 20% off all brand XYZ digital cameras). The financial promotion term 816 includes the terms of the sale or discount associated with the item (e.g., discount expires on xx/xx/xxxx, discount expires one week from today, or discount valid today only).
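One possible way to represent such an incentive record in code is a typed structure whose fields mirror attributes 802 through 816; the field names and types here are illustrative choices, not part of the described data structure.

```python
from dataclasses import dataclass

@dataclass
class Incentive:
    """Illustrative record mirroring the attributes of FIG. 8 (802-816)."""
    merchant_name: str    # 802, e.g. "Joe's Electronic Shop"
    item_name: str        # 804, e.g. "digital camera XYZ D001"
    brand: str            # 806, e.g. "XYZ"
    model: str            # 808, e.g. "D001"
    category: str         # 810, e.g. "personal electronics"
    sub_category: str     # 812, e.g. "digital camera"
    promotion: str        # 814, e.g. "20% off all brand XYZ digital cameras"
    promotion_term: str   # 816, e.g. "discount valid today only"
```

Records of this shape could be collected by the incentive receiver module and queried by merchant location, item name, or category as described above.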
FIG. 9A is a block diagram illustrating an example of a tagged video frame 900. The video frame 900 has been selected for processing by the video frame selector module 202. The item identification module 204 has identified two items (e.g., a hat 902 and a handbag 904) in the video frame 900. The video frame tag module 304 generates a call out bubble on the video frame 900 for each identified item. In one embodiment, a call out bubble may be placed on the video frame adjacent to the respective identified item. For example, call out bubble 906 labels the hat 902 with a market price for the identified item. Similarly, call out bubble 908 labels the handbag 904 with a market price for the identified item.
FIG. 9B is a block diagram illustrating another example of a tagged video frame 901. The user has selected a particular area within the video frame for the item identification module 204 to process. For example, the user may only be interested in the handbag. As such, the user has delineated a region of interest 910 on the video frame 901 to identify the handbag 904.
FIG. 10 is a flow diagram of an example method for tagging a video frame with items. At 1002, it is determined whether a video frame from a mobile device is to be processed. At 1004, items in the determined or selected video frame are identified. At 1006, the video frame may be tagged with an identification of the items in the video frame.
FIG. 11 is a flow diagram of an example method for selecting a video frame. At 1102, a difference between a first video frame and a second video frame is determined. At 1104, the difference between the first video frame and the second video frame is compared to a predetermined amount of difference. If the difference exceeds the predetermined amount, the first or second video frame is processed and tagged for item identification at 1106.
FIG. 12 is a flow diagram of an example method for tagging a video frame. At 1202, a scene in the determined video frame is deconstructed into a plurality of areas. At 1204, an item from each area is identified based on a comparison of an image of the item from the determined video frame with a library of item images. At 1206, the image of the identified item is labeled in the determined video frame.
FIG. 13A is a flow diagram of an example method for identifying an item in a video frame. At 1302, a user selects an area in the determined video frame to identify the item. At 1304, the item in the selected area of the determined video frame is identified. At 1306, the image of the identified item in the selected area of the determined video frame is labeled. In one embodiment, a name (e.g., brand, model) of the identified item and a price of the identified item are determined. The name and price of the identified item are placed adjacent to the image of the identified item in the determined video frame.
FIG. 13B is a flow diagram of another example method for identifying an item in a video frame. At 1308, a user selects an area in the determined video frame to identify the item. At 1310, a user input tag is received to help identify the item in the determined video frame. At 1312, the item in the selected area of the determined video frame is identified based on the user input tag. At 1314, the image of the identified item in the selected area of the determined video frame is labeled.
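The region-based flows of FIGS. 13A-13B may be sketched as follows. The rectangular region coordinates, the library structure, and the substring keyword filter for the user input tag are assumptions for illustration; a real embodiment would score candidate matches with an image-recognition model.

```python
# Sketch of FIGS. 13A-13B: identify an item inside a user-selected
# region (1302/1308), optionally narrowed by a user-supplied text tag
# (1310/1312). Library structure and keyword filter are assumptions.

def crop_region(frame, x, y, width, height):
    """Extract the user-selected area from a 2-D frame."""
    return [row[x:x + width] for row in frame[y:y + height]]

def identify_in_region(region, item_library, user_tag=None):
    """Match a region against the library, restricted by user_tag if given."""
    candidates = item_library
    if user_tag:
        # 1310: the user tag narrows the candidate set before matching.
        candidates = {name: img for name, img in item_library.items()
                      if user_tag.lower() in name.lower()}
    for name, img in candidates.items():
        if img == region:  # placeholder for a real image-match score
            return name
    return None
```

Narrowing by the user tag first reduces the search space, which is the practical benefit the 1310/1312 operations describe.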
FIG. 14A is a flow diagram of an example method for providing information on an item in a tagged video frame. At 1402, a video frame selection is received via the shopping pause feature as previously described. At 1404, the user selects an identified item in the video frame. At 1406, the system provides the vendors' and merchants' prices. At 1408, the system allows the user to purchase the identified item selected in the video frame. If the user decides to purchase the identified item, the system receives the purchase selection from the user (including merchant selection).
FIG. 14B is a flow diagram of an example method for providing location-based information on an item in a tagged video frame. At 1402, a video frame selection is received via the shopping pause feature as previously described. At 1404, the user selects an identified item in the video frame. At 1408, the system determines a geographic location of the mobile device 132 and offers an incentive from at least one local merchant based on the identified item and the geographic location of the mobile device 132. The incentive can be a coupon, a discount, or a recommendation.
FIG. 15A is a flow chart of an example method for identifying a targeted incentive. At 1502, the location identification module 502 of the location-based incentive application 208 determines the geographic location of the mobile device 132 of a user. At 1504, the item identification module 204 of the location-based incentive application 208 identifies an item specified by the user at the geographic location of the mobile device 132. At 1506, the local merchant module 702 of the incentive module 506 determines local merchants with at least one incentive. At 1508, the incentive match module 706 of the incentive module 506 of the location-based incentive application 208 determines whether the identified item as specified by the user corresponds to an item identified in at least one incentive of the local merchants as determined at operation 1506. At 1510, the communication module 714 of the incentive module 506 of the location-based incentive application 208 communicates a list of local merchants with incentives for the identified item.
FIG. 15B is a flow chart of another example method for identifying a targeted incentive. At 1512, if there are no local merchants having incentives on the identified item, the item category module 704 of the incentive module 506 of the location-based incentive application 208 determines a category of the identified item. At 1514, the incentive match module 706 of the incentive module 506 of the location-based incentive application 208 determines whether a category of the identified item as specified by the user corresponds to a category of items identified in at least one incentive of the local merchants as determined at operation 1506. At 1516, the communication module 714 of the incentive module 506 of the location-based incentive application 208 communicates a list of local merchants with incentives on similar or related items from the same category of the identified item.
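The incentive-matching flow of FIGS. 15A-15B may be sketched as follows. The merchant and incentive record fields, and the category-lookup callable, are assumptions made for the sketch and do not correspond to any particular data model of the incentive module 506.

```python
# Sketch of the FIGS. 15A-15B incentive-matching flow: look for local
# merchants with incentives on the exact identified item (1508), falling
# back to incentives on the item's category (1512-1516). Record fields
# ("name", "incentives", "item") are illustrative assumptions.

def match_incentives(identified_item, category_of, local_merchants):
    """Return (merchant, incentive) pairs for the item, else for its category."""
    exact = [(m["name"], inc) for m in local_merchants
             for inc in m["incentives"] if inc["item"] == identified_item]
    if exact:
        return exact  # 1508/1510: direct matches found
    # 1512-1516: no exact match; fall back to the item's category.
    category = category_of(identified_item)
    return [(m["name"], inc) for m in local_merchants
            for inc in m["incentives"]
            if category_of(inc["item"]) == category]
```

The category fallback means the user still receives offers on similar or related items when no merchant carries an incentive on the exact item.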
FIG. 15C is a flow chart of an example method for expanding a search of local incentives. At 1518, the communication module 714 of the incentive module 506 of the location-based incentive application 208 communicates that the incentive match module 706 of the incentive module 506 of the location-based incentive application 208 cannot find any incentives from local merchants related to the identified item. At 1520, the incentive module 506 may offer the user the option to expand or increase the distance radius preference for local merchants in the user preference module 708. At 1522, the user preference module 708 may be updated to reflect a new distance radius preference when searching for local merchants with incentives.
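The radius-expansion loop of FIG. 15C may be sketched as follows. The radius step size, the maximum radius, and the shape of the search callable are assumptions; in the described embodiment the expansion would occur only after the user accepts the offer at operation 1520.

```python
# Sketch of FIG. 15C: when no local incentives are found (1518), widen
# the distance-radius preference and retry (1520-1522). The search
# callable, step size, and maximum radius are illustrative assumptions.

def expand_search(find_incentives, prefs, step_miles=5, max_radius=50):
    """Widen prefs['radius'] until incentives are found or the cap is hit."""
    results = find_incentives(prefs["radius"])
    while not results and prefs["radius"] < max_radius:
        prefs["radius"] += step_miles  # 1522: update the user preference
        results = find_incentives(prefs["radius"])
    return results, prefs["radius"]
```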
FIG. 16 shows a diagrammatic representation of a machine in the example form of a computer system 1600 within which a set of instructions may be executed, causing the machine to perform any one or more of the methodologies discussed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 1600 includes a processor 1602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 1604 and a static memory 1606, which communicate with each other via a bus 1608. The computer system 1600 may further include a video display unit 1610 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 1600 also includes an alphanumeric input device 1612 (e.g., a keyboard), a user interface (UI) navigation device 1614 (e.g., a mouse), a disk drive unit 1616, a signal generation device 1618 (e.g., a speaker) and a network interface device 1620.
The disk drive unit 1616 includes a machine-readable medium 1622 on which is stored one or more sets of instructions and data structures (e.g., software 1624) embodying or utilized by any one or more of the methodologies or functions described herein. The software 1624 may also reside, completely or at least partially, within the main memory 1604 and/or within the processor 1602 during execution thereof by the computer system 1600, the main memory 1604 and the processor 1602 also constituting machine-readable media.
The software 1624 may further be transmitted or received over a network 1626 via the network interface device 1620 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
While the machine-readable medium 1622 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.