RELATED APPLICATION

This application is a continuation of U.S. application Ser. No. 14/704,472, filed May 5, 2015, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD

This relates generally to viewing embedded content in an item of content, including but not limited to using gestures to view embedded content.
BACKGROUND

The Internet has become an increasingly dominant platform for the publication of electronic content, for both the media and the general population. Electronic content takes many forms, some of which a consumer can interact with, such as embedded pictures or videos that a consumer may view and manipulate. Such pictures or videos are embedded, for example, in digital items of content.
As the use of mobile devices for consuming electronic content becomes more prevalent, consumers often struggle to view and interact with embedded electronic content in an efficient and effective manner.
SUMMARY

Accordingly, there is a need for methods, systems, and interfaces for viewing embedded content in a simple and efficient manner. By using gestures to view various portions of embedded content at various resolutions, users can consume electronic content easily and efficiently. Such methods and interfaces optionally complement or replace conventional methods for viewing embedded content.
In accordance with some embodiments, a method is performed at an electronic device (e.g., a client device) with one or more processors and memory storing instructions for execution by the one or more processors. The method includes simultaneously displaying, within an item of content, an embedded content item and a first portion of the item of content distinct from the embedded content item in a display area having a display height and a display width. The embedded content item is displayed at a first resolution at which the entire width of the embedded content item is contained within the display width of the display area. A first user input is detected, indicating selection of the embedded content item. In response to the first user input, display of the first portion of the item of content ceases, and a first portion of the embedded content item is displayed at a second resolution that is greater than the first resolution, wherein a height of the first portion of the embedded content item at the second resolution equals the display height.
In accordance with some embodiments, an electronic device (e.g., a client device) includes one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for performing the operations of the method described above. In accordance with some embodiments, a non-transitory computer-readable storage medium has stored therein instructions that, when executed by the electronic device, cause the electronic device to perform the operations of the method described above.
Thus, electronic devices are provided with more effective and efficient methods for viewing embedded content, thereby increasing the effectiveness and efficiency of such devices and user satisfaction with such devices.
BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings. Like reference numerals refer to corresponding parts throughout the figures and description.
FIG. 1 is a block diagram illustrating an exemplary network architecture of a social network in accordance with some embodiments.
FIG. 2 is a block diagram illustrating an exemplary social-network system in accordance with some embodiments.
FIG. 3 is a block diagram illustrating an exemplary client device in accordance with some embodiments.
FIGS. 4A-4G illustrate exemplary graphical user interfaces (GUIs) on a client device for viewing embedded content, in accordance with some embodiments.
FIGS. 5A-5C are flow diagrams illustrating a method of viewing embedded content, in accordance with some embodiments.
DESCRIPTION OF EMBODIMENTS

Reference will now be made to embodiments, examples of which are illustrated in the accompanying drawings. In the following description, numerous specific details are set forth in order to provide an understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are used only to distinguish one element from another. For example, a first portion of an item of content could be termed a second portion of the item of content, and, similarly, a second portion of the item of content could be termed a first portion of the item of content, without departing from the scope of the various described embodiments. The first portion of the item of content and the second portion of the item of content are both portions of the item of content, but they are not the same portion.
The terminology used in the description of the various embodiments described herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
As used herein, the term “exemplary” is used in the sense of “serving as an example, instance, or illustration” and not in the sense of “representing the best of its kind.”
FIG. 1 is a block diagram illustrating an exemplary network architecture 100 of a social network in accordance with some embodiments. The network architecture 100 includes a number of client devices (also called “client systems,” “client computers,” or “clients”) 104-1, 104-2, . . . 104-n communicably connected to an electronic social-network system 108 by one or more networks 106 (e.g., the Internet, cellular telephone networks, mobile data networks, other wide area networks, local area networks, metropolitan area networks, and so on). In some embodiments, the one or more networks 106 include a public communication network (e.g., the Internet and/or a cellular data network), a private communications network (e.g., a private LAN or leased lines), or a combination of such communication networks.
In some embodiments, the client devices 104-1, 104-2, . . . 104-n are computing devices such as smart watches, personal digital assistants, portable media players, smart phones, tablet computers, 2D gaming devices, 3D (e.g., virtual reality) gaming devices, laptop computers, desktop computers, televisions with one or more processors embedded therein or coupled thereto, in-vehicle information systems (e.g., an in-car computer system that provides navigation, entertainment, and/or other information), and/or other appropriate computing devices that can be used to communicate with the social-network system 108. In some embodiments, the social-network system 108 is a single computing device such as a computer server, while in other embodiments, the social-network system 108 is implemented by multiple computing devices working together to perform the actions of a server system (e.g., cloud computing).
Users 102-1, 102-2, . . . 102-n employ the client devices 104-1, 104-2, . . . 104-n to access the social-network system 108 and to participate in a corresponding social-networking service provided by the social-network system 108. For example, one or more of the client devices 104-1, 104-2, . . . 104-n execute web browser applications that can be used to access the social-networking service. As another example, one or more of the client devices 104-1, 104-2, . . . 104-n execute software applications that are specific to the social-networking service (e.g., social-networking “apps” running on smart phones or tablets, such as a Facebook social-networking application running on an iPhone, Android, or Windows smart phone or tablet).
Users interacting with the client devices 104-1, 104-2, . . . 104-n can participate in the social-networking service provided by the social-network system 108 by posting information (e.g., items of content), such as text comments (e.g., updates, announcements, replies), digital photos, videos, audio files, links, and/or other electronic content. Users of the social-networking service can also annotate information (e.g., items of content) posted by other users of the social-networking service (e.g., endorsing or “liking” a posting of another user, or commenting on a posting by another user). In some embodiments, information can be posted on a user's behalf by systems and/or services external to the social-network system 108. For example, the user may post a review of a movie to a movie-review website, and with proper permissions that website may cross-post the review to the social-network system 108 on the user's behalf. In another example, a software application executing on a mobile client device, with proper permissions, may use global positioning system (GPS) or other geo-location capabilities (e.g., Wi-Fi or hybrid positioning systems) to determine the user's location and update the social-network system 108 with the user's location (e.g., “At Home”, “At Work”, or “In San Francisco, Calif.”), and/or update the social-network system 108 with information derived from and/or based on the user's location. Users interacting with the client devices 104-1, 104-2, . . . 104-n can also use the social-networking service provided by the social-network system 108 to define groups of users. Users interacting with the client devices 104-1, 104-2, . . . 104-n can also use the social-networking service provided by the social-network system 108 to communicate and collaborate with each other.
In some embodiments, the network architecture 100 also includes third-party servers 110-1, 110-2, . . . 110-m. In some embodiments, a given third-party server 110 is used to host third-party websites that provide web pages to client devices 104, either directly or in conjunction with the social-network system 108. In some embodiments, the social-network system 108 uses inline frames (“iframes”) to nest independent websites within a user's social network session. In some embodiments, a given third-party server is used to host third-party applications that are used by client devices 104, either directly or in conjunction with the social-network system 108. In some embodiments, the social-network system 108 uses iframes to enable third-party developers to create applications that are hosted separately by a third-party server 110, but operate within a social-networking session of a user 102 and are accessed through the user's profile in the social-network system 108. Exemplary third-party applications include applications for books, business, communication, contests, education, entertainment, fashion, finance, food and drink, games, health and fitness, lifestyle, local information, movies, television, music and audio, news, photos, video, productivity, reference material, security, shopping, sports, travel, utilities, and the like. In some embodiments, a given third-party server 110 is used to host enterprise systems, which are used by client devices 104, either directly or in conjunction with the social-network system 108. In some embodiments, a given third-party server 110 is used to provide third-party content, such as items of content (e.g., news articles, reviews, message feeds, etc.). Items of content may include embedded content items (e.g., text, photos, videos, audio, and/or other electronic content with which a user may interact, such as interactive maps, games, etc.).
In some embodiments, a given third-party server110 is a single computing device, while in other embodiments, a given third-party server110 is implemented by multiple computing devices working together to perform the actions of a server system (e.g., cloud computing).
FIG. 2 is a block diagram illustrating an exemplary social-network system 108 in accordance with some embodiments. The social-network system 108 typically includes one or more processing units (processors or cores) 202, one or more network or other communications interfaces 204, memory 206, and one or more communication buses 208 for interconnecting these components. The communication buses 208 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. The social-network system 108 optionally includes a user interface (not shown). The user interface, if provided, may include a display device and optionally includes inputs such as a keyboard, mouse, trackpad, and/or input buttons. Alternatively or in addition, the display device includes a touch-sensitive surface, in which case the display is a touch-sensitive display.
Memory 206 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, and/or other non-volatile solid-state storage devices. Memory 206 may optionally include one or more storage devices remotely located from the processor(s) 202. Memory 206, or alternately the non-volatile memory device(s) within memory 206, includes a non-transitory computer-readable storage medium. In some embodiments, memory 206 or the computer-readable storage medium of memory 206 stores the following programs, modules, and data structures, or a subset or superset thereof:
- an operating system 210 that includes procedures for handling various basic system services and for performing hardware-dependent tasks;
- a network communication module 212 that is used for connecting the social-network system 108 to other computers via the one or more communication network interfaces 204 (wired or wireless) and one or more communication networks (e.g., the one or more networks 106);
- a social network database 214 for storing data associated with the social network, such as:
- entity information 216, such as user information 218;
- connection information 220; and
- content 222, such as user content 224 (e.g., items of content with embedded content items, such as text, photos, videos, audio, and/or other electronic content with which a user may interact, such as interactive maps, games, etc.) and/or news articles 226;
- a social network server module 228 for providing social-networking services and related features (e.g., in conjunction with browser module 338 or social network client module 340 on the client device 104, FIG. 3), which includes:
- a login module 230 for logging a user 102 at a client 104 into the social-network system 108; and
- a content feed manager 232 for providing content to be sent to clients 104 for display, which includes:
- a content generator module 234 for adding objects to the social network database 214, such as images, videos, audio files, comments, status messages, links, applications, and/or other entity information 216, connection information 220, or content 222; and
- a content selector module 236 for choosing the information/content to be sent to clients 104 for display; and
- a search module 238 for enabling users of the social-network system to search for content and other users in the social network.
The social network database 214 stores data associated with the social network in one or more types of databases, such as graph, dimensional, flat, hierarchical, network, object-oriented, relational, and/or XML databases.
In some embodiments, the social network database 214 includes a graph database, with entity information 216 represented as nodes in the graph database and connection information 220 represented as edges in the graph database. The graph database includes a plurality of nodes, as well as a plurality of edges that define connections between corresponding nodes. In some embodiments, the nodes and/or edges themselves are data objects that include the identifiers, attributes, and information for their corresponding entities, some of which are rendered at clients 104 on corresponding profile pages or other pages in the social-networking service. In some embodiments, the nodes also include pointers or references to other objects, data structures, or resources for use in rendering content in conjunction with the rendering of the pages corresponding to the respective nodes at clients 104.
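A minimal, hypothetical Kotlin sketch of such a node/edge representation is given below; the type and field names (GraphNode, GraphEdge, attributes, etc.) are illustrative assumptions and not part of the described embodiments.

```kotlin
// Hypothetical sketch of a graph database's node/edge data objects.
// Nodes carry entity information 216; edges carry connection information 220.
data class GraphNode(
    val id: Long,                        // unique identifier for the entity
    val type: String,                    // e.g., "user", "location", "song"
    val attributes: Map<String, String>  // attributes rendered on profile or other pages
)

data class GraphEdge(
    val fromNodeId: Long,
    val toNodeId: Long,
    val relationship: String,            // e.g., "friend", "like", "listened"
    val attributes: Map<String, String> = emptyMap()
)

class GraphDatabase {
    val nodes = mutableMapOf<Long, GraphNode>()
    val edges = mutableListOf<GraphEdge>()
}
```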
Entity information 216 includes user information 218, such as user profiles, login information, privacy and other preferences, biographical data, and the like. In some embodiments, for a given user, the user information 218 includes the user's name, profile picture, contact information, birth date, sex, marital status, family status, employment, education background, preferences, interests, and/or other demographic information.
In some embodiments, entity information 216 includes information about a physical location (e.g., a restaurant, theater, landmark, city, state, or country), real or intellectual property (e.g., a sculpture, painting, movie, game, song, idea/concept, photograph, or written work), a business, a group of people, and/or a group of businesses. In some embodiments, entity information 216 includes information about a resource, such as an audio file, a video file, a digital photo, a text file, a structured document (e.g., a web page), or an application. In some embodiments, the resource is located in the social-network system 108 (e.g., in content 222) or on an external server, such as third-party server 110.
In some embodiments, connection information 220 includes information about the relationships between entities in the social network database 214. In some embodiments, connection information 220 includes information about edges that connect pairs of nodes in a graph database. In some embodiments, an edge connecting a pair of nodes represents a relationship between the pair of nodes.
In some embodiments, an edge includes or represents one or more data objects or attributes that correspond to the relationship between a pair of nodes. For example, when a first user indicates that a second user is a “friend” of the first user, the social-network system 108 transmits a “friend request” to the second user. If the second user confirms the “friend request,” the social-network system 108 creates and stores an edge connecting the first user's user node and the second user's user node in a graph database as connection information 220 that indicates that the first user and the second user are friends. In some embodiments, connection information 220 represents a friendship, a family relationship, a business or employment relationship, a fan relationship, a follower relationship, a visitor relationship, a subscriber relationship, a superior/subordinate relationship, a reciprocal relationship, a non-reciprocal relationship, another suitable type of relationship, or two or more such relationships.
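Using the same assumed types as the sketch above, the confirmed friend request might be recorded as a pair of reciprocal edges; again, this is an illustrative sketch rather than the system's actual logic.

```kotlin
// Hypothetical sketch: store a confirmed "friend request" as connection
// information, i.e., reciprocal "friend" edges between the two user nodes.
fun confirmFriendRequest(db: GraphDatabase, firstUserId: Long, secondUserId: Long) {
    db.edges += GraphEdge(firstUserId, secondUserId, "friend")
    db.edges += GraphEdge(secondUserId, firstUserId, "friend") // friendship is reciprocal
}
```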
In some embodiments, an edge between a user node and another entity node represents connection information about a particular action or activity performed by a user of the user node towards the other entity node. For example, a user may “like” or have “attended,” “played,” “listened,” “cooked,” “worked at,” or “watched” the entity at the other node. The page in the social-networking service that corresponds to the entity at the other node may include, for example, a selectable “like,” “check in,” or “add to favorites” icon. After the user clicks one of these icons, the social-network system 108 may create a “like” edge, a “check in” edge, or a “favorites” edge in response to the corresponding user action. As another example, the user may listen to a particular song using a particular application (e.g., an online music application). In this case, the social-network system 108 may create a “listened” edge and a “used” edge between the user node that corresponds to the user and the entity nodes that correspond to the song and the application, respectively, to indicate that the user listened to the song and used the application. In addition, the social-network system 108 may create a “played” edge between the entity nodes that correspond to the song and the application to indicate that the particular song was played by the particular application.
In some embodiments, content 222 includes text (e.g., ASCII, SGML, HTML), images (e.g., jpeg, tif, and gif), graphics (e.g., vector-based or bitmap), audio, video (e.g., mpeg), other multimedia, and/or combinations thereof. In some embodiments, content 222 includes executable code (e.g., games executable within a browser window or frame), podcasts, links, and the like.
In some embodiments, the social network server module 228 includes web or Hypertext Transfer Protocol (HTTP) servers, File Transfer Protocol (FTP) servers, as well as web pages and applications implemented using Common Gateway Interface (CGI) script, PHP Hypertext Preprocessor (PHP), Active Server Pages (ASP), Hypertext Markup Language (HTML), Extensible Markup Language (XML), Java, JavaScript, Asynchronous JavaScript and XML (AJAX), XHP, Javelin, Wireless Universal Resource File (WURFL), and the like.
FIG. 3 is a block diagram illustrating an exemplary client device 104 in accordance with some embodiments. The client device 104 typically includes one or more processing units (processors or cores) 302, one or more network or other communications interfaces 304, memory 306, and one or more communication buses 308 for interconnecting these components. The communication buses 308 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. The client device 104 includes a user interface 310. The user interface 310 typically includes a display device 312. In some embodiments, the client device 104 includes inputs such as a keyboard, mouse, and/or other input buttons 316. Alternatively or in addition, in some embodiments, the display device 312 includes a touch-sensitive surface 314, in which case the display device 312 is a touch-sensitive display. In some embodiments, the touch-sensitive surface 314 is configured to detect various swipe gestures (e.g., in vertical and/or horizontal directions) and/or other gestures (e.g., single/double tap). In client devices that have a touch-sensitive display 312, a physical keyboard is optional (e.g., a soft keyboard may be displayed when keyboard entry is needed). The user interface 310 also includes an audio output device 318, such as speakers or an audio output connection connected to speakers, earphones, or headphones. Furthermore, some client devices 104 use a microphone and voice recognition to supplement or replace the keyboard. Optionally, the client device 104 includes an audio input device 320 (e.g., a microphone) to capture audio (e.g., speech from a user). Optionally, the client device 104 includes a location detection device 322, such as a GPS (Global Positioning System) or other geo-location receiver, for determining the location of the client device 104. The client device 104 also optionally includes an image/video capture device 324, such as a camera or webcam.
In some embodiments, the client device 104 includes one or more optional sensors 323 (e.g., gyroscope, accelerometer) for detecting a motion and/or change in orientation of the client device. In some embodiments, a detected motion and/or orientation of the client device 104 (e.g., the motion/change in orientation corresponding to a user input produced by a user of the client device) is used to manipulate an interface (or content items within the interface) displayed on the client device 104 (e.g., viewing different portions of a displayed embedded content item, as shown in FIGS. 4D and 4E).
Memory 306 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices; and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 306 may optionally include one or more storage devices remotely located from the processor(s) 302. Memory 306, or alternately the non-volatile memory device(s) within memory 306, includes a non-transitory computer-readable storage medium. In some embodiments, memory 306 or the computer-readable storage medium of memory 306 stores the following programs, modules, and data structures, or a subset or superset thereof:
- an operating system 326 that includes procedures for handling various basic system services and for performing hardware-dependent tasks;
- a network communication module 328 that is used for connecting the client device 104 to other computers via the one or more communication network interfaces 304 (wired or wireless) and one or more communication networks, such as the Internet, cellular telephone networks, mobile data networks, other wide area networks, local area networks, metropolitan area networks, and so on;
- an image/video capture module 330 (e.g., a camera module) for processing a respective image or video captured by the image/video capture device 324, where the respective image or video may be sent or streamed (e.g., by a client application module 336) to the social-network system 108;
- an audio input module 332 (e.g., a microphone module) for processing audio captured by the audio input device 320, where the respective audio may be sent or streamed (e.g., by a client application module 336) to the social-network system 108;
- a location detection module 334 (e.g., a GPS, Wi-Fi, or hybrid positioning module) for determining the location of the client device 104 (e.g., using the location detection device 322) and providing this location information for use in various applications (e.g., social network client module 340); and
- one or more client application modules 336, including the following modules (or sets of instructions), or a subset or superset thereof:
- a web browser module 338 (e.g., Internet Explorer by Microsoft, Firefox by Mozilla, Safari by Apple, or Chrome by Google) for accessing, viewing, and interacting with web sites (e.g., a social-networking web site provided by the social-network system 108 and/or web sites that are linked to in a social network module 340 and/or an optional client application module 342), such as a web site hosting a service for displaying and accessing items of content (e.g., news articles) with embedded content items (e.g., text, photos, videos, audio, and/or other electronic content with which a user may interact);
- a social network module 340 for providing an interface to a social-networking service (e.g., a social-networking service provided by the social-network system 108) and related features, such as an interface to a service for displaying and accessing items of content (e.g., news articles) with embedded content items (e.g., text, photos, videos, audio, and/or other electronic content with which a user may interact); and/or
- optional client application modules 342, such as applications for displaying and accessing items of content (e.g., news articles) with embedded content items (e.g., text, photos, videos, audio, and/or other electronic content with which a user may interact), word processing, calendaring, mapping, weather, stocks, time keeping, virtual digital assistant, presenting, number crunching (spreadsheets), drawing, instant messaging, e-mail, telephony, video conferencing, photo management, video management, a digital music player, a digital video player, 2D gaming, 3D (e.g., virtual reality) gaming, electronic book reader, and/or workout support.
Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more functions as described above and/or in the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise re-arranged in various embodiments. In some embodiments, memory 206 and/or 306 store a subset of the modules and data structures identified above. Furthermore, memory 206 and/or 306 optionally store additional modules and data structures not described above.
Attention is now directed towards embodiments of graphical user interfaces (“GUIs”) and associated processes that may be implemented on a client device (e.g., the client device 104 in FIG. 3).
FIGS. 4A-4G illustrate exemplary GUIs on a client device 104 for viewing items of content that include content items (e.g., pictures, graphics, etc.) embedded within them, in accordance with some embodiments. The GUIs in these figures are displayed in response to detected user inputs, starting from the displayed item of content 400 (FIG. 4A), and are used to illustrate the processes described below, including the method 500 (FIGS. 5A-5C). The GUIs may be provided by a web browser (e.g., web browser module 338, FIG. 3), an application for a social-networking service (e.g., social network module 340), and/or a third-party application (e.g., client application module 342). While FIGS. 4A-4G illustrate examples of GUIs, in other embodiments, a GUI displays user-interface elements in arrangements distinct from the embodiments of FIGS. 4A-4G.
FIGS. 4A and 4B illustrate a GUI for an item of content 400 and an embedded content item 402. Items of content include various types of formatted content (e.g., web content, such as HTML-formatted documents, or documents in other proprietary web formats), including but not limited to news articles, web pages, blogs, user content published via a social-networking service, and/or other types of published content. Items of content may include various types of embedded content items presentable to a user and with which a user may interact. Examples of embedded content items include text, digital media (e.g., photos, videos, audio), and/or other electronic content with which a user may interact (e.g., interactive maps, games, etc.). In FIGS. 4A and 4B, the item of content 400 is a news article (titled “Sea Turtle Egg Hatchings Hit Record High”) that includes embedded content item 402 (a picture).
Swipe gesture 404-1 in FIG. 4A corresponds to a vertical scroll for viewing and browsing the item of content 400, where the resulting view in FIG. 4B allows the embedded content item 402 to be shown in its entirety.
In FIG. 4B, detecting a gesture 406 (e.g., a tap) on the embedded content item 402 results in displaying the embedded content item at a larger resolution (FIG. 4C) than the resolution at which the embedded content item is displayed in FIG. 4B. Only a portion 402-1 of the embedded content item is shown in FIG. 4C, because the entire embedded content item 402 does not fit within the display area at the larger resolution. While displaying the embedded content item 402 at the larger resolution, detecting tilt gesture 408-1 (shown in FIG. 4D, a cross-sectional view of the client device 104-1) results in displaying a different portion 402-2 of the embedded content item, while detecting tilt gesture 408-2 (FIG. 4E) results in displaying yet another portion 402-3 of the embedded content item.
In FIG. 4F, detecting a swipe gesture 404-2 (while displaying the embedded content item 402 at the larger resolution) reverts back to displaying the embedded content item 402 at the initial resolution (FIG. 4B), as shown in FIG. 4G.
The GUIs shown in FIGS. 4A-4G are described in greater detail below in conjunction with the method 500 of FIGS. 5A-5C.
FIGS. 5A-5C are flow diagrams illustrating the method 500 of viewing embedded content, in accordance with some embodiments. The method 500 is performed on an electronic device (e.g., client device 104, FIGS. 1 and 3). FIGS. 5A-5C correspond to instructions stored in a computer memory (e.g., memory 306 of the client device 104, FIG. 3) or other computer-readable storage medium. To assist with describing the method 500, FIGS. 5A-5C will be described with reference to the exemplary GUIs illustrated in FIGS. 4A-4G.
In the method 500, the electronic device simultaneously displays (502), within an item of content, an embedded content item and a first portion of the item of content distinct from the embedded content item. The embedded content item and the first portion are displayed together in a display area having a display height and a display width. The embedded content item is displayed at a first resolution at which the entire width of the embedded content item is contained within the display width of the display area. As shown in the example of FIG. 4A, the embedded content item 402 is displayed at a resolution at which the entire width of the embedded content item 402 is contained within the display width. In some embodiments, the embedded content item is displayed at a resolution at which the entire height of the embedded content item is contained within the display height of the display area (e.g., the embedded content item 402 as shown in FIG. 4B).
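The fit-width relationship of operation 502 can be summarized with a short Kotlin sketch; the Size type and pixel units are assumptions made purely for illustration.

```kotlin
// Hypothetical sketch of the first resolution: scale the embedded content item
// so its entire width is contained within the display width.
data class Size(val width: Float, val height: Float)

fun firstResolutionScale(item: Size, display: Size): Float =
    display.width / item.width

fun main() {
    val item = Size(1200f, 800f)    // embedded picture, in item pixels (assumed)
    val display = Size(360f, 640f)  // display area (assumed)
    val s1 = firstResolutionScale(item, display)  // 0.3
    // At the first resolution the item occupies 360 x 240, leaving room above
    // and below for the first portion of the item of content (cf. FIG. 4B).
    println("scaled: ${item.width * s1} x ${item.height * s1}")
}
```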
As described above, items of content include various types of formatted content, which may include different types of embedded content items presentable to a user and with which a user may interact. In some embodiments, the item of content includes (504) text, and the embedded content item includes a picture or graphic. In FIG. 4A, for example, the item of content 400 is a news article, a portion of which is simultaneously displayed with the embedded content item 402, which is an associated picture. Other examples of items of content include but are not limited to web pages, blogs, user content published via a social-networking service, and/or other types of published content. Other examples of embedded content items include text, other types of digital media (e.g., videos), and/or other electronic content with which a user may interact (e.g., interactive maps, games, etc.).
In some embodiments, the electronic device includes (506) a display device (e.g., display 312, FIGS. 3 and 4A) having a screen area. The display area occupies (i.e., is coextensive with) the screen area of the display device. Referring to FIG. 4B, for example, a portion of the item of content 400 and the embedded content item 402 are simultaneously displayed in a display area, where the display area occupies the screen area of the display 312. In some embodiments, the display area occupies less than the screen area of the display device (e.g., the GUI displaying the item of content and embedded content item is a window or tile that occupies only a fraction of the screen area).
In some embodiments, the first portion of the item of content includes (508) a first sub-portion above the embedded content item as displayed at the first resolution, and a second sub-portion below the embedded content item as displayed at the first resolution (e.g., FIG. 4B, where sub-portions of the item of content 400 are shown above and below the embedded content item 402).
In some embodiments, the width of the embedded content item being displayed at the first resolution equals (510) the display width of the display area (e.g., equals the screen width, window width, or tile width). In some embodiments, the width of the embedded content item being displayed at the first resolution is less than the display width (e.g., embedded content item 402 as shown in FIG. 4B).
In some embodiments, simultaneously displaying (502) the embedded content item and the first portion of the item of content includes (512) displaying a first portion, a second portion, and a third portion of the embedded content item, wherein the first portion, the second portion, and the third portion of the embedded content item are distinct (e.g., and together compose the entire embedded content item). Displaying the embedded content item 402 in FIG. 4B, for example, may be viewed as displaying three distinct portions of the embedded content item 402: a first portion 402-1 (FIG. 4C), a second portion 402-2 (FIG. 4D), and a third portion 402-3 (FIG. 4E). The first, second, and third portions of the embedded content item may be partially distinct (i.e., some portions overlap with other portions, such as portions 402-1 through 402-3, FIG. 4E) or entirely distinct (i.e., no two portions overlap).
A first user input indicating selection of the embedded content item is detected (514). In some embodiments, the first user input is a touch gesture (e.g., tap) detected on the embedded content item (e.g., gesture 406, FIG. 4B).
Referring now to FIG. 5B, in response to detecting (516) the first user input, the electronic device ceases (518) display of the first portion of the item of content, and displays (524) the first portion of the embedded content item at a second resolution that is greater than the first resolution. In some embodiments, the height of the first portion of the embedded content item at the second resolution equals the display height. An example is shown in FIGS. 4B and 4C, where the gesture 406 is detected on the embedded content item 402 (FIG. 4B). In response, the client device 104-1 ceases display of the item of content 400, and a first portion 402-1 of the embedded content item is displayed (FIG. 4C) at a larger resolution than that of the displayed embedded content item 402 (FIG. 4B), such that the embedded content item is effectively shown in a zoomed view.
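Continuing the sketch above (same assumed Size type), the second resolution of operations 518-524 scales the item so the displayed portion's height equals the display height, with the result that only a horizontal slice of the item fits on screen; this is an illustrative sketch, not a required implementation.

```kotlin
// Hypothetical sketch of the second resolution: the displayed portion's height
// equals the display height, so the item's width overflows the display width.
fun secondResolutionScale(item: Size, display: Size): Float =
    display.height / item.height

// Width of the slice of the item (in item pixels) visible at the second resolution.
fun visibleItemWidth(item: Size, display: Size): Float =
    display.width / secondResolutionScale(item, display)
```

For the assumed 1200 x 800 item and 360 x 640 display above, the second-resolution scale is 0.8, and only a 450-pixel-wide slice of the 1200-pixel-wide item is visible at once, corresponding to the first portion 402-1 of FIG. 4C.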
In some embodiments, ceasing (518) display of the first portion of the item of content includes (520) decreasing an amount of the first portion of the item of content being displayed until the first portion of the item of content is no longer displayed. Decreasing the amount of the first portion of the item of content being displayed may include displaying various visual effects. For example, when transitioning from the GUI of FIG. 4B to the GUI of FIG. 4C in response to detecting the first user input, the displayed portions of the item of content 400 (FIG. 4B) outside of the embedded content item may appear as if they are being gradually shrunk while the resolution of the embedded content item 402 proportionally increases. Alternatively, the displayed portions may appear as if being visually pushed off the visible boundaries of the display area (i.e., off the edges of the display 312). In yet another embodiment, the displayed portions appear stationary, as the displayed embedded content item 402 visually expands to the second resolution and “covers” the displayed portions (i.e., the displayed portions are effectively “beneath” or “behind” the embedded content item 402).
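One way to realize the first visual effect described above (the surrounding content shrinking while the item's resolution grows) is a simple interpolation. The following Kotlin sketch is purely illustrative, and the linear easing is an assumption.

```kotlin
// Hypothetical sketch of one transition frame: as progress t goes from 0 to 1,
// the item's scale grows from the first to the second resolution while the
// fraction of surrounding content still shown shrinks to zero.
fun transitionFrame(scale1: Float, scale2: Float, progress: Float): Pair<Float, Float> {
    val t = progress.coerceIn(0f, 1f)
    val itemScale = scale1 + (scale2 - scale1) * t
    val contextVisibleFraction = 1f - t
    return itemScale to contextVisibleFraction
}
```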
In some embodiments, before displaying (524) the first portion of the embedded content item at the second resolution, the resolution of the first portion of the embedded content item being displayed is increased (522) until the first portion of the embedded content item is displayed at the second resolution. The resolution of the first portion of the embedded content item is increased while decreasing the amount of the first portion of the item of content being displayed, and while decreasing a percentage of the embedded content item being displayed. For example, the first portion 402-1 of the embedded content item displayed in FIG. 4C represents a smaller percentage of the embedded content item 402 than the entire embedded content item 402 displayed in FIG. 4B.
In some embodiments, displaying (524) the first portion of the embedded content item at the second resolution includes (526) ceasing display of the second portion of the embedded content item and the third portion of the embedded content item. For example, in FIG. 4C, when displaying the first portion 402-1 of the embedded content item, the adjacent portions (a second portion to the left and a third portion to the right of the first portion 402-1, as illustrated in FIG. 4C) are no longer displayed.
In some embodiments, a user input is detected (528) in a first direction. For example, the user input includes a rotational tilt (530) of the electronic device in the first direction. The rotational tilt may include a turning of the electronic device in a direction (e.g., clockwise or counter-clockwise) with respect to an axis (e.g., an axis that bisects the display, such as an axis of a horizontal plane). For example, FIGS. 4C-4E illustrate views of the client device 104-1 from the bottom of the device seen at eye level (i.e., cross-sectional views). With reference to the orientation of the client device 104-1 in FIG. 4C (no tilt, parallel to the horizontal plane), the tilt gesture 408-1 (FIG. 4D) is a rotational tilt in a counter-clockwise direction.
In response to detecting (528) the user input in the first direction, the electronic device ceases (532) display of at least a part of the first portion of the embedded content item and displays at least a part of the second portion of the embedded content item. FIGS. 4C and 4D illustrate an example. In response to detecting the tilt gesture 408-1 (FIG. 4D), the client device 104-1 transitions from displaying the first portion 402-1 of the embedded content item (FIG. 4C) to displaying the second portion 402-2 of the embedded content item (FIG. 4D). As shown in FIG. 4D, the second portion 402-2 of the embedded content item includes part of the first portion 402-1 (shown in FIG. 4C), while the remaining part of the first portion 402-1 is no longer displayed. The user input in the first direction (e.g., tilt gesture 408-1, FIG. 4D) therefore allows a user to manipulate and interact with a displayed view of the embedded content item. In this example, the user is able to view portions of a picture that are not within the display area in operation 524 (i.e., portions that are no longer visible after enlarging the resolution of the embedded content item 402).
In some embodiments, ceasing (532) display of at least a part of the first portion of the embedded content item and displaying at least a part of the second portion of the embedded content item includes (534) decreasing an amount of the first portion of the embedded content item being displayed. Furthermore, while decreasing (534) the amount of the first portion of the embedded content item being displayed, an amount of the second portion of the embedded content item being displayed is increased (536). For example, in response to detecting the tilt gesture 408-1 in FIG. 4D (i.e., transitioning from the GUI of FIG. 4C to that of FIG. 4D), the amount of the first portion 402-1 of the embedded content item being displayed is decreased, while the amount of the second portion 402-2 of the embedded content item being displayed is increased. Translation within the embedded content item from the first portion to the second portion is thus achieved in accordance with some embodiments.
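A hypothetical Kotlin sketch of this translation is shown below. In practice the tilt angle would come from the device's gyroscope or accelerometer (e.g., sensors 323, FIG. 3); here it is simply a parameter, and the sensitivity constant is an assumption.

```kotlin
// Hypothetical sketch of tilt-driven panning at the second resolution: a tilt
// in one direction shifts the visible slice toward the second portion, hiding
// part of the first portion while revealing part of the second.
fun panOffsetForTilt(
    currentOffsetX: Float,  // left edge of the visible slice, in item pixels
    tiltRadians: Float,     // signed tilt angle; the sign encodes the direction
    sensitivity: Float,     // item pixels of pan per radian of tilt (assumed)
    itemWidth: Float,
    visibleWidth: Float
): Float {
    val proposed = currentOffsetX + tiltRadians * sensitivity
    // Clamp so the slice never pans past the item's left or right edge.
    return proposed.coerceIn(0f, itemWidth - visibleWidth)
}
```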
Referring now to FIG. 5C, in some embodiments, a user input is detected (538) in a second direction opposite to the first direction (in 528). In some embodiments, the user input includes (540) a rotational tilt of the electronic device in the second direction. For example, a tilt gesture 408-2 is detected in FIG. 4E, which is a rotational tilt in a clockwise direction (opposite to the direction of the tilt gesture 408-1, FIG. 4D). In response to detecting (538) the user input in the second direction, the electronic device ceases (542) display of at least the part of the second portion of the embedded content item and displays at least a part of the third portion of the embedded content item. (If operations 528 and 532 are omitted from the method 500, then display of at least part of the first portion of the embedded content item ceases and at least a part of the third portion of the embedded content item is displayed.) In FIG. 4E, for example, in response to detecting the tilt gesture 408-2, the client device 104-1 transitions from displaying the second portion 402-2 of the embedded content item (FIG. 4D) to displaying the third portion 402-3 of the embedded content item (FIG. 4E). In the example of FIG. 4E, the third portion 402-3 of the embedded content item includes part of the first portion 402-1 (shown in FIG. 4C). Alternatively, the first and third portions do not overlap. Thus, in some embodiments, the part of the first portion of the embedded content item that is no longer displayed while displaying the second portion of the embedded content item is displayed while displaying the third portion of the embedded content item.
In some embodiments, the height of the embedded content item at the second resolution would exceed the display height of the display area. Thus, in some embodiments, the electronic device ceases displaying portions above and/or below the first portion (e.g., top and/or bottom portions of the embedded content item), along with a second portion (e.g., adjacent and to the left of the first portion) and a third portion (e.g., adjacent and to the right of the first portion) of the embedded content item. In these embodiments, in response to detecting a user input in a first direction (e.g., clockwise), at least part of the second portion of the embedded content item is displayed, and in response to detecting a user input in a second direction opposite to the first direction (e.g., counter-clockwise), at least part of the third portion of the embedded content item is displayed. In some embodiments, in response to detecting a user input in a third direction distinct from the first and second directions (e.g., substantially perpendicular to the first and second directions), the electronic device displays at least some of the top or bottom portion that ceased being displayed. Continuing the example above, if a tilt gesture is detected with respect to an axis distinct from (e.g., substantially perpendicular to) the first and second directions (e.g., with reference to the display as viewed by a user holding a device, a side-to-side axis, rather than a top-to-bottom axis), a top or bottom portion of the embedded content item is displayed.
In some embodiments, the amount of a respective portion of the embedded content item being displayed in response to detecting a user input (e.g., a rotational tilt) is proportional to the magnitude of the user input. The magnitude of a rotational tilt, for example, corresponds to the angle of the rotational tilt with respect to a predefined axis (e.g., longitudinal/latitudinal axes of a planar surface of the client device 104-1, such as axes that bisect the display). As an example, referring to FIG. 4D, the amount of the second portion 402-2 displayed in response to detecting the tilt gesture 408-1 that forms a first angle (e.g., a 15-degree angle) with the horizontal axis is less than the amount of the second portion 402-2 that would be displayed in response to detecting a tilt gesture in the same direction that forms a second, larger angle (e.g., a 45-degree angle).
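This proportionality can be stated concretely with a short sketch; the maximum-tilt and maximum-reveal constants below are assumptions chosen for illustration only.

```kotlin
import kotlin.math.abs

// Hypothetical sketch: the amount of the adjacent portion revealed grows
// linearly with the tilt angle, so a 45-degree tilt reveals three times as
// much of portion 402-2 as a 15-degree tilt in the same direction.
fun revealedAmountPx(
    tiltDegrees: Float,
    maxTiltDegrees: Float = 45f,  // assumed tilt at which the full amount is revealed
    maxRevealPx: Float = 200f     // assumed maximum reveal, in item pixels
): Float {
    val fraction = (abs(tiltDegrees) / maxTiltDegrees).coerceAtMost(1f)
    return fraction * maxRevealPx
}
```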
In some embodiments, the direction of the rotational tilt is with reference to one or more axes of a predefined plane (e.g., the plane of the display at the time the first user input is detected, but not substantially perpendicular to a plane defined by the direction of gravity). Axes based on a predefined plane may therefore allow a user to more naturally view or interact with embedded content without requiring the user to adjust his viewing angle or orient the client device to conform to arbitrarily defined axes.
In some embodiments, while displaying a portion of the embedded content item at the second resolution, a user input is detected (544). In some embodiments, the user input is (546) a swipe gesture (e.g., a substantially vertical swipe). Additionally and/or alternatively, the user input may be a tap gesture (e.g., single tap). In response to detecting (544) the user input, the electronic device transitions (548) from display of the first portion of the embedded content item at the second resolution, to simultaneous display of the embedded content item and a respective portion of the item of content. For example, a swipe gesture 404-2 (FIG. 4F) in a substantially vertical direction is detected while displaying the first portion 402-1 of the embedded content item. In response, the entire embedded content item 402 and a portion of the item of content 400 are simultaneously displayed.
In some embodiments, the respective portion of the item of content (548) is the first portion of the item of content (550). In other words, the electronic device reverts back to displaying the portion of the item of content at the resolution displayed prior to displaying the embedded content item at the second resolution. In other embodiments, the respective portion of the item of content (548) is a second portion of the item of content (552) distinct from the first portion of the item of content (e.g., more text is displayed below the embedded content item 402 in FIG. 4G than in FIG. 4B). In another example, in response to the swipe gesture 404-2 (FIG. 4F), the electronic device may smoothly transition back to displaying the embedded content item 402 at the prior resolution (i.e., gradually decrease the displayed resolution of the embedded content item 402 from the second resolution to the first resolution). Until the displayed embedded content item 402 returns to the first resolution, the portion of the item of content 400 being displayed is therefore different from the first portion displayed in FIG. 4B.
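In terms of the earlier transition sketch, the gradual reversion described above can be viewed as running that interpolation in reverse; this framing is again an illustrative assumption rather than required behavior.

```kotlin
// Hypothetical sketch: reverting to the first resolution replays the zoom-in
// transition backwards, so the displayed resolution gradually decreases from
// the second resolution to the first while the item of content reappears.
// Reuses transitionFrame from the sketch following operation 520 above.
fun revertFrame(scale1: Float, scale2: Float, progress: Float): Pair<Float, Float> =
    transitionFrame(scale1, scale2, 1f - progress)
```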
For situations in which the systems discussed above collect information about users, the users may be provided with an opportunity to opt in/out of programs or features that may collect personal information (e.g., information about a user's preferences or a user's contributions to social content providers). In addition, in some embodiments, certain data may be anonymized in one or more ways before it is stored or used, so that personally identifiable information is removed. For example, a user's identity may be anonymized so that the personally identifiable information cannot be determined for or associated with the user, and so that user preferences or user interactions are generalized (for example, generalized based on user demographics) rather than associated with a particular user.
Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen in order to best explain the principles underlying the claims and their practical applications, to thereby enable others skilled in the art to best use the embodiments with various modifications as are suited to the particular uses contemplated.