CN110851053A - Apparatus, method and graphical user interface for system-level behavior of 3D models - Google Patents

Apparatus, method and graphical user interface for system-level behavior of 3D models

Info

Publication number
CN110851053A
CN110851053A
Authority
CN
China
Prior art keywords
virtual object
cameras
view
representation
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911078900.7A
Other languages
Chinese (zh)
Inventor
P·洛科
J·R·达斯科拉
S·O·勒梅
J·M·弗科纳
D·阿迪
D·卢依
G·耶基斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from DKPA201870346A (external priority patent DK201870346A1)
Application filed by Apple Inc
Publication of CN110851053A
Legal status: Pending (Current)

Abstract

The invention provides an apparatus, method, and graphical user interface for system-level behavior of 3D models. A computer system having a display, a touch-sensitive surface, and one or more cameras displays a representation of a virtual object in a first user interface region on the display. A first input by a contact is detected at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display. In response to detecting the first input by the contact, in accordance with a determination that the first input by the contact satisfies first criteria, a second user interface region that includes a representation of the field of view of the one or more cameras is displayed on the display, which includes replacing display of at least a portion of the first user interface region, and the representation of the virtual object is continuously displayed while switching from displaying the first user interface region to displaying the second user interface region.

Description

Translated from Chinese
Apparatus, Method, and Graphical User Interface for System-Level Behavior of 3D Models

This application is a divisional application of the invention patent application filed on September 29, 2018, with application number 201811165504.3, entitled "Apparatus, Method and Graphical User Interface for System-Level Behavior of 3D Models".

Related Applications

This application is related to U.S. Provisional Application No. 62/621,529, filed January 24, 2018, which is incorporated herein by reference in its entirety.

Technical Field

The present invention relates generally to electronic devices that display virtual objects, including but not limited to electronic devices that display virtual objects in a variety of contexts.

Background

The development of computer systems for augmented reality has advanced significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices for computer systems and other electronic computing devices, such as touch-sensitive surfaces, are used to interact with virtual/augmented reality environments. Example touch-sensitive surfaces include touchpads, touch-sensitive remote controls, and touch-screen displays. Such surfaces are used to manipulate user interfaces on a display and objects within them. Example user interface objects include digital images, video, text, icons, and control elements such as buttons and other graphics.

But methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, using a sequence of inputs to orient and position a virtual object in an augmented reality environment is tedious, creates a significant cognitive burden on the user, and detracts from the experience of the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy. This latter consideration is particularly important in battery-operated devices.

Summary

Accordingly, there is a need for computer systems with improved methods and interfaces for interacting with virtual objects. Such methods and interfaces optionally complement or replace conventional methods for interacting with virtual objects. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.

The computer systems of the present disclosure reduce or eliminate the above deficiencies and other problems associated with interfaces for interacting with virtual objects (e.g., user interfaces for augmented reality (AR) and related non-AR interfaces). In some embodiments, the computer system includes a desktop computer. In some embodiments, the computer system is portable (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system includes a personal electronic device (e.g., a wearable electronic device, such as a watch). In some embodiments, the computer system has (and/or is in communication with) a touchpad. In some embodiments, the computer system has (and/or is in communication with) a touch-sensitive display (also known as a "touch screen" or "touch-screen display"). In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI in part through stylus and/or finger contacts and gestures on the touch-sensitive surface. In some embodiments, the functions optionally include game playing, image editing, drawing, presenting, word processing, spreadsheet making, telephone calling, video conferencing, e-mailing, instant messaging, workout support, digital photography, digital video recording, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.

In accordance with some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a representation of a virtual object in a first user interface region on the display. The method further includes, while displaying the representation of the virtual object in the first user interface region on the display, detecting a first input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display. The method further includes, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact satisfies first criteria: displaying a second user interface region on the display that includes a representation of the field of view of the one or more cameras, which includes replacing display of at least a portion of the first user interface region, and continuously displaying the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region.
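The switch described above can be sketched as a small state check. This is an illustrative reconstruction, not the patent's implementation: the function names and the concrete thresholds (a 0.5 s hold, a normalized force of 0.8) are assumptions chosen for the example, since the description leaves the first criteria open.

```python
# Hedged sketch of the "first input satisfies first criteria" check that
# triggers the switch from the 2D user interface region to the camera-backed
# (AR) region. All thresholds here are illustrative assumptions.

HOLD_DURATION_THRESHOLD = 0.5   # seconds (assumed)
INTENSITY_THRESHOLD = 0.8       # normalized contact force (assumed)

def first_criteria_met(hold_duration, peak_intensity):
    """Treat a press-and-hold OR a hard press as a request to enter AR."""
    return (hold_duration >= HOLD_DURATION_THRESHOLD
            or peak_intensity >= INTENSITY_THRESHOLD)

def handle_first_input(ui, hold_duration, peak_intensity):
    """Switch to the second UI region; the object stays visible throughout."""
    if first_criteria_met(hold_duration, peak_intensity):
        ui["region"] = "camera_view"   # second user interface region
        ui["object_visible"] = True    # continuously displayed during the switch
    return ui

ui = {"region": "2d_view", "object_visible": True}
ui = handle_first_input(ui, hold_duration=0.6, peak_intensity=0.2)
print(ui["region"])  # camera_view
```

An input that misses both thresholds leaves the first region displayed unchanged.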

In accordance with some embodiments, a method is performed at a computer system having a display, a touch-sensitive surface, and one or more cameras. The method includes displaying a first representation of a virtual object in a first user interface region on the display. The method further includes, while displaying the first representation of the virtual object in the first user interface region on the display, detecting a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display. The method further includes, in response to detecting the first input by the first contact, and in accordance with a determination that the input by the first contact satisfies first criteria, displaying a second representation of the virtual object in a second user interface region that is different from the first user interface region. The method further includes, while displaying the second representation of the virtual object in the second user interface region, detecting a second input, and, in response to detecting the second input: in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface region, changing a display property of the second representation of the virtual object within the second user interface region based on the second input; and, in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, displaying a third representation of the virtual object with a representation of the field of view of the one or more cameras.
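The two branches of the second-input handling above can be sketched as a dispatcher. This is a hedged illustration, not the patent's code: the gesture vocabulary ("rotate", "scale", "enter_ar") and the property arithmetic are invented for the example.

```python
# Illustrative dispatcher for the second input in the staging ("second") UI
# region: manipulation gestures change display properties of the object in
# place, while an explicit AR request switches to the camera view (where the
# third representation would be shown). Gesture names are assumptions.

def handle_second_input(state, gesture):
    if gesture["type"] == "rotate":        # manipulate: change display property
        state["rotation"] = (state["rotation"] + gesture["amount"]) % 360
    elif gesture["type"] == "scale":       # manipulate: change display property
        state["scale"] *= gesture["amount"]
    elif gesture["type"] == "enter_ar":    # request to display in AR
        state["region"] = "camera_view"
    return state

state = {"region": "staging", "rotation": 0, "scale": 1.0}
state = handle_second_input(state, {"type": "rotate", "amount": 45})
state = handle_second_input(state, {"type": "enter_ar"})
print(state["rotation"], state["region"])  # 45 camera_view
```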

In accordance with some embodiments, a method is performed at a computer system having a display and a touch-sensitive surface. The method includes, in response to a request to display a first user interface, displaying the first user interface with a representation of a first item. The method further includes, in accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, displaying the representation of the first item with a visual indication that the first item corresponds to a first respective virtual three-dimensional object, and, in accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, displaying the representation of the first item without the visual indication. The method further includes, after displaying the representation of the first item, receiving a request to display a second user interface that includes a second item, and, in response to that request, displaying the second user interface with a representation of the second item. The method further includes, in accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, displaying the representation of the second item with a visual indication that the second item corresponds to a second respective virtual three-dimensional object, and, in accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, displaying the representation of the second item without the visual indication.
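The per-item check above reduces to a simple conditional badge. A minimal sketch, assuming a text badge as the visual indication and a `model_3d` field marking items backed by a virtual three-dimensional object; both names are invented for illustration.

```python
# Sketch of the visual-indication rule: items that correspond to a virtual
# 3D object are displayed with an indication (here a "[3D]" text badge, an
# assumption); other items are displayed without it.

def item_representation(item):
    label = item["name"]
    if item.get("model_3d") is not None:   # corresponds to a virtual 3D object
        label += " [3D]"                   # the visual indication
    return label

items = [{"name": "Chair", "model_3d": "chair.usdz"},
         {"name": "Photo", "model_3d": None}]
print([item_representation(i) for i in items])  # ['Chair [3D]', 'Photo']
```

The same rule applies unchanged to items in any later user interface, which is the point of the "first item / second item" symmetry in the paragraph above.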

In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, and one or more cameras. The method includes receiving a request to display a virtual object in a first user interface region that includes at least a portion of the field of view of the one or more cameras. The method further includes, in response to the request to display the virtual object in the first user interface region, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, where the field of view of the one or more cameras is a view of the physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes: in accordance with a determination that object placement criteria are not met, where the object placement criteria require that a placement location for the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be met, displaying the representation of the virtual object with a first set of visual properties and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and, in accordance with a determination that the object placement criteria are met, displaying the representation of the virtual object with a second set of visual properties that is different from the first set of visual properties and with a second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras.
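The placement-criteria branch above can be sketched as a two-state appearance function. This is a hedged reconstruction: translucency as the "first set of visual properties" and a view-relative orientation for the pending state are plausible readings, not values the text fixes.

```python
# Sketch of the placement-criteria branch: until a plane is detected in the
# camera field of view, the object is shown with a pending appearance (here
# translucent, an assumption) and an orientation independent of the physical
# environment; once a plane is detected, it is shown opaque and oriented to
# that plane.

def object_appearance(detected_plane):
    if detected_plane is None:                   # placement criteria not met
        return {"opacity": 0.5,                  # first set of visual properties
                "orientation": "view_relative"}  # independent of the environment
    return {"opacity": 1.0,                      # second set of visual properties
            "orientation": detected_plane}       # corresponds to the detected plane

print(object_appearance(None))
print(object_appearance("floor"))
```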

In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more cameras, and one or more attitude sensors for detecting changes in attitude of the device that includes the one or more cameras. The method includes receiving a request to display an augmented reality view of a physical environment in a first user interface region that includes a representation of the field of view of the one or more cameras. The method further includes, in response to receiving the request, displaying the representation of the field of view of the one or more cameras, and, in accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met, displaying a calibration user interface object that is dynamically animated in accordance with movement of the one or more cameras in the physical environment, where displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude of the one or more cameras in the physical environment; and, in response to detecting the change in attitude, adjusting at least one display parameter of the calibration user interface object in accordance with the detected change in attitude of the one or more cameras in the physical environment. The method further includes, while displaying the calibration user interface object that moves on the display in accordance with detected changes in attitude of the one or more cameras in the physical environment, detecting that the calibration criteria are met, and, in response to detecting that the calibration criteria are met, ceasing to display the calibration user interface object.
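A minimal sketch of that calibration flow, assuming the calibration criteria amount to a fixed quantity of accumulated device motion and that the adjusted display parameter is a rotation angle; both are invented for the example (the text does not specify either).

```python
# Sketch of the calibration flow: while calibration criteria are unmet, a
# calibration UI object is shown and one display parameter (here an angle)
# is driven by detected camera-pose changes; once enough movement has been
# observed, the object stops being displayed. Criteria and gain are assumed.

CALIBRATION_MOTION_NEEDED = 10.0   # accumulated pose change (assumed units)

class CalibrationUI:
    def __init__(self):
        self.visible = True
        self.angle = 0.0       # display parameter animated by device motion
        self.accumulated = 0.0

    def on_pose_change(self, delta):
        if not self.visible:
            return
        self.accumulated += abs(delta)
        self.angle += delta * 5.0   # animate with the movement (assumed gain)
        if self.accumulated >= CALIBRATION_MOTION_NEEDED:
            self.visible = False    # calibration criteria met: stop displaying

cal = CalibrationUI()
for delta in [2.0, 3.0, -4.0, 2.0]:
    cal.on_pose_change(delta)
print(cal.visible)  # False: 11.0 units of motion accumulated
```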

In accordance with some embodiments, a method is performed at a computer system having a display generation component and one or more input devices that include a touch-sensitive surface. The method includes displaying, via the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region. The method further includes, while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, detecting a first input that corresponds to a request to rotate the virtual three-dimensional object relative to the display so as to display a portion of the virtual three-dimensional object that is not visible from the first perspective. The method further includes, in response to detecting the first input: in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis, rotating the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input and that is constrained by a movement limit restricting rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation; and, in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis, rotating the virtual three-dimensional object relative to the second axis by an amount determined based on the magnitude of the first input, where, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis beyond the threshold amount of rotation.
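The asymmetry between the two axes can be sketched in a few lines. A hedged illustration: the 30-degree limit, the input-to-degrees mapping, and the axis names ("tilt"/"spin") are assumptions; the text only requires that rotation about the first axis be clamped while rotation about the second axis is not.

```python
# Sketch of the axis-dependent rotation constraint: rotation about the first
# axis is clamped to a movement limit, while rotation about the second axis
# is unconstrained. Limit and gain are illustrative assumptions.

TILT_LIMIT = 30.0   # threshold amount of rotation about the first axis (assumed)

def rotate(obj, axis, input_magnitude):
    degrees = input_magnitude * 0.5   # amount determined by input magnitude
    if axis == "first":
        obj["tilt"] = max(-TILT_LIMIT, min(TILT_LIMIT, obj["tilt"] + degrees))
    else:                             # second axis: no movement limit
        obj["spin"] = (obj["spin"] + degrees) % 360
    return obj

obj = {"tilt": 0.0, "spin": 0.0}
rotate(obj, "first", 200.0)   # would be 100 degrees, clamped to 30
rotate(obj, "second", 200.0)  # full 100 degrees applied
print(obj)  # {'tilt': 30.0, 'spin': 100.0}
```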

In accordance with some embodiments, a method is performed at a computer system having a display generation component and a touch-sensitive surface. The method includes displaying, via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors, including a first object manipulation behavior performed in response to inputs that meet first gesture recognition criteria and a second object manipulation behavior performed in response to inputs that meet second gesture recognition criteria. The method further includes, while displaying the first user interface region, detecting a first portion of an input directed to the user interface object, which includes detecting movement of one or more contacts across the touch-sensitive surface, and, while the one or more contacts are detected on the touch-sensitive surface, evaluating the movement of the one or more contacts against both the first gesture recognition criteria and the second gesture recognition criteria. The method further includes, in response to detecting the first portion of the input, updating an appearance of the user interface object based on the first portion of the input, which includes: in accordance with a determination that the first portion of the input meets the first gesture recognition criteria before meeting the second gesture recognition criteria, changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the first portion of the input, and updating the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria; and, in accordance with a determination that the first portion of the input meets the second gesture recognition criteria before meeting the first gesture recognition criteria, changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the first portion of the input, and updating the first gesture recognition criteria by increasing a threshold of the first gesture recognition criteria.
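The mutual threshold-raising described above can be sketched with two recognizers. A hedged illustration: the gesture names, the initial thresholds, and the doubling factor are invented; the text only requires that whichever criteria are met first cause the other criteria's threshold to increase.

```python
# Sketch of mutual threshold-raising between two gesture recognizers: the
# gesture whose movement threshold is crossed first wins, and the other
# recognizer's threshold is then raised, so a larger later movement is needed
# before the second behavior can also begin. All values are assumptions.

class ManipulationRecognizers:
    def __init__(self):
        self.thresholds = {"pan": 10.0, "rotate": 12.0}  # assumed initial values
        self.active = set()

    def feed(self, movement):
        """movement: accumulated magnitude per gesture kind."""
        for kind in ("pan", "rotate"):
            if kind in self.active:
                continue
            if movement.get(kind, 0.0) >= self.thresholds[kind]:
                self.active.add(kind)
                other = "rotate" if kind == "pan" else "pan"
                if other not in self.active:
                    self.thresholds[other] *= 2.0  # raise the other threshold
        return self.active

r = ManipulationRecognizers()
r.feed({"pan": 11.0, "rotate": 5.0})     # pan recognized; rotate now needs 24.0
print(r.active, r.thresholds["rotate"])  # {'pan'} 24.0
```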

In accordance with some embodiments, a method is performed at a computer system having a display generation component, one or more input devices, one or more audio output generators, and one or more cameras. The method includes displaying, via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of the field of view of the one or more cameras, where the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras. The method further includes detecting movement of the device that adjusts the field of view of the one or more cameras. The method further includes, in response to detecting the movement of the device: while the field of view of the one or more cameras is adjusted, adjusting the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship between the virtual object and the plane detected within the field of view of the one or more cameras, and, in accordance with a determination that the movement of the device causes the virtual object to move outside of a displayed portion of the field of view of the one or more cameras by more than a threshold amount, generating a first audio alert via the one or more audio output generators.
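A minimal sketch of the out-of-view alert, with the geometry reduced to one dimension for illustration; the threshold value, the 1D model, and the alert messages are all assumptions, not details from the description.

```python
# Sketch of the out-of-view audio alert: because the object keeps its spatial
# relationship to the detected plane while the device moves, it can end up
# outside the displayed camera view; once it is off-screen by more than a
# threshold, an audio alert fires. 1D stand-in geometry, assumed values.

OFFSCREEN_THRESHOLD = 20.0   # assumed off-screen distance triggering the alert

def check_object_visibility(object_x, view_left, view_right, play_alert):
    """The displayed view spans [view_left, view_right] in world units."""
    if object_x < view_left - OFFSCREEN_THRESHOLD:
        play_alert("object moved off-screen to the left")
    elif object_x > view_right + OFFSCREEN_THRESHOLD:
        play_alert("object moved off-screen to the right")

alerts = []
check_object_visibility(5.0, 0.0, 100.0, alerts.append)    # on screen: no alert
check_object_visibility(130.0, 0.0, 100.0, alerts.append)  # 30 units past edge
print(alerts)  # ['object moved off-screen to the right']
```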

In accordance with some embodiments, an electronic device includes a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more attitude sensors for detecting changes in attitude, one or more processors, and memory storing one or more programs; the one or more programs are configured to be executed by the one or more processors, and the one or more programs include instructions for performing, or causing performance of, the operations of any of the methods described herein. In accordance with some embodiments, a computer-readable storage medium has instructions stored therein that, when executed by an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors, cause the device to perform, or cause to be performed, the operations of any of the methods described herein. In accordance with some embodiments, a graphical user interface on an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, optionally one or more attitude sensors, memory, and one or more processors for executing one or more programs stored in the memory includes one or more of the elements displayed in any of the methods described herein, which are updated in response to inputs, as described in any of the methods described herein. In accordance with some embodiments, an electronic device includes: a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors for detecting changes in attitude; and means for performing, or causing performance of, the operations of any of the methods described herein. In accordance with some embodiments, an information processing apparatus for use in an electronic device with a display generation component, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors for detecting changes in attitude includes means for performing, or causing performance of, the operations of any of the methods described herein.

Thus, electronic devices with display generation components, optionally one or more input devices, optionally one or more touch-sensitive surfaces, optionally one or more cameras, optionally one or more sensors for detecting intensities of contacts with the touch-sensitive surface, optionally one or more audio output generators, optionally one or more device orientation sensors, optionally one or more tactile output generators, and optionally one or more attitude sensors are provided with improved methods and interfaces for displaying virtual objects in a variety of contexts, thereby increasing the effectiveness, efficiency, and user satisfaction with such devices. Such methods and interfaces may complement or replace conventional methods for displaying virtual objects in a variety of contexts.

Brief Description of the Drawings

For a better understanding of the various described embodiments, reference should be made to the Detailed Description below, in conjunction with the following drawings, in which like reference numerals refer to corresponding parts throughout the figures.

图1A是示出根据一些实施方案的具有触敏显示器的便携式多功能设备的框图。1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.

Figure 1B is a block diagram illustrating example components for event handling, in accordance with some embodiments.

Figure 1C is a block diagram illustrating a tactile output module, in accordance with some embodiments.

Figure 2 illustrates a portable multifunction device having a touch screen, in accordance with some embodiments.

Figure 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface, in accordance with some embodiments.

Figure 4A illustrates an example user interface for a menu of applications on a portable multifunction device, in accordance with some embodiments.

Figure 4B illustrates an example user interface for a multifunction device with a touch-sensitive surface that is separate from the display, in accordance with some embodiments.

Figures 4C-4E illustrate examples of dynamic intensity thresholds, in accordance with some embodiments.

Figures 4F-4K illustrate a set of sample tactile output patterns, in accordance with some embodiments.

Figures 5A-5AT illustrate example user interfaces for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments.

Figures 6A-6AJ illustrate example user interfaces for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras, in accordance with some embodiments.

Figures 7A-7E, 7F1-7F2, 7G1-7G2, and 7H-7P illustrate example user interfaces for displaying items with visual indications that the items correspond to virtual three-dimensional objects, in accordance with some embodiments.

Figures 8A-8E are flow diagrams of a process for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments.

Figures 9A-9D are flow diagrams of a process for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras, in accordance with some embodiments.

Figures 10A-10D are flow diagrams of a process for displaying an item with a visual indication that the item corresponds to a virtual three-dimensional object, in accordance with some embodiments.

Figures 11A-11V illustrate example user interfaces for displaying virtual objects with different visual properties depending on whether object placement criteria are met, in accordance with some embodiments.

Figures 12A-12D, 12E-1, 12E-2, 12F-1, 12F-2, 12G-1, 12G-2, 12H-1, 12H-2, 12I-1, 12I-2, 12J, 12K-1, 12K-2, 12L-1, and 12L-2 illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device, in accordance with some embodiments.

Figures 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis, in accordance with some embodiments.

Figures 14A-14Z illustrate example user interfaces for increasing a second threshold amount of movement required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold amount of movement, in accordance with some embodiments.

Figures 14AA-14AD are flow diagrams illustrating operations for increasing a second threshold amount of movement required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold amount of movement, in accordance with some embodiments.

Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of the device has caused a virtual object to move outside the displayed field of view of one or more device cameras, in accordance with some embodiments.

Figures 16A-16G are flow diagrams of a process for displaying virtual objects with different visual properties depending on whether object placement criteria are met, in accordance with some embodiments.

Figures 17A-17D are flow diagrams of a process for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device, in accordance with some embodiments.

Figures 18A-18I are flow diagrams of a process for constraining rotation of a virtual object about an axis, in accordance with some embodiments.

Figures 19A-19H are flow diagrams of a process for increasing a second threshold amount of movement required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold amount of movement, in accordance with some embodiments.

Figures 20A-20F are flow diagrams of a process for generating an audio alert in accordance with a determination that movement of the device has caused a virtual object to move outside the displayed field of view of one or more device cameras, in accordance with some embodiments.

Detailed Description

A virtual object is a graphical representation of a three-dimensional object in a virtual environment. Conventional methods of interacting with virtual objects, to transition the virtual objects from being displayed in the context of an application user interface (e.g., a two-dimensional application user interface that does not display an augmented reality environment) to being displayed in the context of an augmented reality environment (e.g., an environment in which a view of the physical world is augmented with supplemental information that provides the user with additional information not available in the physical world), often require multiple separate inputs (e.g., a series of gestures and button presses, etc.) to achieve an intended outcome (e.g., adjusting the size, position, and/or orientation of a virtual object to achieve a realistic or desired appearance in the augmented reality environment). In addition, conventional input methods often involve a delay between receiving a request to display an augmented reality environment and displaying the augmented reality environment, caused by the time required to activate one or more device cameras to capture a view of the physical world, and/or to analyze and characterize the captured view of the physical world as it relates to virtual objects that may be placed in the augmented reality environment (e.g., to detect planes and/or surfaces in the captured view of the physical world). Embodiments herein provide intuitive ways for a user to display virtual objects in various contexts and/or to interact with virtual objects (e.g., by allowing the user to provide an input that switches from displaying a virtual object in the context of an application user interface to displaying the virtual object in an augmented reality environment; by allowing the user to change display properties of a virtual object before the virtual object is displayed in the augmented reality environment (e.g., in a three-dimensional staging environment); by providing an indication that allows the user to easily identify system-level virtual objects across multiple applications; by changing the visual properties of an object while placement information for the object is being determined; by providing an animated calibration user interface object that indicates the device movement needed for calibration; by constraining rotation of a displayed virtual object about an axis; by increasing the threshold amount of movement for a second object manipulation behavior once the threshold amount of movement for a first object manipulation behavior has been met; and by providing an audio alert indicating that a virtual object has moved outside the displayed field of view).
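As an illustration of one of these behaviors — raising the movement threshold for a second manipulation behavior once a first manipulation behavior's threshold has been met — the gating logic can be sketched roughly as follows. This is a hypothetical sketch, not the claimed implementation; the behavior names, threshold values, and multiplier are illustrative placeholders.

```python
# Hypothetical sketch: once one manipulation behavior (e.g., "translate")
# has met its movement threshold, the threshold for a competing behavior
# (e.g., "rotate") is raised so the gesture does not flip-flop between them.

BASE_THRESHOLDS = {"translate": 10.0, "rotate": 12.0}  # placeholder units
RAISED_FACTOR = 2.0  # placeholder multiplier for non-active behaviors

class ManipulationGate:
    def __init__(self):
        self.active = None  # behavior whose threshold has already been met

    def threshold_for(self, behavior):
        base = BASE_THRESHOLDS[behavior]
        # Raise the bar for any behavior other than the one already active.
        if self.active is not None and behavior != self.active:
            return base * RAISED_FACTOR
        return base

    def feed(self, behavior, accumulated_movement):
        """Return True once `behavior` has met its (possibly raised) threshold."""
        if accumulated_movement < self.threshold_for(behavior):
            return False
        if self.active is None:
            self.active = behavior
        return True
```

With this gating, a translation gesture that has already begun must be accompanied by a substantially larger rotational movement before rotation also takes effect, which matches the intent described above.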

The systems, methods, and GUIs described herein improve user interface interactions with virtual/augmented reality environments in multiple ways. For example, they make it easier to display a virtual object in an augmented reality environment, and to adjust the appearance of a virtual object displayed in an augmented reality environment in response to different inputs.

Below, Figures 1A-1C, 2, and 3 provide a description of example devices. Figures 4A-4B, 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI illustrate example user interfaces for displaying virtual objects in various contexts. Figures 8A-8E illustrate a process for displaying a representation of a virtual object when switching from displaying a first user interface region to displaying a second user interface region. Figures 9A-9D illustrate a process for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras. Figures 10A-10D illustrate a process for displaying an item with a visual indication that the item corresponds to a virtual three-dimensional object. Figures 16A-16G illustrate a process for displaying virtual objects with different visual properties depending on whether object placement criteria are met. Figures 17A-17D illustrate a process for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. Figures 18A-18I illustrate a process for constraining rotation of a virtual object about an axis. Figures 14AA-14AD and 19A-19H illustrate a process for increasing a second threshold amount of movement required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has met a first threshold amount of movement. Figures 20A-20F illustrate a process for generating an audio alert in accordance with a determination that movement of the device has caused a virtual object to move outside the displayed field of view of one or more device cameras. The user interfaces in Figures 5A-5AT, 6A-6AJ, 7A-7P, 11A-11V, 12A-12L, 13A-13M, 14A-14Z, and 15A-15AI are used to illustrate the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F.

Example Devices

Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. The following detailed description sets forth numerous specific details in order to provide a thorough understanding of the various described embodiments. However, it will be apparent to one of ordinary skill in the art that the various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

It will also be understood that, although the terms "first," "second," etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.

The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As used herein, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined..." or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining..." or "in response to determining..." or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.

Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described herein. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as PDA and/or music player functions. Example embodiments of portable multifunction devices include, without limitation, the [trademarked device names shown as images in the original] devices from Apple Inc. (Cupertino, California). Other portable electronic devices, such as laptops or tablet computers with touch-sensitive surfaces (e.g., touch-screen displays and/or touchpads), are optionally used. It should also be understood that, in some embodiments, the device is not a portable communications device, but is a desktop computer with a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).

In the discussion that follows, an electronic device that includes a display and a touch-sensitive surface is described. It should be understood, however, that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, a mouse, and/or a joystick.

The device typically supports a variety of applications, such as one or more of the following: a note taking application, a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.

The various applications that are executed on the device optionally use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface, as well as corresponding information displayed on the device, are, optionally, adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture of the device (such as the touch-sensitive surface) optionally supports the variety of applications with user interfaces that are intuitive and clear to the user.

Attention is now directed toward embodiments of portable devices with touch-sensitive displays. Figure 1A is a block diagram illustrating portable multifunction device 100 with touch-sensitive display system 112, in accordance with some embodiments. Touch-sensitive display system 112 is sometimes called a "touch screen" for convenience, and is sometimes simply called a touch-sensitive display. Device 100 includes memory 102 (which optionally includes one or more computer-readable storage media), memory controller 122, one or more processing units (CPUs) 120, peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, input/output (I/O) subsystem 106, other input or control devices 116, and external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting intensities of contacts on device 100 (e.g., a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.

It should be appreciated that device 100 is only one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of the components. The various components shown in Figure 1A are implemented in hardware, software, firmware, or a combination thereof, including one or more signal processing circuits and/or application specific integrated circuits.

Memory 102 optionally includes high-speed random access memory and optionally also includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as the one or more CPUs 120 and peripherals interface 118, is, optionally, controlled by memory controller 122.

Peripherals interface 118 can be used to couple the input and output peripherals of the device to memory 102 and the one or more CPUs 120. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in memory 102 to perform various functions of device 100 and to process data.

In some embodiments, peripherals interface 118, the one or more CPUs 120, and memory controller 122 are, optionally, implemented on a single chip, such as chip 104. In some other embodiments, they are, optionally, implemented on separate chips.

RF (radio frequency) circuitry 108 receives and sends RF signals, also called electromagnetic signals. RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communications networks and other communications devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates by wireless communication with networks, such as the Internet (also referred to as the World Wide Web (WWW)), an intranet, and/or a wireless network (such as a cellular telephone network, a wireless local area network (LAN), and/or a metropolitan area network (MAN)), and with other devices. The wireless communication optionally uses any of a plurality of communications standards, protocols, and technologies, including but not limited to Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), high-speed uplink packet access (HSUPA), Evolution, Data-Only (EV-DO), HSPA, HSPA+, Dual-Cell HSPA (DC-HSPDA), long term evolution (LTE), near field communication (NFC), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11ac, IEEE 802.11ax, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), voice over Internet Protocol (VoIP), Wi-MAX, a protocol for e-mail (e.g., Internet message access protocol (IMAP) and/or post office protocol (POP)), instant messaging (e.g., extensible messaging and presence protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communications protocol, including communications protocols not yet developed as of the filing date of this document.

Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. Audio circuitry 110 receives audio data from peripherals interface 118, converts the audio data to an electrical signal, and transmits the electrical signal to speaker 111. Speaker 111 converts the electrical signal to human-audible sound waves. Audio circuitry 110 also receives electrical signals converted by microphone 113 from sound waves. Audio circuitry 110 converts the electrical signals to audio data and transmits the audio data to peripherals interface 118 for processing. Audio data is, optionally, retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripherals interface 118. In some embodiments, audio circuitry 110 also includes a headset jack (e.g., 212, Figure 2). The headset jack provides an interface between audio circuitry 110 and removable audio input/output peripherals, such as output-only headphones or a headset with both output (e.g., a headphone for one or both ears) and input (e.g., a microphone).

I/O subsystem 106 couples input/output peripherals on device 100, such as touch-sensitive display system 112 and other input or control devices 116, with peripherals interface 118. I/O subsystem 106 optionally includes display controller 156, optical sensor controller 158, intensity sensor controller 159, haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/send electrical signals from/to other input or control devices 116. The other input or control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slider switches, joysticks, click wheels, and so forth. In some alternative embodiments, the one or more input controllers 160 are, optionally, coupled with any (or none) of the following: a keyboard, an infrared port, a USB port, a stylus, and/or a pointer device such as a mouse. The one or more buttons (e.g., 208, Figure 2) optionally include an up/down button for volume control of speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206, Figure 2).

Touch-sensitive display system 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives and/or sends electrical signals from/to touch-sensitive display system 112. Touch-sensitive display system 112 displays visual output to the user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively termed "graphics"). In some embodiments, some or all of the visual output corresponds to user interface objects. As used herein, the term "affordance" refers to a user-interactive graphical user interface object (e.g., a graphical user interface object that is configured to respond to inputs directed toward the graphical user interface object). Examples of user-interactive graphical user interface objects include, without limitation, a button, slider, icon, selectable menu item, switch, hyperlink, or other user interface control.

Touch-sensitive display system 112 has a touch-sensitive surface, sensor, or set of sensors that accepts input from the user based on haptic and/or tactile contact. Touch-sensitive display system 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch-sensitive display system 112, and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) that are displayed on touch-sensitive display system 112. In some embodiments, a point of contact between touch-sensitive display system 112 and the user corresponds to a finger of the user or a stylus.
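At its core, converting a detected contact into interaction with a displayed user interface object amounts to hit-testing the contact point against the on-screen bounds of each object. The following is a minimal, hypothetical sketch of that idea; the object names, coordinate convention, and front-to-back search order are illustrative assumptions, not the device's actual event-handling pipeline.

```python
# Hypothetical hit-testing sketch: map a contact point to the topmost
# user interface object whose on-screen bounds contain it.

from dataclasses import dataclass

@dataclass
class UIObject:
    name: str
    x: float       # left edge
    y: float       # top edge
    width: float
    height: float

    def contains(self, px, py):
        """True if point (px, py) falls within this object's bounds."""
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

def hit_test(objects, px, py):
    """`objects` is ordered back-to-front; return the front-most hit, or None."""
    for obj in reversed(objects):  # examine front-most objects first
        if obj.contains(px, py):
            return obj
    return None
```

A real event pipeline layers gesture recognition on top of this, but the same containment test decides which object a contact is routed to.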

Touch-sensitive display system 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch-sensitive display system 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch-sensitive display system 112. In some embodiments, projected mutual capacitance sensing technology is used, such as that found in the iPhone® and iPod touch® from Apple Inc. (Cupertino, California).

Touch-sensitive display system 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen video resolution is in excess of 400 dpi (e.g., 500 dpi, 800 dpi, or greater). The user optionally makes contact with touch-sensitive display system 112 using any suitable object or appendage, such as a stylus, a finger, and so forth. In some embodiments, the user interface is designed to work with finger-based contacts and gestures, which can be less precise than stylus-based input due to the larger area of contact of a finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command for performing the actions desired by the user.

In some embodiments, in addition to the touch screen, device 100 optionally includes a touchpad (not shown) for activating or deactivating particular functions. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike the touch screen, does not display visual output. The touchpad is optionally a touch-sensitive surface that is separate from touch-sensitive display system 112, or an extension of the touch-sensitive surface formed by the touch screen.

Device 100 also includes power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., battery, alternating current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in portable devices.

Device 100 optionally also includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to optical sensor controller 158 in I/O subsystem 106. Optical sensor(s) 164 optionally include charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. Optical sensor(s) 164 receive light from the environment, projected through one or more lenses, and convert the light to data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor(s) 164 optionally capture still images and/or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch-sensitive display system 112 on the front of the device, so that the touch screen is enabled for use as a viewfinder for still and/or video image acquisition. In some embodiments, another optical sensor is located on the front of the device so that an image of the user is obtained (e.g., for selfies, for videoconferencing while the user views the other video conference participants on the touch screen, etc.).

Device 100 optionally also includes one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to intensity sensor controller 159 in I/O subsystem 106. Contact intensity sensor(s) 165 optionally include one or more piezoresistive strain gauges, capacitive force sensors, electric force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors used to measure the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor(s) 165 receive contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.

Device 100 optionally also includes one or more proximity sensors 166. FIG. 1A shows proximity sensor 166 coupled to peripherals interface 118. Alternatively, proximity sensor 166 is coupled to input controller 160 in I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch-sensitive display system 112 when the multifunction device is placed near the user's ear (e.g., when the user is making a phone call).

Device 100 optionally also includes one or more tactile output generators 167. FIG. 1A shows a tactile output generator coupled to haptic feedback controller 161 in I/O subsystem 106. In some embodiments, tactile output generator(s) 167 include one or more electroacoustic devices, such as speakers or other audio components, and/or electromechanical devices that convert energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts electrical signals into tactile outputs on the device). Tactile output generator(s) 167 receive tactile feedback generation instructions from haptic feedback module 133 and generate tactile outputs on device 100 that are capable of being sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with, or proximate to, a touch-sensitive surface (e.g., touch-sensitive display system 112) and, optionally, generates a tactile output by moving the touch-sensitive surface vertically (e.g., in/out of a surface of device 100) or laterally (e.g., back and forth in the same plane as a surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch-sensitive display system 112, which is located on the front of device 100.

Device 100 optionally also includes one or more accelerometers 168. FIG. 1A shows accelerometer 168 coupled to peripherals interface 118. Alternatively, accelerometer 168 is, optionally, coupled to input controller 160 in I/O subsystem 106. In some embodiments, information is displayed on the touch-screen display in a portrait view or a landscape view based on an analysis of data received from the one or more accelerometers. Device 100 optionally includes, in addition to accelerometer(s) 168, a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) for obtaining information concerning the location and orientation (e.g., portrait or landscape) of device 100.
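As an illustration of the portrait/landscape determination described above, the sketch below chooses a view orientation from the gravity components reported by an accelerometer. This is a minimal, hypothetical helper (the function name and decision rule are assumptions, not taken from the patent); an actual implementation would typically filter samples and apply hysteresis to avoid flickering between views.

```python
def orientation_from_accelerometer(ax, ay):
    """Choose a display orientation from the gravity components along
    the device's x (short) and y (long) axes, in any consistent unit.

    Hypothetical simplification: gravity dominates the axis along
    which the device is being held.
    """
    if abs(ay) >= abs(ax):
        return "portrait"
    return "landscape"
```

For example, a device held upright reports most of gravity along its long axis, so `orientation_from_accelerometer(0.1, -0.9)` yields `"portrait"`.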

In some embodiments, the software components stored in memory 102 include operating system 126, communication module (or set of instructions) 128, contact/motion module (or set of instructions) 130, graphics module (or set of instructions) 132, haptic feedback module (or set of instructions) 133, text input module (or set of instructions) 134, Global Positioning System (GPS) module (or set of instructions) 135, and applications (or sets of instructions) 136. Furthermore, in some embodiments, memory 102 stores device/global internal state 157, as shown in FIGS. 1A and 3. Device/global internal state 157 includes one or more of: active application state, indicating which applications, if any, are currently active; display state, indicating what applications, views, or other information occupy various regions of touch-sensitive display system 112; sensor state, including information obtained from the device's various sensors and other input or control devices 116; and location and/or positional information concerning the device's location and/or attitude.

Operating system 126 (e.g., iOS, Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.

Communication module 128 facilitates communication with other devices over one or more external ports 124 and also includes various software components for handling data received by RF circuitry 108 and/or external port 124. External port 124 (e.g., Universal Serial Bus (USB), FIREWIRE, etc.) is adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used in some iPhone® and iPod® devices from Apple Inc. (Cupertino, California). In some embodiments, the external port is a Lightning connector that is the same as, or similar to and/or compatible with, the Lightning connector used in some iPhone® and iPod® devices from Apple Inc. (Cupertino, California).

Contact/motion module 130 optionally detects contact with touch-sensitive display system 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or physical click wheel). Contact/motion module 130 includes various software components for performing various operations related to detection of contact (e.g., by a finger or by a stylus), such as determining whether contact has occurred (e.g., detecting a finger-down event), determining an intensity of the contact (e.g., the force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether the contact has ceased (e.g., detecting a finger-up event or a break in contact). Contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to single contacts (e.g., one-finger contacts or stylus contacts) or to multiple simultaneous contacts (e.g., "multitouch"/multiple-finger contacts). In some embodiments, contact/motion module 130 and display controller 156 detect contact on a touchpad.
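The speed and velocity determination described above can be sketched as follows. This is a hypothetical simplification (the function name and the `(t, x, y)` sample format are assumptions); it estimates velocity and speed from the two most recent timestamped contact samples, where a real tracker would also smooth over sensor noise.

```python
def contact_motion(samples):
    """Estimate velocity (vx, vy) and speed of a contact point from a
    series of (t, x, y) samples, in the spirit of the contact/motion
    module's speed/velocity determination described above.
    """
    if len(samples) < 2:
        return (0.0, 0.0), 0.0
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = (t1 - t0) or 1e-6          # guard against identical timestamps
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    speed = (vx * vx + vy * vy) ** 0.5   # magnitude of the velocity
    return (vx, vy), speed
```

Acceleration could be estimated the same way, by differencing two successive velocity estimates.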

Contact/motion module 130 optionally detects a gesture input by a user. Different gestures on the touch-sensitive surface have different contact patterns (e.g., different motions, timings, and/or intensities of detected contacts). Thus, a gesture is optionally detected by detecting a particular contact pattern. For example, detecting a single-finger tap gesture includes detecting a finger-down event followed by detecting a finger-up (lift-off) event at the same position (or substantially the same position) as the finger-down event (e.g., at the position of an icon). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, followed by detecting one or more finger-dragging events, and subsequently followed by detecting a finger-up (lift-off) event. Similarly, tap, swipe, drag, and other gestures are optionally detected for a stylus by detecting a particular contact pattern for the stylus.
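The tap-versus-swipe contact patterns described above can be sketched as a simple classifier over an event sequence. The event format, function name, and `slop` tolerance are illustrative assumptions, not the patented recognizers: a tap ends at (substantially) the same position where it began, while a swipe ends after meaningful movement.

```python
def classify_gesture(events, slop=10.0):
    """Classify a finger-down ... finger-up event sequence as a "tap"
    or a "swipe" from its contact pattern. Events are (kind, x, y)
    tuples with kind in {"down", "move", "up"}; positions are in
    points and `slop` is the tolerated tap movement.
    """
    if not events or events[0][0] != "down" or events[-1][0] != "up":
        return None                      # not a complete gesture
    _, x0, y0 = events[0]
    _, x1, y1 = events[-1]
    moved = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    # Tap: lift-off at substantially the same position as touch-down.
    return "tap" if moved <= slop else "swipe"
```

A stylus gesture could be classified by the same pattern matching, with thresholds tuned for the narrower stylus contact.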

In some embodiments, detecting a finger tap gesture depends on the length of time between detecting the finger-down event and the finger-up event, but is independent of the intensity of the finger contact between the finger-down event and the finger-up event. In some embodiments, a tap gesture is detected in accordance with a determination that the length of time between the finger-down event and the finger-up event is less than a predetermined value (e.g., less than 0.1, 0.2, 0.3, 0.4, or 0.5 seconds), independent of whether the intensity of the finger contact during the tap meets a given intensity threshold (greater than a nominal contact-detection intensity threshold), such as a light press or deep press intensity threshold. Thus, a finger tap gesture can satisfy particular input criteria that do not require that the characteristic intensity of a contact satisfy a given intensity threshold in order for the particular input criteria to be met. For clarity, the finger contact in a tap gesture typically needs to satisfy a nominal contact-detection intensity threshold, below which the contact is not detected, in order for the finger-down event to be detected. A similar analysis applies to detecting a tap gesture by a stylus or other contact. In cases where the device is capable of detecting a finger or stylus contact hovering over a touch-sensitive surface, the nominal contact-detection intensity threshold optionally does not correspond to physical contact between the finger or stylus and the touch-sensitive surface.

The same concepts apply in an analogous manner to other types of gestures. For example, a swipe gesture, a pinch gesture, a depinch gesture, and/or a long press gesture are optionally detected based on the satisfaction of criteria that are either independent of the intensities of the contacts included in the gesture, or do not require that the contact(s) performing the gesture reach intensity thresholds in order to be recognized. For example, a swipe gesture is detected based on an amount of movement of one or more contacts; a pinch gesture is detected based on movement of two or more contacts towards each other; a depinch gesture is detected based on movement of two or more contacts away from each other; and a long press gesture is detected based on a duration of a contact on the touch-sensitive surface with less than a threshold amount of movement. As such, the statement that particular gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met means that the particular gesture recognition criteria are capable of being satisfied when the contact(s) in the gesture do not reach the respective intensity threshold, and are also capable of being satisfied in circumstances where one or more of the contacts in the gesture do reach or exceed the respective intensity threshold. In some embodiments, a tap gesture is detected based on a determination that the finger-down and finger-up events are detected within a predefined time period, without regard to whether the contact is above or below the respective intensity threshold during the predefined time period, and a swipe gesture is detected based on a determination that the contact movement is greater than a predefined magnitude, even if the contact is above the respective intensity threshold at the end of the contact movement. Even in implementations where the detection of a gesture is influenced by the intensity of the contacts performing the gesture (e.g., the device detects a long press more quickly when the intensity of the contact is above an intensity threshold, or delays detection of a tap input when the intensity of the contact is higher), the detection of those gestures does not require that the contacts reach a particular intensity threshold, so long as the criteria for recognizing the gesture can be met in circumstances where the contact does not reach the particular intensity threshold (e.g., even if the amount of time that it takes to recognize the gesture changes).
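A minimal sketch of the intensity-independent criteria discussed above: only timing and movement are consulted, and the contact's intensity is deliberately ignored. All names and threshold values are illustrative assumptions rather than the values used by any particular device.

```python
def recognize(duration, movement, intensity,
              tap_max_duration=0.3, swipe_min_movement=10.0):
    """Intensity-independent recognition of taps and swipes: a swipe
    is recognized from movement alone and a tap from timing alone,
    whether or not the contact exceeded any intensity threshold.
    Durations are in seconds, movement in points.
    """
    del intensity                       # deliberately unused
    if movement >= swipe_min_movement:
        return "swipe"                  # enough movement, regardless of force
    if duration < tap_max_duration:
        return "tap"                    # quick lift-off, regardless of force
    return None
```

Note that a high-intensity contact still yields `"tap"` or `"swipe"` here, matching the statement that such criteria can be satisfied whether or not the respective intensity threshold is reached.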

Contact intensity thresholds, duration thresholds, and movement thresholds are, in some circumstances, combined in a variety of different combinations in order to create heuristics for distinguishing two or more different gestures directed to the same input element or region, so that multiple different interactions with the same input element are enabled to provide a richer set of user interactions and responses. The statement that a particular set of gesture recognition criteria do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met does not preclude the concurrent evaluation of other intensity-dependent gesture recognition criteria to identify other gestures that have criteria that are met when a gesture includes a contact with an intensity above the respective intensity threshold. For example, in some circumstances, first gesture recognition criteria for a first gesture (which do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met) are in competition with second gesture recognition criteria for a second gesture (which are dependent on the contact(s) reaching the respective intensity threshold). In such competitions, the gesture is, optionally, not recognized as meeting the first gesture recognition criteria for the first gesture if the second gesture recognition criteria for the second gesture are met first. For example, if a contact reaches the respective intensity threshold before the contact moves by a predefined amount of movement, a deep press gesture is detected rather than a swipe gesture. Conversely, if the contact moves by the predefined amount of movement before the contact reaches the respective intensity threshold, a swipe gesture is detected rather than a deep press gesture. Even in such circumstances, the first gesture recognition criteria for the first gesture still do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the first gesture recognition criteria to be met, because if the contact stayed below the respective intensity threshold until the end of the gesture (e.g., a swipe gesture with a contact that does not increase to an intensity above the respective intensity threshold), the gesture would have been recognized by the first gesture recognition criteria as a swipe gesture. As such, particular gesture recognition criteria that do not require that the intensity of the contact(s) meet a respective intensity threshold in order for the particular gesture recognition criteria to be met will (A) in some circumstances ignore the intensity of the contact with respect to the intensity threshold (e.g., for a tap gesture) and/or (B) in some circumstances still be dependent on the intensity of the contact with respect to the intensity threshold, in the sense that the particular gesture recognition criteria (e.g., for a long press gesture) will fail if a competing set of intensity-dependent gesture recognition criteria (e.g., for a deep press gesture) identify an input as corresponding to an intensity-dependent gesture before the particular gesture recognition criteria recognize a gesture corresponding to the input (e.g., for a long press gesture that is competing with a deep press gesture for recognition).
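The deep-press-versus-swipe competition in the example above can be sketched as a race to a threshold: scanning the contact's samples in time order, whichever criterion is satisfied first wins recognition. The event format and the threshold values are illustrative assumptions.

```python
def resolve_competition(contact_events,
                        deep_press_intensity=1.0,
                        swipe_movement=10.0):
    """Resolve a deep-press recognizer competing with a swipe
    recognizer. Events are (intensity, total_movement) samples in
    time order; the first threshold crossed determines the gesture.
    """
    for intensity, movement in contact_events:
        if intensity >= deep_press_intensity:
            return "deep press"    # intensity threshold reached first
        if movement >= swipe_movement:
            return "swipe"         # movement threshold reached first
    return None                    # neither recognizer fired
```

This mirrors the text: the swipe criteria never consult intensity, yet a hard press can still preempt them by satisfying the competing deep-press criteria first.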

Graphics module 132 includes various known software components for rendering and displaying graphics on touch-sensitive display system 112 or other displays, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual property) of graphics that are displayed. As used herein, the term "graphics" includes any object that can be displayed to a user, including, without limitation, text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations, and the like.

In some embodiments, graphics module 132 stores data representing graphics to be used. Each graphic is, optionally, assigned a corresponding code. Graphics module 132 receives, from applications and the like, one or more codes specifying graphics to be displayed, together with, if necessary, coordinate data and other graphic property data, and then generates screen image data to output to display controller 156.
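The code-based scheme described above might be sketched as a small registry that maps assigned codes to drawables and composes output from (code, properties) requests. The class and method names are hypothetical, chosen only to illustrate the flow from application-supplied codes to rendered screen data.

```python
class GraphicsModule:
    """Illustrative registry of graphics keyed by assigned codes,
    composing output fragments from (code, properties) requests.
    """

    def __init__(self):
        self._graphics = {}

    def register(self, code, drawable):
        """Assign `code` to a drawable: a callable taking property data."""
        self._graphics[code] = drawable

    def render(self, requests):
        """Produce one output fragment per (code, properties) request."""
        return [self._graphics[code](props) for code, props in requests]
```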

Haptic feedback module 133 includes various software components for generating instructions (e.g., instructions used by haptic feedback controller 161) to produce tactile outputs, using tactile output generator(s) 167, at one or more locations on device 100 in response to user interactions with device 100.

Text input module 134, which is, optionally, a component of graphics module 132, provides soft keyboards for entering text in various applications (e.g., contacts 137, e-mail 140, IM 141, browser 147, and any other application that needs text input).

GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to telephone 138 for use in location-based dialing, to camera 143 as picture/video metadata, and to applications that provide location-based services, such as a weather widget, a local yellow pages widget, and a map/navigation widget).

Applications 136 optionally include the following modules (or sets of instructions), or a subset or superset thereof:

●contacts module 137 (sometimes called an address book or contact list);

●telephone module 138;

●video conferencing module 139;

●e-mail client module 140;

●instant messaging (IM) module 141;

●workout support module 142;

●camera module 143 for still and/or video images;

●image management module 144;

●browser module 147;

●calendar module 148;

●widget modules 149, which optionally include one or more of: weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, dictionary widget 149-5, and other widgets obtained by the user, as well as user-created widgets 149-6;

●widget creator module 150 for making user-created widgets 149-6;

●search module 151;

●video and music player module 152, which is, optionally, made up of a video player module and a music player module;

●notes module 153;

●map module 154; and/or

●online video module 155.

Examples of other applications 136 that are, optionally, stored in memory 102 include other word processing applications, other image editing applications, drawing applications, presentation applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.

In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contacts module 137 includes executable instructions for managing an address book or contact list (e.g., stored in application internal state 192 of contacts module 137 in memory 102 or memory 370), including: adding name(s) to the address book; deleting name(s) from the address book; associating telephone number(s), e-mail address(es), physical address(es), or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers and/or e-mail addresses to initiate and/or facilitate communications by telephone 138, video conference 139, e-mail 140, or IM 141; and so forth.

In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 includes executable instructions for entering a sequence of characters corresponding to a telephone number, accessing one or more telephone numbers in address book 137, modifying a telephone number that has been entered, dialing a respective telephone number, conducting a conversation, and disconnecting or hanging up when the conversation is completed. As noted above, the wireless communication optionally uses any of a plurality of communication standards, protocols, and technologies.

In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, videoconferencing module 139 includes executable instructions to initiate, conduct, and terminate a video conference between a user and one or more other participants in accordance with user instructions.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions to create, send, receive, and manage email in response to user instructions. In conjunction with image management module 144, email client module 140 makes it very easy to create and send emails with still or video images taken with camera module 143.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant messaging module 141 includes executable instructions for: entering a sequence of characters corresponding to an instant message, modifying previously entered characters, transmitting a respective instant message (for example, using a Short Message Service (SMS) or Multimedia Message Service (MMS) protocol for telephony-based instant messages, or using XMPP, SIMPLE, Apple Push Notification Service (APNs), or IMPS for Internet-based instant messages), receiving instant messages, and viewing received instant messages. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments as supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant messaging" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and Internet-based messages (e.g., messages sent using XMPP, SIMPLE, APNs, or IMPS).

In conjunction with RF circuitry 108, touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and video and music player module 152, workout support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie-burning goals); communicate with workout sensors (in sports devices and smart watches); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.

In conjunction with touch-sensitive display system 112, display controller 156, optical sensor(s) 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions to capture still images or video (including a video stream) and store them in memory 102, modify characteristics of a still image or video, and/or delete a still image or video from memory 102.

In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions to arrange, modify (e.g., edit), or otherwise manipulate, label, delete, present (e.g., in a digital slide show or album), and store still and/or video images.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions to browse the Internet in accordance with user instructions, including searching for, linking to, receiving, and displaying web pages or portions thereof, as well as attachments and other files linked to web pages.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, email client module 140, and browser module 147, calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do lists, etc.) in accordance with user instructions.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget modules 149 are mini-applications that are optionally downloaded and used by a user (e.g., weather widget 149-1, stocks widget 149-2, calculator widget 149-3, alarm clock widget 149-4, and dictionary widget 149-5) or created by the user (e.g., user-created widget 149-6). In some embodiments, a widget includes an HTML (Hypertext Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a widget includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! widgets).

In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, widget creator module 150 includes executable instructions to create widgets (e.g., turning a user-specified portion of a web page into a widget).

In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search memory 102 for text, music, sound, image, video, and/or other files that match one or more search criteria (e.g., one or more user-specified search terms) in accordance with user instructions.

In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow the user to download and play back recorded music and other sound files stored in one or more file formats (such as MP3 or AAC files), and executable instructions to display, present, or otherwise play back videos (e.g., on touch-sensitive display system 112, or on an external display connected wirelessly or via external port 124). In some embodiments, device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple Inc.).

In conjunction with touch-sensitive display system 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notes module 153 includes executable instructions to create and manage notes, to-do lists, and the like in accordance with user instructions.

In conjunction with RF circuitry 108, touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 includes executable instructions to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data on stores and other points of interest at or near a particular location; and other location-based data) in accordance with user instructions.

In conjunction with touch-sensitive display system 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speaker 111, RF circuitry 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes executable instructions that allow the user to access, browse, receive (e.g., by streaming and/or download), play back (e.g., on touch screen 112, or on an external display connected wirelessly or via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, instant messaging module 141, rather than email client module 140, is used to send a link to a particular online video.

Each of the above-identified modules and applications corresponds to a set of executable instructions for performing one or more of the functions described above and the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 102 optionally stores additional modules and data structures not described above.

In some embodiments, device 100 is a device on which operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. By using a touch screen and/or a touchpad as the primary input control device for operation of device 100, the number of physical input control devices (such as push buttons, dials, and the like) on device 100 is optionally reduced.

The predefined set of functions that are performed exclusively through a touch screen and/or a touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by the user, navigates device 100 from any user interface that is displayed on device 100 to a main menu, home menu, or root menu. In such embodiments, a "menu button" is implemented using the touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device instead of a touchpad.

Figure 1B is a block diagram illustrating exemplary components for event handling in accordance with some embodiments. In some embodiments, memory 102 (in Figure 1A) or memory 370 (Figure 3) includes event sorter 170 (e.g., in operating system 126) and a respective application 136-1 (e.g., any of the aforementioned applications 136, 137-155, 380-390).

Event sorter 170 receives event information and determines the application 136-1, and the application view 191 of application 136-1, to which the event information is to be delivered. Event sorter 170 includes event monitor 171 and event dispatcher module 174. In some embodiments, application 136-1 includes application internal state 192, which indicates the current application view(s) displayed on touch-sensitive display system 112 when the application is active or executing. In some embodiments, device/global internal state 157 is used by event sorter 170 to determine which application(s) is (are) currently active, and application internal state 192 is used by event sorter 170 to determine the application views 191 to which event information is to be delivered.
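The routing role described in this paragraph — an event sorter delivering event information to the application view in which the touch occurred — can be sketched roughly as follows. All class and method names here are hypothetical, invented for illustration only; they are not part of any actual implementation described in this document.

```python
# Illustrative sketch: an event sorter routes event information to the
# view whose bounds contain the touch position. Names are hypothetical.

class AppView:
    def __init__(self, name, bounds):
        self.name = name          # e.g. "mail-list-view"
        self.bounds = bounds      # (x0, y0, x1, y1)
        self.received = []        # event info delivered to this view

    def contains(self, pos):
        x0, y0, x1, y1 = self.bounds
        return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

    def handle(self, event_info):
        self.received.append(event_info)
        return self.name


class EventSorter:
    """Determine which application view should receive the event info."""

    def dispatch(self, event_info, views):
        for view in views:
            if view.contains(event_info["position"]):
                return view.handle(event_info)
        return None               # no view claimed the event
```

In a real system the sorter would also consult device/global state to pick the active application first; this sketch only shows the per-view routing step.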

In some embodiments, application internal state 192 includes additional information, such as one or more of: resume information to be used when application 136-1 resumes execution, user interface state information that indicates information being displayed by application 136-1 or that is ready for display by the application, a state queue for enabling the user to go back to a prior state or view of application 136-1, and a redo/undo queue of previous actions taken by the user.

Event monitor 171 receives event information from peripherals interface 118. Event information includes information about a sub-event (e.g., a user touch on touch-sensitive display system 112, as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or from a sensor, such as proximity sensor 166, accelerometer(s) 168, and/or microphone 113 (through audio circuitry 110). Information that peripherals interface 118 receives from I/O subsystem 106 includes information from touch-sensitive display system 112 or a touch-sensitive surface.

In some embodiments, event monitor 171 sends requests to peripherals interface 118 at predetermined intervals. In response, peripherals interface 118 transmits event information. In other embodiments, peripherals interface 118 transmits event information only when there is a significant event (e.g., receiving an input above a predetermined noise threshold and/or receiving an input for more than a predetermined duration).
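The "significant event only" policy in this paragraph amounts to a simple filter. The sketch below is purely illustrative; the threshold values and field names are invented, not taken from this document.

```python
# Hypothetical filter: forward event information only when the input
# exceeds a noise threshold or lasts longer than a minimum duration.

NOISE_THRESHOLD = 0.2      # assumed amplitude units, for illustration
MIN_DURATION_MS = 50       # assumed minimum duration, for illustration

def is_significant(event):
    return (event.get("amplitude", 0.0) > NOISE_THRESHOLD
            or event.get("duration_ms", 0) > MIN_DURATION_MS)

def forward_significant(events):
    """Return only the events worth transmitting to the event monitor."""
    return [e for e in events if is_significant(e)]
```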

In some embodiments, event sorter 170 also includes a hit view determination module 172 and/or an active event recognizer determination module 173.

Hit view determination module 172 provides software procedures for determining where a sub-event has taken place within one or more views, when touch-sensitive display system 112 displays more than one view. Views are made up of controls and other elements that a user can see on the display.

Another aspect of the user interface associated with an application is a set of views, sometimes herein called application views or user interface windows, in which information is displayed and touch-based gestures occur. The application views (of a respective application) in which a touch is detected optionally correspond to programmatic levels within a programmatic or view hierarchy of the application. For example, the lowest level view in which a touch is detected is optionally called the hit view, and the set of events that are recognized as proper inputs is optionally determined based, at least in part, on the hit view of the initial touch that begins a touch-based gesture.

Hit view determination module 172 receives information related to sub-events of a touch-based gesture. When an application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy which should handle the sub-event. In most circumstances, the hit view is the lowest level view in which an initiating sub-event occurs (i.e., the first sub-event in the sequence of sub-events that form an event or potential event). Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
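The "lowest view in the hierarchy" rule above is essentially a recursive hit test. The following is a rough sketch under assumed data structures; the `View` class and rectangle-based bounds are inventions for illustration, not the document's actual representation.

```python
# Sketch of hit-view determination: recurse through the view hierarchy
# and return the lowest (deepest) view whose area contains the initial
# touch position. The View structure here is hypothetical.

class View:
    def __init__(self, name, bounds, children=()):
        self.name = name
        self.bounds = bounds              # (x0, y0, x1, y1)
        self.children = list(children)

    def contains(self, pos):
        x0, y0, x1, y1 = self.bounds
        return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def hit_view(view, pos):
    if not view.contains(pos):
        return None
    for child in view.children:           # prefer the deepest match
        found = hit_view(child, pos)
        if found is not None:
            return found
    return view                           # no child contains it: lowest view
```

Once found, this view would then receive all subsequent sub-events for the same touch, per the paragraph above.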

Active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some embodiments, active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of a sub-event are actively involved views, and therefore determines that all actively involved views should receive a particular sequence of sub-events. In other embodiments, even if touch sub-events were entirely confined to the area associated with one particular view, views higher in the hierarchy would still remain as actively involved views.
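The "actively involved views" variant above collects every ancestor view containing the sub-event's location, not just the deepest one. A minimal self-contained sketch, using plain dicts as a stand-in for real view objects (the shape of the dict is an assumption made here for illustration):

```python
# Sketch: every view whose area includes the sub-event's physical
# location is "actively involved" and receives the sub-event sequence.

def contains(bounds, pos):
    x0, y0, x1, y1 = bounds
    return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def actively_involved(view, pos):
    """view: {"name", "bounds", "children"}. Return the names of all
    views containing pos, outermost first."""
    if not contains(view["bounds"], pos):
        return []
    names = [view["name"]]
    for child in view.get("children", []):
        names.extend(actively_involved(child, pos))
    return names
```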

Event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments that include active event recognizer determination module 173, event dispatcher module 174 delivers the event information to an event recognizer determined by active event recognizer determination module 173. In some embodiments, event dispatcher module 174 stores the event information in an event queue, which is retrieved by a respective event receiver module 182.

In some embodiments, operating system 126 includes event sorter 170. Alternatively, application 136-1 includes event sorter 170. In yet another embodiment, event sorter 170 is a stand-alone module, or part of another module stored in memory 102, such as contact/motion module 130.

In some embodiments, application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of application 136-1 includes one or more event recognizers 180. Typically, a respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of event recognizers 180 are part of a separate module, such as a user interface kit (not shown) or a higher level object from which application 136-1 inherits methods and other properties. In some embodiments, a respective event handler 190 includes one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177, or GUI updater 178 to update application internal state 192. Alternatively, one or more of application views 191 include one or more respective event handlers 190. Also, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.

A respective event recognizer 180 receives event information (e.g., event data 179) from event sorter 170 and identifies an event from the event information. Event recognizer 180 includes event receiver 182 and event comparator 184. In some embodiments, event recognizer 180 also includes at least a subset of: metadata 183, and event delivery instructions 188 (which optionally include sub-event delivery instructions).

Event receiver 182 receives event information from event sorter 170. The event information includes information about a sub-event, for example, a touch or a touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event concerns motion of a touch, the event information optionally also includes the speed and direction of the sub-event. In some embodiments, events include rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information includes corresponding information about the current orientation (also called device attitude) of the device.

Event comparator 184 compares the event information to predefined event or sub-event definitions and, based on the comparison, determines an event or sub-event, or determines or updates the state of an event or sub-event. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sequences of sub-events), for example, event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in an event 187 include, for example, touch begin, touch end, touch movement, touch cancellation, and multiple touching. In one example, the definition for event 1 (187-1) is a double tap on a displayed object. The double tap, for example, comprises a first touch on the displayed object for a predetermined duration (touch begin), a first lift-off for a predetermined duration (touch end), a second touch on the displayed object for a predetermined duration (touch begin), and a second lift-off for a predetermined duration (touch end). In another example, the definition for event 2 (187-2) is a dragging on a displayed object. The dragging, for example, comprises a touch (or contact) on the displayed object for a predetermined duration, a movement of the touch across touch-sensitive display system 112, and lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
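The two example definitions above (double tap and drag) are just named sequences of sub-events, and comparison reduces to matching a received sequence against them. A minimal sketch, with simplified sub-event names and exact-match comparison standing in for the timing constraints the text describes:

```python
# Minimal sketch of matching a completed sub-event sequence against
# predefined event definitions. The names and the exact-match rule are
# simplifications: real definitions also carry duration constraints.

DEFINITIONS = {
    "double_tap": ["touch_begin", "touch_end", "touch_begin", "touch_end"],
    "drag":       ["touch_begin", "touch_move", "touch_end"],
}

def recognize(sub_events, definitions=DEFINITIONS):
    """Return the name of the matching event definition, if any."""
    for name, sequence in definitions.items():
        if sub_events == sequence:
            return name
    return None
```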

In some embodiments, event definitions 187 include a definition of an event for a respective user-interface object. In some embodiments, event comparator 184 performs a hit test to determine which user-interface object is associated with a sub-event. For example, in an application view in which three user-interface objects are displayed on touch-sensitive display system 112, when a touch is detected on touch-sensitive display system 112, event comparator 184 performs a hit test to determine which of the three user-interface objects is associated with the touch (sub-event). If each displayed object is associated with a respective event handler 190, the event comparator uses the result of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the sub-event and the object triggering the hit test.

In some embodiments, the definition for a respective event 187 also includes delayed actions that delay delivery of the event information until after it has been determined whether the sequence of sub-events does or does not correspond to the event recognizer's event type.

When a respective event recognizer 180 determines that the series of sub-events does not match any of the events in event definitions 186, the respective event recognizer 180 enters an event impossible, event failed, or event ended state, after which it disregards subsequent sub-events of the touch-based gesture. In this situation, other event recognizers, if any, that remain active for the hit view continue to track and process sub-events of the ongoing touch-based gesture.
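The state transitions described above can be sketched as a small classifier over a partial sub-event sequence: while the sequence is still a prefix of some definition the recognizer remains "possible", a full match is "recognized", and anything else is the terminal "event impossible" state. The state names and prefix rule here are illustrative simplifications:

```python
# Illustrative recognizer-state sketch: "possible" while the sub-event
# sequence is a prefix of some definition, "recognized" on a full match,
# otherwise "event_impossible" (after which sub-events are disregarded).

def recognizer_state(sub_events, definitions):
    if any(sub_events == seq for seq in definitions.values()):
        return "recognized"
    if any(seq[:len(sub_events)] == sub_events
           for seq in definitions.values()):
        return "possible"
    return "event_impossible"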

In some embodiments, a respective event recognizer 180 includes metadata 183 with configurable properties, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively involved event recognizers. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable properties, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.

In some embodiments, a respective event recognizer 180 activates the event handler 190 associated with an event when one or more particular sub-events of the event are recognized. In some embodiments, a respective event recognizer 180 delivers event information associated with the event to event handler 190. Activating an event handler 190 is distinct from sending (and deferred sending) sub-events to a respective hit view. In some embodiments, event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag catches the flag and performs a predefined process.
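The flag throw/catch mechanism above resembles a simple registry mapping flags to handlers. The sketch below is a hypothetical rendering of that idea; the registry class and its method names are invented for illustration:

```python
# Sketch of flag-based activation: a recognizer "throws" a flag for the
# recognized event; the handler registered for that flag "catches" it
# and runs its predefined process. Structure is hypothetical.

class FlagRegistry:
    def __init__(self):
        self._handlers = {}

    def register(self, flag, handler):
        self._handlers[flag] = handler

    def throw_flag(self, flag, event_info):
        handler = self._handlers.get(flag)
        return handler(event_info) if handler else None
```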

In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about a sub-event without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the series of sub-events or to actively involved views. Event handlers associated with the series of sub-events or with actively involved views receive the event information and perform a predetermined process.

In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates the telephone number used in contacts module 137, or stores a video file used in video or music player module 152. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user-interface object or updates the position of a user-interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on the touch-sensitive display.
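The three updater roles above divide cleanly by what they mutate: application data, user-interface objects, and pending display information. A toy sketch of that division, with all names and structures invented for illustration:

```python
# Sketch of the three updater roles: a data updater mutates application
# data, an object updater mutates UI objects, and a GUI updater queues
# display info for the graphics module. Everything here is hypothetical.

class AppState:
    def __init__(self):
        self.contacts = {}        # application data
        self.ui_objects = {}      # user-interface object positions
        self.display_queue = []   # info handed to the graphics module

def data_updater(state, name, phone):
    state.contacts[name] = phone            # e.g. update a phone number

def object_updater(state, obj_id, position):
    state.ui_objects[obj_id] = position     # e.g. move a UI object

def gui_updater(state, display_info):
    state.display_queue.append(display_info)
```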

In some embodiments, event handler(s) 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.

It shall be understood that the foregoing discussion regarding event handling of user touches on touch-sensitive displays also applies to other forms of user inputs to operate multifunction device 100 with input devices, not all of which are initiated on touch screens. For example, mouse movement and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements such as taps, drags, scrolls, etc. on touchpads; stylus inputs; movement of the device; oral instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events which define an event to be recognized.

Figure 1C is a block diagram illustrating a tactile output module in accordance with some embodiments. In some embodiments, I/O subsystem 106 (e.g., haptic feedback controller 161 (Figure 1A) and/or other input controller(s) 160 (Figure 1A)) includes at least some of the example components shown in Figure 1C. In some embodiments, peripherals interface 118 includes at least some of the example components shown in Figure 1C.

In some embodiments, the tactile output module includes haptic feedback module 133. In some embodiments, haptic feedback module 133 aggregates and combines tactile outputs for user interface feedback from software applications on the electronic device (e.g., feedback that is responsive to user inputs that correspond to displayed user interfaces, and alerts and other notifications that indicate the performance of operations or occurrence of events in user interfaces of the electronic device). Haptic feedback module 133 includes one or more of: waveform module 123 (for providing waveforms used for generating tactile outputs), mixer 125 (for mixing waveforms, such as waveforms in different channels), compressor 127 (for reducing or compressing a dynamic range of the waveforms), low-pass filter 129 (for filtering out high-frequency signal components in the waveforms), and thermal controller 131 (for adjusting the waveforms in accordance with thermal conditions). In some embodiments, haptic feedback module 133 is included in haptic feedback controller 161 (Figure 1A). In some embodiments, a separate unit of haptic feedback module 133 (or a separate implementation of haptic feedback module 133) is also included in an audio controller (e.g., audio circuitry 110, Figure 1A) and used for generating audio signals. In some embodiments, a single haptic feedback module 133 is used for generating audio signals and generating waveforms for tactile outputs.

In some embodiments, haptic feedback module 133 also includes trigger module 121 (e.g., a software application, operating system, or other software module that determines a tactile output is to be generated and initiates the process for generating the corresponding tactile output). In some embodiments, trigger module 121 generates trigger signals for initiating generation of waveforms (e.g., by waveform module 123). For example, trigger module 121 generates trigger signals based on preset timing criteria. In some embodiments, trigger module 121 receives trigger signals from outside haptic feedback module 133 (e.g., in some embodiments, haptic feedback module 133 receives trigger signals from hardware input processing module 146 located outside haptic feedback module 133) and relays the trigger signals to other components within haptic feedback module 133 (e.g., waveform module 123) or to software applications that trigger operations (e.g., with trigger module 121) based on activation of a user interface element (e.g., an application icon or an affordance within an application) or a hardware input device (e.g., a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, trigger module 121 also receives tactile feedback generation instructions (e.g., from haptic feedback module 133, Figures 1A and 3). In some embodiments, trigger module 121 generates trigger signals in response to haptic feedback module 133 (or trigger module 121 in haptic feedback module 133) receiving tactile feedback instructions (e.g., from haptic feedback module 133, Figures 1A and 3).

Waveform module 123 receives trigger signals (e.g., from trigger module 121) as an input, and in response to receiving trigger signals, provides waveforms for generating one or more tactile outputs (e.g., waveforms selected from a predefined set of waveforms designated for use by waveform module 123, such as the waveforms described in greater detail below with reference to Figures 4F-4G).
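The "select from a predefined set" behavior of the waveform module can be sketched as a simple lookup. This is an illustrative assumption only: the waveform names and sample values below are invented, and the patent does not specify any API.

```python
# Hypothetical sketch of a waveform module: a trigger signal (identified
# here by an invented name) selects one waveform from a predefined set.
PREDEFINED_WAVEFORMS = {
    "tap": [0.0, 1.0, 0.0],          # illustrative sample values
    "click": [0.0, 1.0, -1.0, 0.0],
}

def waveform_for_trigger(trigger_name):
    """Return the predefined waveform designated for a trigger signal."""
    return PREDEFINED_WAVEFORMS[trigger_name]
```

In practice the set of waveforms and their parameters (amplitude envelope, duration) would be tuned per device; the lookup structure is only meant to mirror the selection step described above.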

Mixer 125 receives waveforms (e.g., from waveform module 123) as an input, and mixes the waveforms together. For example, when mixer 125 receives two or more waveforms (e.g., a first waveform in a first channel and a second waveform that at least partially overlaps with the first waveform in a second channel), mixer 125 outputs a combined waveform that corresponds to a sum of the two or more waveforms. In some embodiments, mixer 125 also modifies one or more waveforms of the two or more waveforms to emphasize a particular waveform relative to the rest of the two or more waveforms (e.g., by increasing a scale of the particular waveform and/or decreasing a scale of the other waveforms). In some circumstances, mixer 125 selects one or more waveforms to remove from the combined waveform (e.g., the waveform from the oldest source is dropped when there are waveforms from more than three sources that have been requested to be output concurrently by tactile output generator 167).
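The mixing rule described above (sample-wise summation plus an oldest-source-dropped limit) can be sketched as follows. The function name, the list-of-lists representation, and the oldest-first ordering are illustrative assumptions, not details from the patent.

```python
def mix_waveforms(waveforms, max_sources=3):
    """Combine several channel waveforms (lists of floats) into one.

    `waveforms` is assumed ordered oldest-first; when more than
    `max_sources` sources request output at once, the oldest are
    dropped, mirroring the behavior described above.
    """
    active = waveforms[-max_sources:]      # discard oldest beyond the limit
    length = max(len(w) for w in active)
    combined = [0.0] * length
    for w in active:
        for i, sample in enumerate(w):     # overlapping samples sum
            combined[i] += sample
    return combined
```

A real mixer would also apply the per-waveform emphasis scaling mentioned in the text; that step is omitted here to keep the summation visible.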

Compressor 127 receives waveforms (e.g., a combined waveform from mixer 125) as an input, and modifies the waveforms. In some embodiments, compressor 127 reduces the waveforms (e.g., in accordance with a physical specification of tactile output generator 167 (Figure 1A) or 357 (Figure 3)) so that tactile outputs corresponding to the waveforms are reduced. In some embodiments, compressor 127 limits the waveforms, such as by enforcing a predefined maximum amplitude for the waveforms. For example, compressor 127 reduces amplitudes of portions of waveforms that exceed a predefined amplitude threshold while maintaining amplitudes of portions of waveforms that do not exceed the predefined amplitude threshold. In some embodiments, compressor 127 reduces a dynamic range of the waveforms. In some embodiments, compressor 127 dynamically reduces the dynamic range of the waveforms so that the combined waveforms remain within performance specifications of tactile output generator 167 (e.g., force and/or moveable mass displacement limits).
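The limiting behavior described above (clip samples above a predefined maximum amplitude, leave the rest untouched) is a hard limiter; one minimal reading of it, with an invented threshold value:

```python
def compress(waveform, max_amplitude=1.0):
    """Hard-limit a waveform: samples whose magnitude exceeds
    max_amplitude are clipped to it, smaller samples pass unchanged.
    The threshold value is a placeholder, not a figure from the patent."""
    out = []
    for s in waveform:
        if s > max_amplitude:
            out.append(max_amplitude)
        elif s < -max_amplitude:
            out.append(-max_amplitude)
        else:
            out.append(s)
    return out
```

A production compressor would more likely apply gradual gain reduction near the threshold rather than hard clipping, since clipping itself introduces the high-frequency components the low-pass filter then has to remove.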

Low-pass filter 129 receives waveforms (e.g., compressed waveforms from compressor 127) as an input, and filters (e.g., smooths) the waveforms (e.g., removes or reduces high-frequency signal components in the waveforms). For example, in some instances, compressor 127 includes, in compressed waveforms, extraneous signals (e.g., high-frequency signal components) that interfere with the generation of tactile outputs and/or exceed performance specifications of tactile output generator 167 when the tactile outputs are generated in accordance with the compressed waveforms. Low-pass filter 129 reduces or removes such extraneous signals in the waveforms.
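One classic way to attenuate high-frequency components is a moving-average filter. The patent does not specify the filter design, so the window size and edge handling below are assumptions chosen only to make the smoothing effect visible:

```python
def low_pass(waveform, window=3):
    """Smooth a waveform with a simple moving average, attenuating
    high-frequency components (one possible low-pass design; the
    actual filter in the text is unspecified)."""
    half = window // 2
    out = []
    for i in range(len(waveform)):
        lo = max(0, i - half)              # shrink window at the edges
        hi = min(len(waveform), i + half + 1)
        out.append(sum(waveform[lo:hi]) / (hi - lo))
    return out
```

Applied to an alternating signal such as `[0, 1, 0, 1, 0]`, every output sample lands strictly between the extremes, which is exactly the attenuation of the highest representable frequency.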

Thermal controller 131 receives waveforms (e.g., filtered waveforms from low-pass filter 129) as an input, and adjusts the waveforms in accordance with thermal conditions of device 100 (e.g., based on internal temperatures detected within device 100, such as the temperature of haptic feedback controller 161, and/or external temperatures detected by device 100). For example, in some cases, the output of haptic feedback controller 161 varies depending on the temperature (e.g., haptic feedback controller 161, in response to receiving the same waveforms, generates a first tactile output when haptic feedback controller 161 is at a first temperature and generates a second tactile output when haptic feedback controller 161 is at a second temperature that is distinct from the first temperature). For example, the magnitude (or the amplitude) of the tactile outputs may vary depending on the temperature. To reduce the effect of the temperature variations, the waveforms are modified (e.g., an amplitude of the waveforms is increased or decreased based on the temperature).
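The amplitude compensation described above can be sketched as a temperature-dependent gain. The gain curve here is entirely invented for illustration: it assumes the actuator weakens above a nominal 25 °C and boosts the waveform linearly to compensate. Neither the breakpoint nor the slope comes from the patent.

```python
def adjust_for_temperature(waveform, temp_c, nominal_gain=1.0):
    """Scale waveform amplitude based on device temperature.
    Assumption: actuator output drops above 25 C, so gain rises
    1% per degree above that (hypothetical compensation curve)."""
    gain = nominal_gain + max(0.0, temp_c - 25.0) * 0.01
    return [s * gain for s in waveform]
```

In a real device the compensation curve would be measured per actuator and might also reduce amplitude at high temperatures to protect the hardware rather than boost it.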

In some embodiments, haptic feedback module 133 (e.g., trigger module 121) is coupled to hardware input processing module 146. In some embodiments, other input controller(s) 160 in Figure 1A includes hardware input processing module 146. In some embodiments, hardware input processing module 146 receives inputs from hardware input device 145 (e.g., other input or control devices 116 in Figure 1A, such as a home button or an intensity-sensitive input surface, such as an intensity-sensitive touch screen). In some embodiments, hardware input device 145 is any input device described herein, such as touch-sensitive display system 112 (Figure 1A), keyboard/mouse 350 (Figure 3), touchpad 355 (Figure 3), one of other input or control devices 116 (Figure 1A), or an intensity-sensitive home button. In some embodiments, hardware input device 145 consists of an intensity-sensitive home button, and not touch-sensitive display system 112 (Figure 1A), keyboard/mouse 350 (Figure 3), or touchpad 355 (Figure 3). In some embodiments, in response to inputs from hardware input device 145 (e.g., an intensity-sensitive home button or a touch screen), hardware input processing module 146 provides one or more trigger signals to haptic feedback module 133 to indicate that a user input satisfying predefined input criteria, such as an input corresponding to a home-button "click" (e.g., a "down click" or an "up click"), has been detected. In some embodiments, haptic feedback module 133 provides waveforms that correspond to the home-button "click" in response to the input corresponding to the home-button "click," simulating the haptic feedback of pressing a physical home button.

In some embodiments, the tactile output module includes haptic feedback controller 161 (e.g., haptic feedback controller 161 in Figure 1A), which controls the generation of tactile outputs. In some embodiments, haptic feedback controller 161 is coupled to a plurality of tactile output generators, and selects one or more tactile output generators of the plurality of tactile output generators and sends waveforms to the selected one or more tactile output generators for generating tactile outputs. In some embodiments, haptic feedback controller 161 coordinates tactile output requests that correspond to activation of hardware input device 145 and tactile output requests that correspond to software events (e.g., tactile output requests from haptic feedback module 133), and modifies one or more waveforms of the two or more waveforms to emphasize a particular waveform relative to the rest of the two or more waveforms (e.g., by increasing a scale of the particular waveform and/or decreasing a scale of the rest of the waveforms, so as to prioritize tactile outputs that correspond to activation of hardware input device 145 over tactile outputs that correspond to software events).
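The prioritization described above (scale up the hardware-triggered waveform, scale down the software-triggered one) can be sketched with a single emphasis factor. The factor value, the zero-padding of the shorter waveform, and the symmetric up/down scaling are illustrative assumptions.

```python
def prioritize(hardware_wave, software_wave, emphasis=2.0):
    """Mix two pending haptic requests, emphasizing the one triggered
    by a hardware input device.  The emphasis factor is hypothetical;
    shorter waveforms are zero-padded so the two can be summed."""
    n = max(len(hardware_wave), len(software_wave))
    hw = hardware_wave + [0.0] * (n - len(hardware_wave))
    sw = software_wave + [0.0] * (n - len(software_wave))
    return [h * emphasis + s / emphasis for h, s in zip(hw, sw)]
```

The output would then still pass through the compressor stage, since boosting one waveform can push the sum past the generator's amplitude limits.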

In some embodiments, as shown in Figure 1C, an output of haptic feedback controller 161 is coupled to audio circuitry of device 100 (e.g., audio circuitry 110, Figure 1A), and provides audio signals to the audio circuitry of device 100. In some embodiments, haptic feedback controller 161 provides both waveforms used for generating tactile outputs and audio signals used for providing audio outputs in conjunction with the generation of the tactile outputs. In some embodiments, haptic feedback controller 161 modifies the audio signals and/or the waveforms (used for generating tactile outputs) so that the audio outputs and the tactile outputs are synchronized (e.g., by delaying the audio signals and/or the waveforms). In some embodiments, haptic feedback controller 161 includes a digital-to-analog converter used for converting digital waveforms into analog signals, which are received by amplifier 163 and/or tactile output generator 167.

In some embodiments, the tactile output module includes amplifier 163. In some embodiments, amplifier 163 receives waveforms (e.g., from haptic feedback controller 161), and amplifies the waveforms prior to sending the amplified waveforms to tactile output generator 167 (e.g., either of tactile output generators 167 (Figure 1A) or 357 (Figure 3)). For example, amplifier 163 amplifies the received waveforms to signal levels that are in accordance with physical specifications of tactile output generator 167 (e.g., to a voltage and/or a current required by tactile output generator 167 for generating tactile outputs, so that the signals sent to tactile output generator 167 produce tactile outputs that correspond to the waveforms received from haptic feedback controller 161), and sends the amplified waveforms to tactile output generator 167. In response, tactile output generator 167 generates tactile outputs (e.g., by shifting a moveable mass back and forth in one or more dimensions relative to a neutral position of the moveable mass).
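Scaling a normalized waveform up to the actuator's drive level is a simple gain stage. The 3 V figure below is a placeholder standing in for the generator's (unstated) physical specification:

```python
def amplify(waveform, generator_max_volts=3.0):
    """Scale a normalized waveform (samples assumed in [-1, 1]) to the
    drive level the tactile output generator requires.  The 3 V maximum
    is a hypothetical specification, not a value from the patent."""
    return [s * generator_max_volts for s in waveform]
```

Because the compressor stage earlier in the pipeline already bounds samples to the normalized range, this gain stage can be a plain multiplication without further clipping.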

In some embodiments, the tactile output module includes sensor 169, which is coupled to tactile output generator 167. Sensor 169 detects states or state changes (e.g., mechanical position, physical displacement, and/or movement) of tactile output generator 167 or one or more components of tactile output generator 167 (e.g., one or more moving parts, such as a membrane, used to generate tactile outputs). In some embodiments, sensor 169 is a magnetic field sensor (e.g., a Hall effect sensor) or other displacement and/or movement sensor. In some embodiments, sensor 169 provides information (e.g., a position, a displacement, and/or a movement of one or more parts in tactile output generator 167) to haptic feedback controller 161, and, in accordance with the information provided by sensor 169 about the state of tactile output generator 167, haptic feedback controller 161 adjusts the waveforms output from haptic feedback controller 161 (e.g., waveforms sent to tactile output generator 167, optionally via amplifier 163).

Figure 2 illustrates portable multifunction device 100 having a touch screen (e.g., touch-sensitive display system 112, Figure 1A) in accordance with some embodiments. The touch screen optionally displays one or more graphics within user interface (UI) 200. In these embodiments, as well as others described below, a user is enabled to select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, right to left, upward and/or downward), and/or a rolling of a finger (from right to left, left to right, upward and/or downward) that has made contact with device 100. In some implementations or circumstances, inadvertent contact with a graphic does not select the graphic. For example, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application when the gesture corresponding to selection is a tap.

Device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As described previously, menu button 204 is, optionally, used to navigate to any application 136 in a set of applications that are, optionally, executed on device 100. Alternatively, in some embodiments, the menu button is implemented as a soft key in a GUI displayed on the touch-screen display.

In some embodiments, device 100 includes the touch-screen display, menu button 204 (sometimes called home button 204), push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. Push button 206 is, optionally, used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In some embodiments, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. Device 100 also, optionally, includes one or more contact intensity sensors 165 for detecting intensities of contacts on touch-sensitive display system 112 and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.

Figure 3 is a block diagram of an example multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. Device 300 need not be portable. In some embodiments, device 300 is a laptop computer, a desktop computer, a tablet computer, a multimedia player device, a navigation device, an educational device (such as a child's learning toy), a gaming system, or a control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communications interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. Communication buses 320 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Device 300 includes input/output (I/O) interface 330 comprising display 340, which is typically a touch-screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, tactile output generator 357 for generating tactile outputs on device 300 (e.g., similar to tactile output generator(s) 167 described above with reference to Figure 1A), and sensors 359 (e.g., optical, acceleration, proximity, touch-sensitive, and/or contact intensity sensors similar to contact intensity sensor(s) 165 described above with reference to Figure 1A). Memory 370 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 370 optionally includes one or more storage devices remotely located from CPU(s) 310. In some embodiments, memory 370 stores programs, modules, and data structures analogous to the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (Figure 1A), or a subset thereof. Furthermore, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk authoring module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (Figure 1A) optionally does not store these modules.

Each of the above-identified elements in Figure 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the above-identified modules corresponds to a set of instructions for performing a function described above. The above-identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules are, optionally, combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures identified above. Furthermore, memory 370 optionally stores additional modules and data structures not described above.

Attention is now directed toward embodiments of user interfaces ("UI") that are, optionally, implemented on portable multifunction device 100.

Figure 4A illustrates an example user interface 400 for a menu of applications on portable multifunction device 100 in accordance with some embodiments. Similar user interfaces are, optionally, implemented on device 300. In some embodiments, user interface 400 includes the following elements, or a subset or superset thereof:

● Signal strength indicator(s) for wireless communication(s), such as cellular and Wi-Fi signals;

● Time;

● Bluetooth indicator;

● Battery status indicator;

● Tray 408 with icons for frequently used applications, such as:

○ Icon 416 for telephone module 138, labeled "Phone," which optionally includes an indicator 414 of the number of missed calls or voicemail messages;

○ Icon 418 for e-mail client module 140, labeled "Mail," which optionally includes an indicator 410 of the number of unread e-mails;

○ Icon 420 for browser module 147, labeled "Browser"; and

○ Icon 422 for video and music player module 152, labeled "Music"; and

● Icons for other applications, such as:

○ Icon 424 for IM module 141, labeled "Messages";

○ Icon 426 for calendar module 148, labeled "Calendar";

○ Icon 428 for image management module 144, labeled "Photos";

○ Icon 430 for camera module 143, labeled "Camera";

○ Icon 432 for online video module 155, labeled "Online Video";

○ Icon 434 for stocks widget 149-2, labeled "Stocks";

○ Icon 436 for map module 154, labeled "Maps";

○ Icon 438 for weather widget 149-1, labeled "Weather";

○ Icon 440 for alarm clock widget 149-4, labeled "Clock";

○ Icon 442 for workout support module 142, labeled "Workout Support";

○ Icon 444 for notes module 153, labeled "Notes"; and

○ Icon 446 for a settings application or module, which provides access to settings for device 100 and its various applications 136.

It should be noted that the icon labels illustrated in Figure 4A are merely examples. For example, other labels are, optionally, used for various application icons. In some embodiments, a label for a respective application icon includes a name of an application corresponding to the respective application icon. In some embodiments, a label for a particular application icon is distinct from a name of an application corresponding to the particular application icon.

Figure 4B illustrates an example user interface on a device (e.g., device 300, Figure 3) with a touch-sensitive surface 451 (e.g., a tablet or touchpad 355, Figure 3) that is separate from the display 450. Although many of the examples that follow will be given with reference to inputs on touch-screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects inputs on a touch-sensitive surface that is separate from the display, as shown in Figure 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in Figure 4B) has a primary axis (e.g., 452 in Figure 4B) that corresponds to a primary axis (e.g., 453 in Figure 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in Figure 4B) with the touch-sensitive surface 451 at locations that correspond to respective locations on the display (e.g., in Figure 4B, 460 corresponds to 468 and 462 corresponds to 470). In this way, user inputs (e.g., contacts 460 and 462, and movements thereof) detected by the device on the touch-sensitive surface (e.g., 451 in Figure 4B) are used by the device to manipulate the user interface on the display (e.g., 450 in Figure 4B) of the multifunction device when the touch-sensitive surface is separate from the display. It should be understood that similar methods are, optionally, used for other user interfaces described herein.

Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures, etc.), it should be understood that, in some embodiments, one or more of the finger inputs are replaced with input from another input device (e.g., a mouse-based input or a stylus input). For example, a swipe gesture is, optionally, replaced with a mouse click (e.g., instead of a contact) followed by movement of the cursor along the path of the swipe (e.g., instead of movement of the contact). As another example, a tap gesture is, optionally, replaced with a mouse click while the cursor is located over the location of the tap gesture (e.g., instead of detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are simultaneously detected, it should be understood that multiple computer mice are, optionally, used simultaneously, or a mouse and finger contacts are, optionally, used simultaneously.

As used herein, the term "focus selector" refers to an input element that indicates a current part of a user interface with which a user is interacting. In some implementations that include a cursor or other location marker, the cursor acts as a "focus selector" so that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in Figure 3 or touch-sensitive surface 451 in Figure 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch-screen display (e.g., touch-sensitive display system 112 in Figure 1A or the touch screen in Figure 4A) that enables direct interaction with user interface elements on the touch-screen display, a detected contact on the touch screen acts as a "focus selector" so that when an input (e.g., a press input by the contact) is detected on the touch-screen display at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one region of a user interface to another region of the user interface without corresponding movement of a cursor or movement of a contact on a touch-screen display (e.g., by using a tab key or arrow keys to move focus from one button to another button); in these implementations, the focus selector moves in accordance with movement of focus between different regions of the user interface. Without regard to the specific form taken by the focus selector, the focus selector is generally the user interface element (or contact on a touch-screen display) that is controlled by the user so as to communicate the user's intended interaction with the user interface (e.g., by indicating, to the device, the element of the user interface with which the user is intending to interact). For example, the location of a focus selector (e.g., a cursor, a contact, or a selection box) over a respective button while a press input is detected on the touch-sensitive surface (e.g., a touchpad or a touch screen) will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on a display of the device).

As used in the specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact or a stylus contact) on the touch-sensitive surface, or to a substitute (proxy) for the force or pressure of a contact on the touch-sensitive surface. The intensity of a contact has a range of values that includes at least four distinct values and more typically includes hundreds of distinct values (e.g., at least 256). Intensity of a contact is, optionally, determined (or measured) using various approaches and various sensors or combinations of sensors. For example, one or more force sensors underneath or adjacent to the touch-sensitive surface are, optionally, used to measure force at various points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average or a sum) to determine an estimated force of contact. Similarly, a pressure-sensitive tip of a stylus is, optionally, used to determine a pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are, optionally, used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the substitute measurements for contact force or pressure are used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the substitute measurements). In some implementations, the substitute measurements for contact force or pressure are converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure). Using the intensity of a contact as an attribute of a user input allows for user access to additional device functionality that may otherwise not be readily accessible by the user on a reduced-size device with limited real estate for displaying affordances (e.g., on a touch-sensitive display) and/or receiving user input (e.g., via a touch-sensitive display, a touch-sensitive surface, or a physical/mechanical control such as a knob or a button).
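The surrogate-to-pressure conversion described above can be sketched as follows. This is a minimal illustration only: the linear calibration model, constants, and function names are assumptions for demonstration, not taken from the specification.

```python
def estimated_force(capacitance_delta, calibration_gain=0.8, calibration_offset=0.0):
    """Convert a substitute measurement (here, the change in capacitance
    near the contact) into an estimated force, using an assumed linear
    calibration. Real devices would use a sensor-specific calibration."""
    return calibration_gain * capacitance_delta + calibration_offset

def exceeds_pressure_threshold(capacitance_delta, pressure_threshold):
    """Decide whether an intensity threshold expressed in pressure units
    has been exceeded, by first converting the surrogate measurement to an
    estimated force/pressure, as the second implementation above describes."""
    return estimated_force(capacitance_delta) >= pressure_threshold
```

A surrogate-unit threshold (the first implementation described above) would instead compare `capacitance_delta` against the threshold directly, skipping the conversion.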

In some embodiments, contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by a user (e.g., to determine whether a user has "clicked" on an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, a mouse "click" threshold of a trackpad or touch-screen display can be set to any of a large range of predefined threshold values without changing the trackpad or touch-screen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click "intensity" parameter).

As used in the specification and claims, the term "characteristic intensity" of a contact refers to a characteristic of the contact based on one or more intensities of the contact. In some embodiments, the characteristic intensity is based on multiple intensity samples. The characteristic intensity is, optionally, based on a predefined number of intensity samples, or a set of intensity samples collected during a predetermined time period (e.g., 0.05, 0.1, 0.2, 0.5, 1, 2, 5, or 10 seconds) relative to a predefined event (e.g., after detecting the contact, prior to detecting liftoff of the contact, before or after detecting a start of movement of the contact, prior to detecting an end of the contact, before or after detecting an increase in intensity of the contact, and/or before or after detecting a decrease in intensity of the contact). A characteristic intensity of a contact is, optionally, based on one or more of: a maximum value of the intensities of the contact, a mean value of the intensities of the contact, an average value of the intensities of the contact, a top 10 percentile value of the intensities of the contact, a value at the half maximum of the intensities of the contact, a value at the 90 percent maximum of the intensities of the contact, a value produced by low-pass filtering the intensities of the contact over a predefined period or starting at a predefined time, or the like. In some embodiments, the duration of the contact is used in determining the characteristic intensity (e.g., when the characteristic intensity is an average of the intensity of the contact over time). In some embodiments, the characteristic intensity is compared to a set of one or more intensity thresholds to determine whether an operation has been performed by a user. For example, the set of one or more intensity thresholds may include a first intensity threshold and a second intensity threshold. In this example, a contact with a characteristic intensity that does not exceed the first intensity threshold results in a first operation, a contact with a characteristic intensity that exceeds the first intensity threshold and does not exceed the second intensity threshold results in a second operation, and a contact with a characteristic intensity that exceeds the second intensity threshold results in a third operation. In some embodiments, a comparison between the characteristic intensity and one or more intensity thresholds is used to determine whether or not to perform one or more operations (e.g., whether to perform a respective operation or forgo performing the respective operation), rather than being used to determine whether to perform a first operation or a second operation.
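The two-threshold example above can be sketched in code. This is an illustrative reduction only: the mean is just one of the characteristic-intensity choices the text lists (maximum, top-10% value, low-pass filtered value, etc.), and the operation names are placeholders.

```python
def characteristic_intensity(samples):
    """One possible reduction of a contact's intensity samples to a single
    characteristic intensity: the mean over the sampling window."""
    return sum(samples) / len(samples)

def select_operation(samples, first_threshold, second_threshold):
    """Map a contact's characteristic intensity to one of three operations,
    following the two-threshold example in the text: at or below the first
    threshold -> first operation; above the first but not the second ->
    second operation; above the second -> third operation."""
    intensity = characteristic_intensity(samples)
    if intensity <= first_threshold:
        return "first operation"
    elif intensity <= second_threshold:
        return "second operation"
    return "third operation"
```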

In some embodiments, a portion of a gesture is identified for purposes of determining a characteristic intensity. For example, a touch-sensitive surface may receive a continuous swipe contact transitioning from a start location and reaching an end location (e.g., a drag gesture), at which point the intensity of the contact increases. In this example, the characteristic intensity of the contact at the end location may be based on only a portion of the continuous swipe contact, and not the entire swipe contact (e.g., only the portion of the swipe contact at the end location). In some embodiments, a smoothing algorithm may be applied to the intensities of the swipe gesture prior to determining the characteristic intensity of the contact. For example, the smoothing algorithm optionally includes one or more of: an unweighted sliding-average smoothing algorithm, a triangular smoothing algorithm, a median filter smoothing algorithm, and/or an exponential smoothing algorithm. In some circumstances, these smoothing algorithms eliminate narrow spikes or dips in the intensities of the swipe contact for purposes of determining a characteristic intensity.
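Two of the smoothing algorithms named above can be sketched as follows; the window sizes are assumptions for illustration. Note how the median filter removes a narrow spike that a sliding average would only dilute.

```python
import statistics

def sliding_average(intensities, window=3):
    """Unweighted sliding-average smoothing over a trailing window."""
    smoothed = []
    for i in range(len(intensities)):
        lo = max(0, i - window + 1)
        chunk = intensities[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

def median_filter(intensities, window=3):
    """Median-filter smoothing; eliminates narrow spikes or dips in the
    intensity samples, as described in the text."""
    half = window // 2
    smoothed = []
    for i in range(len(intensities)):
        lo = max(0, i - half)
        hi = min(len(intensities), i + half + 1)
        smoothed.append(statistics.median(intensities[lo:hi]))
    return smoothed
```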

The user interface figures described herein optionally include various intensity diagrams that show the current intensity of the contact on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact-detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD (e.g., that is at least initially higher than ITL), and/or one or more other intensity thresholds (e.g., an intensity threshold ITH that is lower than ITL)). This intensity diagram is typically not part of the displayed user interface, but is provided to aid in the interpretation of the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform operations that are different from operations typically associated with clicking a button of a physical mouse or a trackpad. In some embodiments, when a contact is detected with a characteristic intensity below the light press intensity threshold (e.g., and above a nominal contact-detection intensity threshold IT0, below which the contact is no longer detected), the device will move a focus selector in accordance with movement of the contact on the touch-sensitive surface without performing an operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.

In some embodiments, the response of the device to inputs detected by the device depends on criteria based on the contact intensity during the input. For example, for some "light press" inputs, the intensity of a contact exceeding a first intensity threshold during the input triggers a first response. In some embodiments, the response of the device to inputs detected by the device depends on criteria that include both the contact intensity during the input and time-based criteria. For example, for some "deep press" inputs, the intensity of a contact exceeding a second intensity threshold during the input, greater than the first intensity threshold for a light press, triggers a second response only if a delay time has elapsed between meeting the first intensity threshold and meeting the second intensity threshold. This delay time is typically less than 200 ms (milliseconds) in duration (e.g., 40, 100, or 120 ms, depending on the magnitude of the second intensity threshold, with the delay time increasing as the second intensity threshold increases). This delay time helps to avoid accidental recognition of deep press inputs. As another example, for some "deep press" inputs, there is a reduced-sensitivity time period that occurs after the time at which the first intensity threshold is met. During the reduced-sensitivity time period, the second intensity threshold is increased. This temporary increase in the second intensity threshold also helps to avoid accidental deep press inputs. For other deep press inputs, the response to detection of a deep press input does not depend on time-based criteria.
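The time-based criterion above (a delay that grows with the second threshold's magnitude) can be sketched as follows. The specific breakpoints mapping threshold magnitude to delay are illustrative assumptions; the text only fixes the 40-120 ms range and the monotonic relationship.

```python
def deep_press_delay(second_threshold_magnitude):
    """Illustrative mapping from the magnitude of the second intensity
    threshold to the required delay time, in seconds. The delay increases
    as the second intensity threshold increases (breakpoints assumed)."""
    if second_threshold_magnitude < 1.5:
        return 0.040   # 40 ms
    elif second_threshold_magnitude < 2.5:
        return 0.100   # 100 ms
    return 0.120       # 120 ms

def deep_press_triggered(t_first_met, t_second_met, second_threshold_magnitude):
    """The second ('deep press') response is triggered only if the delay
    time has elapsed between meeting the first and second thresholds."""
    return (t_second_met - t_first_met) >= deep_press_delay(second_threshold_magnitude)
```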

In some embodiments, one or more of the input intensity thresholds and/or the corresponding outputs vary based on one or more factors, such as user settings, contact motion, input timing, application running, rate at which the intensity is applied, number of concurrent inputs, user history, environmental factors (e.g., ambient noise), focus selector position, and the like. Exemplary factors are described in U.S. patent application Ser. Nos. 14/399,606 and 14/624,296, which are incorporated by reference herein in their entireties.

For example, FIG. 4C illustrates a dynamic intensity threshold 480 that changes over time based in part on the intensity of touch input 476 over time. Dynamic intensity threshold 480 is a sum of two components: a first component 474 that decays over time after a predefined delay time p1 from when touch input 476 is initially detected, and a second component 478 that trails the intensity of touch input 476 over time. The initial high intensity threshold of first component 474 reduces accidental triggering of a "deep press" response, while still allowing an immediate "deep press" response if touch input 476 provides sufficient intensity. Second component 478 reduces unintentional triggering of a "deep press" response by gradual intensity fluctuations of a touch input. In some embodiments, when touch input 476 satisfies dynamic intensity threshold 480 (e.g., at point 481 in FIG. 4C), the "deep press" response is triggered.
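The two-component threshold of FIG. 4C can be sketched as follows. All constants (delay time, initial value, decay rate, trailing lag and fraction) are illustrative assumptions; only the structure — a decaying first component plus a second component that trails the input intensity — comes from the text.

```python
import math

def dynamic_intensity_threshold(t, intensity_history, p1=0.08,
                                initial=2.0, decay_rate=10.0, trail_lag=0.02):
    """Sketch of the FIG. 4C threshold: a first component that holds an
    initial high value until delay time p1 and then decays exponentially,
    plus a second component that trails the input intensity with a lag.
    `intensity_history` maps a time to the input's intensity at that time."""
    if t <= p1:
        first = initial
    else:
        first = initial * math.exp(-decay_rate * (t - p1))
    t_lagged = t - trail_lag
    second = 0.5 * intensity_history(t_lagged) if t_lagged >= 0 else 0.0
    return first + second

def deep_press_satisfied(t, intensity_history):
    """The deep-press response triggers when the current intensity meets
    the dynamic threshold (point 481 in FIG. 4C)."""
    return intensity_history(t) >= dynamic_intensity_threshold(t, intensity_history)
```

For a steady, moderate-intensity press, the threshold starts high (blocking an accidental deep press) and decays toward the trailing component, so the same intensity satisfies the threshold later.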

FIG. 4D illustrates another dynamic intensity threshold 486 (e.g., intensity threshold ID). FIG. 4D also illustrates two other intensity thresholds: a first intensity threshold IH and a second intensity threshold IL. In FIG. 4D, although touch input 484 satisfies the first intensity threshold IH and the second intensity threshold IL prior to time p2, no response is provided until delay time p2 has elapsed at time 482. Also in FIG. 4D, dynamic intensity threshold 486 decays over time, with the decay starting at time 488 after a predefined delay time p1 has elapsed from time 482 (when the response associated with the second intensity threshold IL was triggered). This type of dynamic intensity threshold reduces accidental triggering of a response associated with the dynamic intensity threshold ID immediately after, or concurrently with, triggering a response associated with a lower intensity threshold, such as the first intensity threshold IH or the second intensity threshold IL.

FIG. 4E illustrates yet another dynamic intensity threshold 492 (e.g., intensity threshold ID). In FIG. 4E, a response associated with the intensity threshold IL is triggered after delay time p2 has elapsed from when touch input 490 is initially detected. Concurrently, dynamic intensity threshold 492 decays after predefined delay time p1 has elapsed from when touch input 490 is initially detected. So a decrease in intensity of touch input 490 after triggering the response associated with the intensity threshold IL, followed by an increase in the intensity of touch input 490, without releasing touch input 490, can trigger a response associated with the intensity threshold ID (e.g., at time 494), even when the intensity of touch input 490 is below another intensity threshold, for example, the intensity threshold IL.

An increase of characteristic intensity of the contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase of characteristic intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase of characteristic intensity of the contact from an intensity below the contact-detection intensity threshold IT0 to an intensity between the contact-detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch-surface. A decrease of characteristic intensity of the contact from an intensity above the contact-detection intensity threshold IT0 to an intensity below the contact-detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch-surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, a shaded circle or oval is used to represent intensity of a contact on the touch-sensitive surface. In some illustrations, a circle or oval without shading is used to represent a respective contact on the touch-sensitive surface without specifying the intensity of the respective contact.
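The four named transitions above can be sketched as a classifier. The threshold values are placeholders (the text only fixes their ordering IT0 < ITL < ITD), and a single jump that crosses both ITL and ITD is classified as a deep press, consistent with the "from below ITD to above ITD" definition.

```python
IT0 = 0.1   # contact-detection intensity threshold (assumed nonzero here)
IT_L = 1.0  # light press intensity threshold (placeholder value)
IT_D = 2.0  # deep press intensity threshold (placeholder value)

def classify_transition(previous, current):
    """Name the intensity transition between two consecutive characteristic
    intensities, per the definitions in the text; None if no named
    transition applies."""
    if previous < IT_L and IT_L <= current < IT_D:
        return "light press"
    if previous < IT_D and current >= IT_D:
        return "deep press"
    if previous < IT0 and IT0 <= current < IT_L:
        return "contact detected"
    if previous >= IT0 and current < IT0:
        return "liftoff"
    return None
```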

In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting the respective press input performed with a respective contact (or a plurality of contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or plurality of contacts) above a press-input intensity threshold. In some embodiments, the respective operation is performed in response to detecting the increase in intensity of the respective contact above the press-input intensity threshold (e.g., the respective operation is performed on a "down stroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the press-input threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input).

In some embodiments, the device employs intensity hysteresis to avoid accidental inputs sometimes termed "jitter," where the device defines or selects a hysteresis intensity threshold with a predefined relationship to the press-input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press-input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press-input intensity threshold). Thus, in some embodiments, the press input includes an increase in intensity of the respective contact above the press-input intensity threshold and a subsequent decrease in intensity of the contact below the hysteresis intensity threshold that corresponds to the press-input intensity threshold, and the respective operation is performed in response to detecting the subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., the respective operation is performed on an "up stroke" of the respective press input). Similarly, in some embodiments, the press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press-input intensity threshold and, optionally, a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and the respective operation is performed in response to detecting the press input (e.g., the increase in intensity of the contact or the decrease in intensity of the contact, depending on the circumstances).
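The hysteresis scheme above is a small state machine: a press is recognized on crossing the press-input threshold, and released only after intensity falls below the lower hysteresis threshold, so intensity wobble between the two thresholds does not generate spurious "down"/"up" events. A minimal sketch, assuming the 75% proportion mentioned in the text:

```python
class PressDetector:
    """Hysteresis-based press detection to avoid 'jitter'. The hysteresis
    threshold is a fixed proportion of the press-input threshold (75% here,
    one of the proportions the text mentions)."""

    def __init__(self, press_threshold=1.0, hysteresis_ratio=0.75):
        self.press_threshold = press_threshold
        self.hysteresis_threshold = press_threshold * hysteresis_ratio
        self.pressed = False

    def update(self, intensity):
        """Feed one intensity sample; return 'down', 'up', or None."""
        if not self.pressed and intensity >= self.press_threshold:
            self.pressed = True
            return "down"
        if self.pressed and intensity < self.hysteresis_threshold:
            self.pressed = False
            return "up"
        return None
```

Note that a dip to 0.9 after a press with threshold 1.0 produces no "up" event, because 0.9 is still above the 0.75 hysteresis threshold.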

For ease of explanation, the descriptions of operations performed in response to a press input associated with a press-input intensity threshold or in response to a gesture that includes the press input are, optionally, triggered in response to detecting: an increase in intensity of a contact above the press-input intensity threshold, an increase in intensity of a contact from an intensity below the hysteresis intensity threshold to an intensity above the press-input intensity threshold, a decrease in intensity of the contact below the press-input intensity threshold, or a decrease in intensity of the contact below the hysteresis intensity threshold corresponding to the press-input intensity threshold. Additionally, in examples where an operation is described as being performed in response to detecting a decrease in intensity of a contact below the press-input intensity threshold, the operation is, optionally, performed in response to detecting a decrease in intensity of the contact below a hysteresis intensity threshold corresponding to, and lower than, the press-input intensity threshold. As described above, in some embodiments, the triggering of these operations also depends on time-based criteria being met (e.g., a delay time has elapsed between a first intensity threshold being met and a second intensity threshold being met).

As used in the specification and claims, the term "tactile output" refers to physical displacement of a device relative to a previous position of the device, physical displacement of a component (e.g., a touch-sensitive surface) of a device relative to another component (e.g., the housing) of the device, or displacement of the component relative to a center of mass of the device that will be detected by a user with the user's sense of touch. For example, in situations where the device or the component of the device is in contact with a surface of a user that is sensitive to touch (e.g., a finger, palm, or other part of a user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation corresponding to a perceived change in physical characteristics of the device or the component of the device. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or trackpad) is, optionally, interpreted by the user as a "down click" or "up click" of a physical actuator button. In some cases, a user will feel a tactile sensation such as a "down click" or "up click" even when there is no movement of a physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movements. As another example, movement of the touch-sensitive surface is, optionally, interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in smoothness of the touch-sensitive surface. While such interpretations of touch by a user will be subject to the individualized sensory perceptions of the user, there are many sensory perceptions of touch that are common to a large majority of users. Thus, when a tactile output is described as corresponding to a particular sensory perception of a user (e.g., an "up click," a "down click," "roughness"), unless otherwise stated, the generated tactile output corresponds to physical displacement of the device or a component thereof that will generate the described sensory perception for a typical (or average) user. Using tactile outputs to provide haptic feedback to a user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user to provide proper inputs and reducing user errors when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, a tactile output pattern specifies characteristics of a tactile output, such as the amplitude of the tactile output, the shape of a movement waveform of the tactile output, the frequency of the tactile output, and/or the duration of the tactile output.
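A tactile output pattern is naturally modeled as a small record of these characteristics. The field names and units below are illustrative assumptions, not the actual data structure used by any device:

```python
from dataclasses import dataclass

@dataclass
class TactileOutputPattern:
    """Characteristics a tactile output pattern can specify, per the text.
    Field names, types, and units are assumptions for illustration."""
    amplitude: float     # e.g., normalized 0.0-1.0
    waveform: str        # shape of the movement waveform, e.g., "MiniTap"
    frequency_hz: float  # oscillation frequency in Hz
    duration_ms: float   # total duration in milliseconds

pattern = TactileOutputPattern(amplitude=1.0, waveform="MiniTap",
                               frequency_hz=200.0, duration_ms=30.0)
```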

When tactile outputs with different tactile output patterns are generated by a device (e.g., via one or more tactile output generators that move a moveable mass to generate tactile outputs), the tactile outputs may invoke different haptic sensations in a user holding or touching the device. While the sensation of the user is based on the user's perception of the tactile output, most users will be able to identify changes in waveform, frequency, and amplitude of tactile outputs generated by the device. Thus, waveform, frequency, and amplitude can be adjusted to indicate to the user that different operations have been performed. As such, tactile outputs with tactile output patterns that are designed, selected, and/or engineered to simulate characteristics (e.g., size, material, weight, stiffness, smoothness, etc.), behaviors (e.g., oscillation, displacement, acceleration, rotation, expansion, etc.), and/or interactions (e.g., collision, adhesion, repulsion, attraction, friction, etc.) of objects in a given environment (e.g., a user interface that includes graphical features and objects, a simulated physical environment with virtual boundaries and virtual objects, a real physical environment with physical boundaries and physical objects, and/or a combination of any of the above) will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device. Additionally, tactile outputs are, optionally, generated to correspond to feedback that is unrelated to a simulated physical characteristic, such as an input threshold or a selection of an object. Such tactile outputs will, in some circumstances, provide helpful feedback to users that reduces input errors and increases the efficiency of the user's operation of the device.

In some embodiments, a tactile output with a suitable tactile output pattern serves as a cue that an event of interest has occurred in a user interface or behind the scenes in a device. Examples of events of interest include activation of an affordance (e.g., a real or virtual button, or a toggle switch) provided on the device or in a user interface, success or failure of a requested operation, reaching or crossing a boundary in a user interface, entry into a new state, switching of input focus between objects, activation of a new mode, reaching or crossing an input threshold, detection or recognition of a type of input or gesture, etc. In some embodiments, tactile outputs are provided to serve as a warning or an alert for an impending event or outcome that would occur unless a redirection or interruption input is timely detected. Tactile outputs are also used in other contexts to enrich the user experience, improve the accessibility of the device to users with visual or motor difficulties or other accessibility needs, and/or improve the efficiency and functionality of the user interface and/or the device. Tactile outputs are, optionally, accompanied by audio outputs and/or visible user interface changes, which further enhance a user's experience when the user interacts with a user interface and/or a device, facilitate better conveyance of information regarding the state of the user interface and/or the device, and reduce input errors and increase the efficiency of the user's operation of the device.

Figures 4F-4H provide a set of sample haptic output patterns that can be used, individually or in combination, either as is or through one or more transformations (e.g., modulation, amplification, truncation, etc.), to create suitable haptic feedback in various scenarios and for various purposes, such as those mentioned above and those described with respect to the user interfaces and methods discussed herein. This example of a palette of haptic outputs shows how a set of three waveforms and eight frequencies can be used to generate an array of haptic output patterns. In addition to the haptic output patterns shown in these figures, each of these haptic output patterns is optionally adjusted in amplitude by changing a gain value for the haptic output pattern, as shown, for example, for FullTap 80Hz, FullTap 200Hz, MiniTap 80Hz, MiniTap 200Hz, MicroTap 80Hz, and MicroTap 200Hz in Figures 4I-4K, which are each shown with variants having a gain of 1.0, 0.75, 0.5, and 0.25. As shown in Figures 4I-4K, changing the gain of a haptic output pattern changes the amplitude of the pattern without changing the frequency of the pattern or changing the shape of the waveform. In some embodiments, changing the frequency of a haptic output pattern also results in a lower amplitude, as some haptic output generators are limited by how much force can be applied to the moveable mass; thus, higher-frequency movement of the mass is constrained to a lower amplitude to ensure that the acceleration needed to create the waveform does not require force outside of an operational force range of the haptic output generator (e.g., the peak amplitudes of the FullTap at 230Hz, 270Hz, and 300Hz are lower than the amplitudes of the FullTap at 80Hz, 100Hz, 125Hz, and 200Hz).
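The gain adjustment described above can be sketched as follows. This is an illustrative model rather than the device's actual generator code; it assumes a waveform is represented as a list of displacement samples, so that applying a gain simply scales each sample without altering the pattern's frequency or shape:

```python
def apply_gain(waveform, gain):
    """Scale a haptic waveform's amplitude by a gain factor
    (e.g., 1.0, 0.75, 0.5, or 0.25) without changing its
    frequency or the shape of the waveform."""
    if not 0.0 <= gain <= 1.0:
        raise ValueError("gain is expected to be between 0 and 1")
    return [sample * gain for sample in waveform]

# A toy one-cycle waveform (a "MiniTap"-like shape), sampled coarsely.
base = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7, 0.0]
half = apply_gain(base, 0.5)
```

Because only the sample values are scaled, the number of samples (and hence the duration and frequency of the pattern) is unchanged.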

Figures 4F-4K show haptic output patterns that have a particular waveform. The waveform of a haptic output pattern represents the pattern of physical displacements, relative to a neutral position (e.g., x_zero), versus time that a moveable mass goes through to generate a haptic output with that haptic output pattern. For example, a first set of haptic output patterns shown in Figure 4F (e.g., the haptic output patterns of a "FullTap") each have a waveform that includes an oscillation with two complete cycles (e.g., an oscillation that starts and ends in a neutral position and crosses the neutral position three times). A second set of haptic output patterns shown in Figure 4G (e.g., the haptic output patterns of a "MiniTap") each have a waveform that includes an oscillation with one complete cycle (e.g., an oscillation that starts and ends in a neutral position and crosses the neutral position one time). A third set of haptic output patterns shown in Figure 4H (e.g., the haptic output patterns of a "MicroTap") each have a waveform that includes an oscillation with one half of a complete cycle (e.g., an oscillation that starts and ends in a neutral position and does not cross the neutral position). The waveform of a haptic output pattern also includes a start buffer and an end buffer that represent the gradual speeding up and slowing down of the moveable mass at the start and at the end of the haptic output. The example waveforms shown in Figures 4F-4K include x_min and x_max values which represent the maximum and minimum extents of movement of the moveable mass. For larger electronic devices with larger moveable masses, the minimum and maximum extents of movement of the mass may be larger or smaller. The examples shown in Figures 4F-4K describe movement of a mass in one dimension; however, similar principles also apply to movement of a moveable mass in two or three dimensions.
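The three waveform families above differ only in how many complete cycles the oscillation contains (2.0 for a FullTap, 1.0 for a MiniTap, 0.5 for a MicroTap). A minimal sketch of such an oscillation as a sampled sinusoid follows; this is an assumption-laden illustration (a pure sine with no start/end buffers and no damping), not the actual waveform tables from the figures:

```python
import math

def tap_waveform(cycles, frequency_hz, n_samples=64):
    """Displacement samples for an oscillation with the given number of
    complete cycles (2.0 = FullTap, 1.0 = MiniTap, 0.5 = MicroTap),
    starting and ending at the neutral position (displacement 0)."""
    duration = cycles / frequency_hz  # seconds of oscillation
    samples = []
    for i in range(n_samples + 1):
        t = duration * i / n_samples
        samples.append(math.sin(2 * math.pi * frequency_hz * t))
    return samples

full_tap = tap_waveform(cycles=2.0, frequency_hz=150)
micro_tap = tap_waveform(cycles=0.5, frequency_hz=150)
```

Note that the half-cycle MicroTap never goes below the neutral position, matching the description that it does not cross neutral, while the two-cycle FullTap swings to both sides of it.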

As shown in Figures 4F-4K, each haptic output pattern also has a corresponding characteristic frequency that affects the "pitch" of the haptic sensation that is felt by a user from a haptic output with that characteristic frequency. For a continuous haptic output, the characteristic frequency represents the number of cycles that are completed within a given period of time (e.g., cycles per second) by the moveable mass of the haptic output generator. For a discrete haptic output, a discrete output signal is generated (e.g., with 0.5, 1, or 2 cycles), and the characteristic frequency value specifies how fast the moveable mass needs to move to generate a haptic output with that characteristic frequency. As shown in Figures 4F-4H, for each type of haptic output (e.g., as defined by a respective waveform, such as FullTap, MiniTap, or MicroTap), a higher frequency value corresponds to faster movement by the moveable mass, and hence, in general, a shorter time to complete the haptic output (e.g., the time to complete the required number of cycles for the discrete haptic output, plus start and end buffer times). For example, a FullTap with a characteristic frequency of 80Hz takes longer to complete than a FullTap with a characteristic frequency of 100Hz (e.g., 35.4ms vs. 28.3ms in Figure 4F). In addition, for a given frequency, a haptic output with more cycles in its waveform at a respective frequency takes longer to complete than a haptic output with fewer cycles in its waveform at the same respective frequency. For example, a FullTap at 150Hz takes longer to complete than a MiniTap at 150Hz (e.g., 19.4ms vs. 12.8ms), and a MiniTap at 150Hz takes longer to complete than a MicroTap at 150Hz (e.g., 12.8ms vs. 9.4ms). However, for haptic output patterns with different frequencies, this rule may not apply (e.g., a haptic output with more cycles but a higher frequency may take a shorter amount of time to complete than a haptic output with fewer cycles but a lower frequency, and vice versa). For example, at 300Hz, a FullTap takes as long as a MiniTap (e.g., 9.9ms).
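The completion-time relationships described above follow from the fact that the oscillation time is roughly cycles divided by frequency, plus the start and end buffer time. A rough illustrative calculation (the buffer duration here is an invented placeholder, not a value from the figures) reproduces the orderings:

```python
def completion_time_ms(cycles, frequency_hz, buffer_ms=3.0):
    """Approximate time to complete a discrete haptic output: the
    oscillation time (cycles / frequency) plus a placeholder
    start/end buffer time."""
    return cycles / frequency_hz * 1000.0 + buffer_ms

full_150 = completion_time_ms(2.0, 150)   # FullTap at 150Hz
mini_150 = completion_time_ms(1.0, 150)   # MiniTap at 150Hz
micro_150 = completion_time_ms(0.5, 150)  # MicroTap at 150Hz
full_80 = completion_time_ms(2.0, 80)     # FullTap at 80Hz
full_100 = completion_time_ms(2.0, 100)   # FullTap at 100Hz
```

At a fixed frequency, more cycles means a longer completion time; across frequencies, a high-frequency FullTap can finish sooner than a low-frequency MiniTap, consistent with the exception noted above.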

As shown in Figures 4F-4K, a haptic output pattern also has a characteristic amplitude that affects the amount of energy that is contained in the haptic signal, or a "strength" of the haptic sensation that may be felt by a user through a haptic output with that characteristic amplitude. In some embodiments, the characteristic amplitude of a haptic output pattern refers to an absolute or normalized value that represents the maximum displacement of the moveable mass from a neutral position when generating the haptic output. In some embodiments, the characteristic amplitude of a haptic output pattern is adjustable, e.g., by a fixed or dynamically determined gain factor (e.g., a value between 0 and 1), in accordance with various conditions (e.g., customized based on user interface contexts and behaviors) and/or preconfigured metrics (e.g., input-based metrics and/or user-interface-based metrics). In some embodiments, an input-based metric (e.g., an intensity-change metric or an input-speed metric) measures a characteristic of an input (e.g., the rate of change of the characteristic intensity of a contact in a press input, or the rate of movement of the contact across a touch-sensitive surface) during the input that triggers generation of the haptic output. In some embodiments, a user-interface-based metric (e.g., a speed-across-boundary metric) measures a characteristic of a user interface element (e.g., the speed of movement of the element across a hidden or visible boundary in a user interface) during the user interface change that triggers generation of the haptic output. In some embodiments, the characteristic amplitude of a haptic output pattern may be modulated by an "envelope", and the peaks of adjacent cycles may have different amplitudes, where one of the waveforms shown above is further modified by multiplication by an envelope parameter that changes over time (e.g., from 0 to 1) to gradually adjust the amplitude of portions of the haptic output over time as the haptic output is being generated.
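The envelope modulation just described can be sketched as an element-wise multiplication of a waveform by a time-varying envelope parameter. The following is an illustrative model (the sampled sine waveform and the linear ramp are assumptions for demonstration, not the device's actual envelopes):

```python
import math

def envelope_modulate(waveform, envelope):
    """Multiply each sample of a haptic waveform by a time-varying
    envelope parameter (each value between 0 and 1), gradually
    adjusting the amplitude of portions of the output over time."""
    if len(waveform) != len(envelope):
        raise ValueError("waveform and envelope must have the same length")
    return [s * e for s, e in zip(waveform, envelope)]

n = 65
# A two-cycle oscillation (a "FullTap"-like shape).
raw = [math.sin(2 * math.pi * 2 * i / (n - 1)) for i in range(n)]
# An envelope parameter ramping from 0 to 1 over the output.
ramp = [i / (n - 1) for i in range(n)]
shaped = envelope_modulate(raw, ramp)
```

With a rising envelope, the peaks of later cycles are larger than the peaks of earlier ones, illustrating how adjacent cycles can be given different amplitudes.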

Although only specific frequencies, amplitudes, and waveforms are represented in the sample haptic output patterns in Figures 4F-4K for illustrative purposes, haptic output patterns with other frequencies, amplitudes, and waveforms may be used for similar purposes. For example, waveforms that have between 0.5 and 4 cycles can be used. Other frequencies in the range of 60Hz-400Hz may be used as well.

User Interfaces and Associated Processes

Attention is now directed toward embodiments of user interfaces ("UI") and associated processes that may be implemented on an electronic device, such as portable multifunction device 100 or device 300, with a display, a touch-sensitive surface, (optionally) one or more tactile output generators for generating tactile outputs, and (optionally) one or more sensors for detecting intensities of contacts with the touch-sensitive surface.

Figures 5A-5AT illustrate example user interfaces for displaying a representation of a virtual object while switching from displaying a first user interface region to displaying a second user interface region, in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting contacts on the touch-sensitive surface 451 while the user interfaces shown in the figures are displayed on the display 450, along with a focus selector.

Figure 5A illustrates the real-world context in which the user interfaces described with reference to Figures 5B-5AT are used.

Figure 5A shows a physical space 5002 in which a table 5004 is located. Device 100 is held in a user's hand 5006.

Figure 5B shows an instant messaging user interface 5008 displayed on display 112. The instant messaging user interface 5008 includes: a message bubble 5010 that includes a received text message 5012, a message bubble 5014 that includes a sent text message 5016, and a message bubble 5018 that includes a virtual object (e.g., a virtual chair 5020) received in a message and a virtual object indicator 5022, which indicates that the virtual chair 5020 is an object that is viewable in an augmented reality view (e.g., within a representation of the field of view of one or more cameras of device 100). The instant messaging user interface 5008 also includes a message input area 5024 that is configured to display message input.

Figures 5C-5G illustrate an input that causes a portion of the instant messaging user interface 5008 to be replaced by the field of view of one or more cameras of device 100. In Figure 5C, a contact 5026 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above a contact detection intensity threshold IT0 and below a hint press intensity threshold ITH, as indicated by intensity level meter 5028. In Figure 5D, the characteristic intensity of the contact 5026 increases above the hint press intensity threshold ITH, as indicated by the intensity level meter 5028, which causes the area of the message bubble 5018 to increase, the size of the virtual chair 5020 to increase, and the instant messaging user interface 5008 to begin to blur behind the message bubble 5018 (e.g., providing visual feedback to the user of the effect of the increased characteristic intensity of the contact). In Figure 5E, the characteristic intensity of the contact 5026 increases above a light press intensity threshold ITL, as indicated by the intensity level meter 5028, which causes the message bubble 5018 to be replaced by a platter 5030, the size of the virtual chair 5020 to increase further, and the instant messaging user interface 5008 to blur further behind the platter 5030. In Figure 5F, the characteristic intensity of the contact 5026 increases above a deep press intensity threshold ITD, as indicated by the intensity level meter 5028, which causes tactile output generator 167 of device 100 to output a tactile output (as illustrated at 5032) indicating that criteria for replacing a portion of the instant messaging user interface 5008 with the field of view of the one or more cameras of device 100 have been met.
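The staged response to contact intensity described for Figures 5C-5F can be modeled as selecting a user interface state from a set of ordered intensity thresholds. A minimal sketch follows; the numeric threshold values and state labels are illustrative placeholders (the actual thresholds IT0, ITH, ITL, and ITD are device-defined), not values from this document:

```python
# Ordered intensity thresholds (placeholder values on a normalized scale).
IT_0 = 0.05  # contact detection intensity threshold
IT_H = 0.25  # hint press intensity threshold
IT_L = 0.50  # light press intensity threshold
IT_D = 0.75  # deep press intensity threshold

def ui_state_for_intensity(intensity):
    """Map a contact's characteristic intensity to the staged visual
    feedback described for Figures 5C-5F (labels are illustrative)."""
    if intensity < IT_0:
        return "no contact"
    if intensity < IT_H:
        return "messaging UI unchanged"          # as in Figure 5C
    if intensity < IT_L:
        return "bubble enlarged, background blurred"  # as in Figure 5D
    if intensity < IT_D:
        return "platter shown, object enlarged"  # as in Figure 5E
    return "replace portion with camera view"    # as in Figure 5F
```

Because the mapping depends only on the current intensity, decreasing the intensity before IT_D is reached simply selects an earlier state, which is consistent with the reversibility described below for Figures 5C-5E.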

In some embodiments, the progression illustrated in Figures 5C-5E is reversible before the characteristic intensity of the contact 5026 reaches the deep press intensity threshold ITD (as shown in Figure 5F). For example, after the increases illustrated in Figures 5D and/or 5E, a decrease in the characteristic intensity of the contact 5026 causes the interface state corresponding to the decreased intensity level of the contact 5026 to be displayed (e.g., in accordance with a determination that the decreased characteristic intensity of the contact is above the light press intensity threshold ITL, the interface shown in Figure 5E is displayed; in accordance with a determination that the decreased characteristic intensity of the contact is above the hint press intensity threshold ITH, the interface shown in Figure 5D is displayed; and in accordance with a determination that the decreased characteristic intensity of the contact is below the hint press intensity threshold ITH, the interface shown in Figure 5C is displayed). In some embodiments, after the increases illustrated in Figures 5D and/or 5E, a decrease in the characteristic intensity of the contact 5026 causes the interface shown in Figure 5C to be redisplayed.

Figures 5F-5J illustrate an animated transition during which a portion of the instant messaging user interface is replaced by the field of view of the one or more cameras of device 100 (hereinafter referred to as "the cameras"). From Figure 5F to Figure 5G, the contact 5026 has lifted off of the touch screen 112, and the virtual chair 5020 has rotated toward its final position shown in Figure 5I. In Figure 5G, the field of view 5034 of the cameras has begun to fade into view in the platter 5030 (as indicated by the dashed lines). In Figure 5H, the field of view 5034 of the cameras (e.g., showing a view of the physical space 5002 as captured by the cameras) has finished fading into view in the platter 5030. From Figure 5H to Figure 5I, the virtual chair 5020 continues to rotate toward its final position shown in Figure 5I. In Figure 5I, tactile output generator 167 has output a tactile output (as illustrated at 5036) indicating that at least one plane (e.g., floor surface 5038) has been detected in the field of view 5034 of the cameras. The virtual chair 5020 is placed on the detected plane (e.g., in accordance with a determination by device 100 that the virtual object is configured to be placed in an upright orientation on a detected horizontal surface, such as the floor surface 5038). The size of the virtual chair 5020 is continuously adjusted on the display 112 as the portion of the instant messaging user interface transitions into the representation of the field of view 5034 of the cameras on the display 112. For example, the scale of the virtual chair 5020 relative to the physical space 5002, as shown in the field of view 5034 of the cameras, is determined based on a predefined "real-world" size of the virtual chair 5020 and/or the size of a detected object (e.g., the table 5004) in the field of view 5034 of the cameras. In Figure 5J, the virtual chair 5020 is displayed at its final position, with a predefined orientation relative to the floor surface detected in the field of view 5034 of the cameras. In some embodiments, the initial landing position of the virtual chair 5020 is a predefined position relative to a plane detected in the field of view of the cameras, such as the center of an unoccupied area of the detected plane. In some embodiments, the initial landing position of the virtual chair 5020 is determined in accordance with the lift-off position of the contact 5026 (e.g., in Figure 5F, the lift-off position of the contact 5026 may differ from the initial touch-down position of the contact 5026, due to movement of the contact 5026 across the touch screen 112 after the criteria for transitioning to the augmented reality environment were met).

Figures 5K-5L illustrate movement of device 100 (e.g., by the user's hand 5006) that adjusts the field of view 5034 of the cameras. As device 100 moves relative to the physical space 5002, the displayed field of view 5034 of the cameras changes, and the virtual chair 5020 remains in the same position and orientation relative to the floor surface 5038 within the displayed field of view 5034 of the cameras.

Figures 5M-5Q illustrate an input that causes the virtual chair 5020 to move across the floor surface 5038 in the displayed field of view 5034 of the cameras. In Figure 5N, a contact 5040 with touch screen 112 of device 100 is detected at a location that corresponds to the virtual chair 5020. In Figures 5N-5O, as the contact 5040 moves along a path indicated by arrow 5042, the contact 5040 drags the virtual chair 5020. As the virtual chair 5020 is moved by the contact 5040, the size of the virtual chair 5020 changes to maintain the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the cameras. For example, in Figures 5N-5P, the size of the virtual chair 5020 decreases as the virtual chair 5020 moves from the foreground of the field of view 5034 of the cameras to a position that is further away from device 100 and closer to the table 5004 in the field of view 5034 of the cameras (e.g., such that the scale of the chair relative to the table 5004 in the field of view 5034 of the cameras is maintained). In addition, as the virtual chair 5020 is moved by the contact 5040, planes that are identified in the field of view 5034 of the cameras are highlighted. For example, in Figure 5O, the floor plane 5038 is highlighted. In Figures 5O-5P, as the contact 5040 moves along a path indicated by arrow 5044, the contact 5040 continues to drag the virtual chair 5020. In Figure 5Q, the contact 5040 has lifted off of the touch screen 112. In some embodiments, as illustrated in Figures 5N-5Q, the movement path of the virtual chair 5020 is constrained by the floor surface 5038 in the field of view 5034 of the cameras, as if the contact 5040 were dragging the virtual chair 5020 across the floor surface 5038. In some embodiments, the contact 5040 described with reference to Figures 5N-5P is a continuation of the contact 5026 described with reference to Figures 5C-5F (e.g., the contact 5026 is not lifted off, and the same contact that caused a portion of the instant messaging user interface 5008 to be replaced by the field of view 5034 of the cameras also drags the virtual chair 5020 within the field of view 5034 of the cameras).

Figures 5Q-5U illustrate an input that moves the virtual chair 5020 from the floor surface 5038 to a different plane detected in the field of view 5034 of the cameras (e.g., a tabletop 5046). In Figure 5R, a contact 5048 with touch screen 112 of device 100 is detected at a location that corresponds to the virtual chair 5020. In Figures 5R-5S, as the contact 5048 moves along a path indicated by arrow 5050, the contact 5048 drags the virtual chair 5020. As the virtual chair 5020 is moved by the contact 5048, the size of the virtual chair 5020 changes to maintain the scale of the virtual chair 5020 relative to the physical space 5002 as shown in the field of view 5034 of the cameras. In addition, as the virtual chair 5020 is moved by the contact 5048, the tabletop plane 5046 is highlighted (e.g., as shown in Figure 5S). In Figures 5S-5T, as the contact 5048 moves along a path indicated by arrow 5052, the contact 5048 continues to drag the virtual chair 5020. In Figure 5U, the contact 5048 has lifted off of the touch screen 112, and the virtual chair 5020 is placed on the tabletop plane 5046 in an upright orientation, facing the same direction as before.

Figures 5U-5AD illustrate an input that drags the virtual chair 5020 to an edge of the touch screen display 112, which causes the field of view 5034 of the cameras to cease to be displayed. In Figure 5V, a contact 5054 with touch screen 112 of device 100 is detected at a location that corresponds to the virtual chair 5020. In Figures 5V-5W, as the contact 5054 moves along a path indicated by arrow 5056, the contact 5054 drags the virtual chair 5020. In Figures 5W-5X, as the contact 5054 moves along a path indicated by arrow 5058, the contact 5054 continues to drag the virtual chair 5020 to the position shown in Figure 5X.

As shown in Figures 5Y-5AD, the input by the contact 5054 illustrated in Figures 5U-5X causes a transition from displaying the field of view 5034 of the cameras in the platter 5030 to ceasing to display the field of view 5034 of the cameras and returning to fully displaying the instant messaging user interface 5008. In Figure 5Y, the field of view 5034 of the cameras begins to fade out in the platter 5030. In Figures 5Y-5Z, the platter 5030 transitions into the message bubble 5018. In Figure 5Z, the field of view 5034 of the cameras is no longer displayed. In Figure 5AA, the instant messaging user interface 5008 ceases to be blurred, and the size of the message bubble 5018 returns to the original size of the message bubble 5018 (e.g., as shown in Figure 5B).

Figures 5AA-5AD illustrate an animated transition of the virtual chair 5020 that occurs as the virtual chair 5020 moves from the position that corresponds to the contact 5054 in Figure 5AA to the original position of the virtual chair 5020 in the instant messaging user interface 5008 (e.g., as shown in Figure 5B). In Figure 5AB, the contact 5054 has lifted off of the touch screen 112. In Figures 5AB-5AC, the size of the virtual chair 5020 gradually increases, and the virtual chair rotates toward its final position shown in Figure 5AD.

In Figures 5B-5AD, the virtual chair 5020 has substantially the same three-dimensional appearance within the instant messaging user interface 5008 and within the displayed field of view 5034 of the cameras, and the virtual chair 5020 maintains that same three-dimensional appearance during the transition from displaying the instant messaging user interface 5008 to displaying the field of view 5034 of the cameras, and during the reverse transition. In some embodiments, the representation of the virtual chair 5020 has a different appearance in an application user interface (e.g., the instant messaging user interface) than in the augmented reality environment (e.g., in the displayed field of view of the cameras). For example, the virtual chair 5020 optionally has a two-dimensional or more stylized appearance in the application user interface, and a more realistic, textured, three-dimensional appearance in the augmented reality environment; and the intermediate appearances of the virtual chair 5020 during the transition between displaying the application user interface and displaying the augmented reality environment are a series of appearances interpolated between the two-dimensional appearance and the three-dimensional appearance of the virtual chair 5020.
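The series of interpolated intermediate appearances can be modeled as per-parameter linear interpolation between the two end-state appearances as a transition progress value goes from 0 to 1. The parameter names in this sketch are purely illustrative inventions, not parameters defined by this document:

```python
def lerp(a, b, t):
    """Linearly interpolate between values a and b for progress t in [0, 1]."""
    return a + (b - a) * t

def intermediate_appearance(flat, realistic, progress):
    """Blend per-parameter between a stylized 2D appearance and a
    textured 3D appearance (the parameter names are illustrative)."""
    return {key: lerp(flat[key], realistic[key], progress) for key in flat}

flat_look = {"depth": 0.0, "texture_detail": 0.2}  # hypothetical 2D end state
real_look = {"depth": 1.0, "texture_detail": 1.0}  # hypothetical 3D end state
midway = intermediate_appearance(flat_look, real_look, 0.5)
```

Evaluating the blend at a sequence of progress values yields the series of intermediate appearances described above; progress 0 reproduces the application-UI appearance and progress 1 the augmented-reality appearance.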

Figure 5AE illustrates an internet browser user interface 5060. The internet browser user interface 5060 includes a URL/search input area 5062 that is configured to display a URL/search input for a web browser, and browser controls 5064 (e.g., navigation controls that include a back button and a forward button, a share control for displaying a sharing interface, a bookmarks control for displaying a bookmarks interface, and a tabs control for displaying a tabs interface). The internet browser user interface 5060 also includes web objects 5066, 5068, 5070, 5072, 5074, and 5076. In some embodiments, a respective web object includes a link, such that in response to a tap input on the respective web object, an internet location of the link that corresponds to the web object is displayed in the internet browser user interface 5060 (e.g., replacing display of the respective web object). Web objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Web objects 5070, 5074, and 5076 include two-dimensional images (however, the two-dimensional images of web objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of virtual object indicators). The virtual object that corresponds to web object 5068 is a virtual lamp 5084.

Figures 5AF-5AH illustrate an input that causes a portion of the internet browser user interface 5060 to be replaced by the field of view 5034 of the cameras. In Figure 5AF, a contact 5086 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the hint press intensity threshold ITH, as indicated by the intensity level meter 5028. In Figure 5AG, the characteristic intensity of the contact 5086 has increased above the light press intensity threshold ITL, as indicated by the intensity level meter 5028, which has caused the field of view 5034 of the cameras to be displayed in the web object 5068 (e.g., overlaid by the virtual lamp 5084). In Figure 5AH, the characteristic intensity of the contact 5086 increases above the deep press intensity threshold ITD, as indicated by the intensity level meter 5028, which causes the field of view 5034 of the cameras to replace a larger portion of the internet browser user interface 5060 (e.g., leaving only the URL/search input area 5062 and the browser controls 5064), and tactile output generator 167 of device 100 outputs a tactile output (as illustrated at 5088) indicating that criteria for replacing a portion of the internet browser user interface 5060 with the field of view 5034 of the cameras have been met. In some embodiments, in response to the input described with reference to Figures 5AF-5AH, the field of view 5034 of the cameras completely replaces the internet browser user interface 5060 on the touch screen display 112.

FIGS. 5AI-5AM illustrate an input that moves virtual lamp 5084. In FIGS. 5AI-5AJ, contact 5086 drags virtual lamp 5084 as contact 5086 moves along the path indicated by arrow 5090. While virtual lamp 5084 is moved by contact 5086, the size of virtual lamp 5084 does not change, and the path of virtual lamp 5084 is optionally not constrained by the structure of the physical space captured in the field of view of the cameras. While virtual lamp 5084 is moved by contact 5086, planes identified in field of view 5034 of the cameras are highlighted. For example, in FIG. 5AJ, floor plane 5038 is highlighted as virtual lamp 5084 moves over floor plane 5038. In FIGS. 5AJ-5AK, contact 5086 continues to drag virtual lamp 5084 as contact 5086 moves along the path indicated by arrow 5092. In FIGS. 5AK-5AL, as contact 5086 moves along the path indicated by arrow 5094, contact 5086 continues to drag virtual lamp 5084, floor plane 5038 ceases to be highlighted, and table surface 5046 is highlighted as virtual lamp 5084 moves over table 5004. In FIG. 5AM, contact 5086 has lifted off touch screen 112. When contact 5086 has lifted off, virtual lamp 5084 is resized to have the correct scale relative to table 5004 in field of view 5034 of the cameras, and virtual lamp 5084 is placed in a vertical orientation on table surface 5046 in field of view 5034 of the cameras.

FIGS. 5AM-5AQ illustrate an input that drags virtual lamp 5084 to the edge of touch screen display 112, which causes field of view 5034 of the cameras to cease being displayed and internet browser user interface 5060 to be restored. In FIG. 5AN, contact 5096 with touch screen 112 of device 100 is detected at a location corresponding to virtual lamp 5084. In FIGS. 5AN-5AO, contact 5096 drags virtual lamp 5084 as contact 5096 moves along the path indicated by arrow 5098. In FIGS. 5AO-5AP, as contact 5096 moves along the path indicated by arrow 5100, contact 5096 continues to drag virtual lamp 5084 to the position shown in FIG. 5AP. In FIG. 5AQ, contact 5096 has lifted off touch screen 112.

As shown in FIGS. 5AQ-5AT, the input by contact 5096 shown in FIGS. 5AM-5AP causes a transition from displaying field of view 5034 of the cameras to ceasing to display field of view 5034 of the cameras and returning to fully displaying internet browser user interface 5060. In FIG. 5AR, field of view 5034 of the cameras begins to fade out (as indicated by the dashed lines). In FIGS. 5AR-5AT, the size of virtual lamp 5084 increases, and the virtual lamp moves toward its original position in internet browser user interface 5060. In FIG. 5AS, field of view 5034 of the cameras is no longer displayed, and internet browser user interface 5060 begins to fade in (as indicated by the dashed lines). In FIG. 5AT, internet browser user interface 5060 is fully displayed, and virtual lamp 5084 has returned to its original size and position within internet browser user interface 5060.

FIGS. 6A-6AJ illustrate example user interfaces, in accordance with some embodiments, for displaying a first representation of a virtual object in a first user interface region, displaying a second representation of the virtual object in a second user interface region, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with display 450 and separate touch-sensitive surface 451, in response to detecting the contacts on touch-sensitive surface 451 while displaying the user interfaces shown in the figures on display 450, along with a focus selector.

FIG. 6A shows instant messaging user interface 5008, which includes: message bubble 5010 including received text message 5012, message bubble 5014 including sent text message 5016, and message bubble 5018 including a virtual object (e.g., virtual chair 5020) received in a message and virtual object indicator 5022, which indicates that virtual chair 5020 is an object that is visible in an augmented reality view (e.g., within a displayed field of view of one or more cameras of device 100). Instant messaging user interface 5008 is described in further detail with reference to FIG. 5B.

FIGS. 6B-6C illustrate an input that rotates virtual chair 5020. In FIG. 6B, contact 6002 with touch screen 112 of device 100 is detected. Contact 6002 moves on touch screen 112 along the path indicated by arrow 6004. In FIG. 6C, in response to the movement of the contact, instant messaging user interface 5008 scrolls upward (causing message bubble 5010 to scroll off the display, causing message bubbles 5014 and 5018 to scroll upward, and revealing additional message bubble 6005), and virtual chair 5020 rotates (e.g., tilts upward). The magnitude and direction of the rotation of virtual chair 5020 correspond to the movement of contact 6002 along the path indicated by arrow 6004. In FIG. 6D, contact 6002 has lifted off touch screen 112. In some embodiments, this rotational behavior of virtual chair 5020 within message bubble 5018 is used as an indication that virtual chair 5020 is a virtual object that is visible in an augmented reality environment that includes the field of view of the cameras of device 100.

FIGS. 6E-6L illustrate inputs that cause instant messaging user interface 5008 to be replaced by staging user interface 6010 and that subsequently change the orientation of virtual chair 5020. In FIG. 6E, contact 6006 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the hint intensity threshold ITH, as indicated by intensity level meter 5028. In FIG. 6F, as indicated by intensity level meter 5028, the characteristic intensity of contact 6006 increases above the hint intensity threshold ITH, which causes the area of message bubble 5018 to increase, the size of virtual chair 5020 to increase, and instant messaging user interface 5008 to begin to blur behind message bubble 5018 (e.g., providing the user with visual feedback of the effect of increasing the characteristic intensity of the contact). In FIG. 6G, as indicated by intensity level meter 5028, the characteristic intensity of contact 6006 increases above the light press intensity threshold ITL, which causes message bubble 5018 to be replaced by platter 6008, the size of virtual chair 5020 to increase further, and instant messaging user interface 5008 to blur further behind platter 6008. In FIG. 6H, as indicated by intensity level meter 5028, the characteristic intensity of contact 6006 increases above the deep press intensity threshold ITD, which causes instant messaging user interface 5008 to cease being displayed and initiates a fade-in of staging user interface 6010 (indicated by the dashed lines). In addition, as shown in FIG. 6H, the increase in the characteristic intensity of contact 6006 above the deep press intensity threshold ITD causes tactile output generator 167 of device 100 to output a tactile output (as indicated at 6012) indicating that the criteria for replacing instant messaging user interface 5008 with staging user interface 6010 have been met.

In some embodiments, the progression shown in FIGS. 6E-6G is reversible until the characteristic intensity of contact 6006 reaches the deep press intensity threshold ITD (as shown in FIG. 6H). For example, after the increases shown in FIG. 6F and/or FIG. 6G, decreasing the characteristic intensity of contact 6006 causes the interface state corresponding to the decreased intensity level of contact 6006 to be displayed (e.g., in accordance with a determination that the decreased characteristic intensity of the contact is above the light press intensity threshold ITL, the interface shown in FIG. 6G is displayed; in accordance with a determination that the decreased characteristic intensity of the contact is above the hint intensity threshold ITH, the interface shown in FIG. 6F is displayed; and in accordance with a determination that the decreased characteristic intensity of the contact is below the hint intensity threshold ITH, the interface shown in FIG. 6E is displayed). In some embodiments, after the increases shown in FIG. 6F and/or FIG. 6G, decreasing the characteristic intensity of contact 6006 causes the interface shown in FIG. 6E to be redisplayed.

In FIG. 6I, staging user interface 6010 is displayed. Staging user interface 6010 includes stage 6014 on which virtual chair 5020 is displayed. From FIG. 6H to FIG. 6I, virtual chair 5020 is animated to indicate the transition from the position of virtual chair 5020 in FIG. 6H to the position of virtual chair 5020 in FIG. 6I. For example, virtual chair 5020 is rotated to a predefined position relative to stage 6014, rotated to a predefined orientation, and/or rotated by a predefined distance (e.g., so that the virtual chair appears to be supported by stage 6014). Staging user interface 6010 also includes back control 6016, which, when activated (e.g., by a tap input at a location corresponding to back control 6016), causes the previously displayed user interface (e.g., instant messaging user interface 5008) to be redisplayed. Staging user interface 6010 also includes toggle control 6018, which indicates the current display mode (e.g., the current display mode is the staging user interface mode, as indicated by the highlighted "3D" indicator) and which, when activated, causes a transition to the selected display mode. For example, while staging user interface 6010 is displayed, a tap input by a contact at a location corresponding to toggle control 6018 (e.g., a location corresponding to the portion of toggle control 6018 that includes the text "World") causes staging user interface 6010 to be replaced by the field of view of the cameras. Staging user interface 6010 also includes sharing control 6020 (e.g., a sharing control for displaying a sharing interface).

FIGS. 6J-6L illustrate rotation of virtual chair 5020 relative to stage 6014 caused by movement of contact 6006. In FIGS. 6J-6K, virtual chair 5020 rotates (e.g., about a first axis perpendicular to the movement of contact 6006) as contact 6006 moves along the path indicated by arrow 6022. In FIGS. 6K-6L, virtual chair 5020 rotates (e.g., about a second axis perpendicular to the movement of contact 6006) as contact 6006 moves along the path indicated by arrow 6024 and subsequently along the path indicated by arrow 6025. In FIG. 6M, contact 6006 has lifted off touch screen 112. In some embodiments, as shown in FIGS. 6J-6L, the rotation of virtual chair 5020 is constrained by the surface of stage 6014. For example, at least one leg of virtual chair 5020 remains in contact with the surface of stage 6014 during rotation of the virtual chair. In some embodiments, the surface of stage 6014 serves as a frame of reference for free rotation and vertical translation of virtual chair 5020, without imposing particular constraints on the movement of virtual chair 5020.

FIGS. 6N-6O illustrate an input that resizes displayed virtual chair 5020. In FIG. 6N, first contact 6026 and second contact 6030 with touch screen 112 are detected. First contact 6026 moves along the path indicated by arrow 6028 while second contact 6030 moves along the path indicated by arrow 6032. In FIGS. 6N-6O, as first contact 6026 and second contact 6030 move along the paths indicated by arrows 6028 and 6032, respectively (e.g., in a depinch gesture), the size of displayed virtual chair 5020 increases. In FIG. 6P, first contact 6026 and second contact 6030 have lifted off touch screen 112, and after contacts 6026 and 6030 lift off, virtual chair 5020 maintains its increased size.
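A two-contact resize gesture of this kind is commonly modeled as scaling the object by the ratio of the distance between the contacts at the end of the gesture to the distance at its start. The following is a hedged sketch under that assumption; the function and parameter names are illustrative and do not come from the description:

```python
import math

def depinch_scale(start_a, start_b, end_a, end_b):
    """Scale factor for a two-contact pinch/depinch gesture: the ratio
    of the final to the initial distance between the two contacts.
    Contacts moving apart (a depinch) yield a factor greater than 1;
    contacts moving together yield a factor less than 1."""
    initial = math.dist(start_a, start_b)
    final = math.dist(end_a, end_b)
    return final / initial
```

In a gesture like the one in FIGS. 6N-6O, where contacts 6026 and 6030 move apart, the factor exceeds 1, and the enlarged size persists after lift-off.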

FIGS. 6Q-6U illustrate an input that causes staging user interface 6010 to be replaced by field of view 6036 of one or more cameras of device 100. In FIG. 6Q, contact 6034 with touch screen 112 of device 100 is detected. The characteristic intensity of the contact is above the contact detection intensity threshold IT0 and below the hint intensity threshold ITH, as indicated by intensity level meter 5028. In FIG. 6R, as indicated by intensity level meter 5028, an increase in the characteristic intensity of contact 6034 above the hint intensity threshold ITH has caused staging user interface 6010 to begin to blur behind virtual chair 5020 (as indicated by the dashed lines). In FIG. 6S, as indicated by intensity level meter 5028, an increase in the characteristic intensity of contact 6034 above the light press intensity threshold ITL causes staging user interface 6010 to cease being displayed and initiates a fade-in of field of view 6036 of the cameras (indicated by the dashed lines). In FIG. 6T, as indicated by intensity level meter 5028, an increase in the characteristic intensity of contact 6034 above the deep press intensity threshold ITD causes field of view 6036 of the cameras to be displayed. In addition, as shown in FIG. 6T, the increase in the characteristic intensity of contact 6034 above the deep press intensity threshold ITD causes tactile output generator 167 of device 100 to output a tactile output (as indicated at 6038) indicating that the criteria for replacing display of staging user interface 6010 with display of field of view 6036 of the cameras have been met. In FIG. 6U, contact 6034 has lifted off touch screen 112. In some embodiments, the progression shown in FIGS. 6Q-6T is reversible until the characteristic intensity of contact 6034 reaches the deep press intensity threshold ITD (as shown in FIG. 6T). For example, after the increases shown in FIG. 6R and/or FIG. 6S, decreasing the characteristic intensity of contact 6034 causes the interface state corresponding to the decreased intensity level of contact 6034 to be displayed.

From FIG. 6Q to FIG. 6U, virtual chair 5020 is placed on a detected plane (e.g., in accordance with a determination by device 100 that virtual chair 5020 is configured to be placed in a vertical orientation on a detected horizontal surface, such as floor surface 5038), and virtual chair 5020 is resized (e.g., the scale of virtual chair 5020 relative to physical space 5002, as shown in field of view 6036 of the cameras, is determined based on a defined "real-world" size of virtual chair 5020 and/or the size of a detected object, such as table 5004, in field of view 6036 of the cameras). When virtual chair 5020 transitions from staging user interface 6010 to field of view 6036 of the cameras, the orientation of virtual chair 5020 that resulted from rotation of virtual chair 5020 while staging user interface 6010 was displayed (e.g., as described with reference to FIGS. 6J-6K) is maintained. For example, the orientation of virtual chair 5020 relative to floor surface 5038 is the same as the final orientation of virtual chair 5020 relative to the surface of stage 6014. In some embodiments, when virtual chair 5020 is sized relative to physical space 5002 in field of view 6036, the resizing of virtual object 5020 in the staging user interface is taken into account.
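Drawing an object at its defined "real-world" size can be illustrated with a simple pinhole-camera projection, in which the on-screen size follows from the object's physical size, its distance from the camera, and the camera's focal length. This is a simplified sketch under those stated assumptions, not the device's actual scaling logic:

```python
def projected_size_px(object_size_m, distance_m, focal_length_px):
    """Pinhole-camera model: apparent size, in pixels, of an object of
    known physical size (in meters) placed at a given distance (in
    meters) from a camera with the given focal length (in pixels)."""
    if distance_m <= 0:
        raise ValueError("object must lie in front of the camera")
    return object_size_m * focal_length_px / distance_m
```

For example, a 1 m tall chair placed 2 m from a camera with a 1000-pixel focal length spans about 500 pixels; drawing each object this way keeps the chair at a consistent scale relative to detected objects such as table 5004.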

FIGS. 6V-6Y illustrate an input that causes field of view 6036 of the cameras to be replaced by staging user interface 6010. In FIG. 6V, an input (e.g., a tap input) by contact 6040 is detected at a location corresponding to toggle control 6018 (e.g., a location corresponding to the portion of toggle control 6018 that includes the text "3D"). In FIGS. 6W-6Y, in response to the input by contact 6040, field of view 6036 of the cameras fades out (as indicated by the dashed lines in FIG. 6W), staging user interface 6010 fades in (as indicated by the dashed lines in FIG. 6X), and staging user interface 6010 is fully displayed (as shown in FIG. 6Y). From FIG. 6V to FIG. 6Y, virtual chair 5020 is resized and the position of virtual chair 5020 changes (e.g., returning virtual chair 5020 to a predefined position and size for the staging user interface).

FIGS. 6Z-6AC illustrate an input that causes staging user interface 6010 to be replaced by instant messaging user interface 5008. In FIG. 6Z, an input (e.g., a tap input) by contact 6042 is detected at a location corresponding to back control 6016. In FIGS. 6AA-6AC, in response to the input by contact 6042, staging user interface 6010 fades out (as indicated by the dashed lines in FIG. 6AA), instant messaging user interface 5008 fades in (as indicated by the dashed lines in FIG. 6AB), and instant messaging user interface 5008 is fully displayed (as shown in FIG. 6AC). From FIG. 6Z to FIG. 6AB, the size, orientation, and position of virtual chair 5020 are continuously adjusted on the display (e.g., to return virtual chair 5020 to a predefined position, size, and orientation for instant messaging user interface 5008).

FIGS. 6AD-6AJ illustrate an input that causes instant messaging user interface 5008 to be replaced by field of view 6036 of the cameras (e.g., bypassing display of staging user interface 6010). In FIG. 6AD, contact 6044 is detected at a location corresponding to virtual chair 5020. The input by contact 6044 includes a long touch gesture (during which contact 6044 remains at a location on the touch-sensitive surface corresponding to the representation of virtual object 5020, with less than a threshold amount of movement, for at least a predefined threshold amount of time), followed by an upward swipe gesture (dragging virtual chair 5020 upward). As shown in FIGS. 6AD-6AE, virtual chair 5020 is dragged upward as contact 6044 moves along the path indicated by arrow 6046. In FIG. 6AE, instant messaging user interface 5008 fades out behind virtual chair 5020. As shown in FIGS. 6AE-6AF, virtual chair 5020 continues to be dragged upward as contact 6044 moves along the path indicated by arrow 6048. In FIG. 6AF, field of view 6036 of the cameras fades in behind virtual chair 5020. In FIG. 6AG, field of view 6036 of the cameras is fully displayed in response to the input by contact 6044 that includes the long touch gesture followed by the upward swipe gesture. In FIG. 6AH, contact 6044 lifts off touch screen 112. In FIGS. 6AH-6AJ, in response to the lift-off of contact 6044, virtual chair 5020 is released (e.g., because virtual chair 5020 is no longer constrained or dragged by the contact) and falls onto a plane (e.g., floor surface 5038, in accordance with a determination that the horizontal (floor) surface corresponds to virtual chair 5020). In addition, as shown in FIG. 6AJ, tactile output generator 167 of device 100 outputs a tactile output (as indicated at 6050) indicating that virtual chair 5020 has landed on floor surface 5038.

FIGS. 7A-7P illustrate example user interfaces, in accordance with some embodiments, for displaying an item with a visual indication that the item corresponds to a virtual three-dimensional object. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with display 450 and separate touch-sensitive surface 451, in response to detecting the contacts on touch-sensitive surface 451 while displaying the user interfaces shown in the figures on display 450, along with a focus selector.

FIG. 7A shows an input detected while user interface 400 of an application menu is displayed. The input corresponds to a request to display a first user interface (e.g., internet browser user interface 5060). In FIG. 7A, an input (e.g., a tap input) by contact 7000 is detected at a location corresponding to icon 420 of browser module 147. In response to the input, internet browser user interface 5060 is displayed, as shown in FIG. 7B.

FIG. 7B shows internet browser user interface 5060 (e.g., as described in detail with reference to FIG. 5AE). Internet browser user interface 5060 includes web objects 5066, 5068, 5070, 5072, 5074, and 5076. Web objects 5066, 5068, and 5072 include two-dimensional representations of three-dimensional virtual objects, as indicated by virtual object indicators 5078, 5080, and 5082, respectively. Web objects 5070, 5074, and 5076 include two-dimensional images (but the two-dimensional images of web objects 5070, 5074, and 5076 do not correspond to three-dimensional virtual objects, as indicated by the absence of virtual object indicators).

FIGS. 7C-7D illustrate an input that causes internet browser user interface 5060 to pan (e.g., scroll). In FIG. 7B, contact 7002 with touch screen 112 is detected. In FIGS. 7C-7D, as contact 7002 moves along the path indicated by arrow 7004, web objects 5066, 5068, 5070, 5072, 5074, and 5076 scroll upward, revealing additional web objects 7003 and 7005. In addition, as contact 7002 moves along the path indicated by arrow 7004, the virtual objects in web objects 5066, 5068, and 5072, which include virtual object indicators 5078, 5080, and 5082, respectively, rotate (e.g., tilt upward) in accordance with the direction of the input (vertically upward). For example, virtual lamp 5084 tilts upward from a first orientation in FIG. 7C to a second orientation in FIG. 7D. The two-dimensional images of web objects 5070, 5074, and 5076 do not rotate as the contact scrolls internet browser user interface 5060. In FIG. 7E, contact 7002 has lifted off touch screen 112. In some embodiments, the rotational behavior of the objects depicted in web objects 5066, 5068, and 5072 is used as a visual indication that these web objects have corresponding three-dimensional virtual objects that are visible in an augmented reality environment, while the absence of such rotational behavior for the objects depicted in web objects 5070, 5074, and 5076 is used as a visual indication that these web objects do not have corresponding three-dimensional virtual objects that are visible in an augmented reality environment.

FIGS. 7F-7G illustrate a parallax effect, in which a virtual object rotates on the display in response to a change in the orientation of device 100 relative to the physical world.

FIG. 7F1 shows device 100 held by user 7006 in the user's hand 5006, such that device 100 has a substantially vertical orientation. FIG. 7F2 shows internet browser user interface 5060 as displayed by device 100 while device 100 is in the orientation shown in FIG. 7F1.

FIG. 7G1 shows device 100 held by user 7006 in the user's hand 5006, such that device 100 has a substantially horizontal orientation. FIG. 7G2 shows internet browser user interface 5060 as displayed by device 100 while device 100 is in the orientation shown in FIG. 7G1. From FIG. 7F2 to FIG. 7G2, the orientations of the virtual objects in web objects 5066, 5068, and 5072, which include virtual object indicators 5078, 5080, and 5082, respectively, rotate (e.g., tilt upward) in accordance with the change in the orientation of the device. For example, virtual lamp 5084 tilts upward from a first orientation in FIG. 7F2 to a second orientation in FIG. 7G2, in accordance with the concurrent change in device orientation in physical space. The two-dimensional images of web objects 5070, 5074, and 5076 do not rotate as the orientation of the device changes. In some embodiments, the rotational behavior of the objects depicted in web objects 5066, 5068, and 5072 is used as a visual indication that these web objects have corresponding three-dimensional virtual objects that are visible in an augmented reality environment, while the absence of such rotational behavior for the objects depicted in web objects 5070, 5074, and 5076 is used as a visual indication that these web objects do not have corresponding three-dimensional virtual objects that are visible in an augmented reality environment.
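The parallax cue can be sketched as a mapping from the device's pitch to a tilt applied only to objects that have a virtual object indicator, with two-dimensional-only images receiving no tilt. The gain and clamp values below are illustrative assumptions, not values from the description:

```python
def indicator_tilt(pitch_deg, has_virtual_object, gain=0.5, max_tilt_deg=30.0):
    """Tilt, in degrees, applied to a web object as the device pitches.
    Objects without a corresponding 3D virtual object never tilt, which
    is the visual cue distinguishing them in FIGS. 7F-7G. The tilt is
    clamped so the 2D representation stays legible at extreme pitches."""
    if not has_virtual_object:
        return 0.0
    tilt = gain * pitch_deg
    return max(-max_tilt_deg, min(max_tilt_deg, tilt))
```

Applying this per web object reproduces the behavior described above: virtual lamp 5084 tilts with the device, while the plain images in web objects 5070, 5074, and 5076 stay fixed.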

Figures 7H-7L illustrate an input corresponding to a request to display a second user interface (e.g., instant messaging user interface 5008). In Figure 7H, a contact 7008 is detected at a location corresponding to the lower edge of display 112. In Figures 7H-7I, contact 7008 moves upward along a path indicated by arrow 7010. In Figures 7I-7J, contact 7008 continues to move upward along a path indicated by arrow 7012. In Figures 7H-7J, as contact 7008 moves upward from the lower edge of display 112, internet browser user interface 5060 is reduced in size, as shown in Figure 7I; and in Figure 7J, a multitasking user interface 7012 is displayed (e.g., in response to the upward edge-swipe gesture performed by contact 7008). Multitasking user interface 7012 is configured to allow selection from among various applications that have a retained state (e.g., the retained state of a respective application is the last state of that application when it was the foreground application executing on the device) and various control interfaces (e.g., control center user interface 7014, internet browser user interface 5060, and instant messaging user interface 5008, as shown in Figure 7J). In Figure 7K, contact 7008 has lifted off touch screen 112. In Figure 7L, an input (e.g., a tap input) by a contact 7016 is detected at a location corresponding to instant messaging user interface 5008. In response to the input by contact 7016, instant messaging user interface 5008 is displayed, as shown in Figure 7M.

Figure 7M shows instant messaging user interface 5008 (e.g., as described in further detail with reference to Figure 5B), which includes a message bubble 5018 that includes a virtual object received in a message (e.g., virtual chair 5020) and a virtual object indicator 5022 indicating that virtual chair 5020 is a virtual three-dimensional object (e.g., an object that is viewable in an augmented reality view and/or viewable from different angles). Instant messaging user interface 5008 also includes a message bubble 6005, which includes a sent text message, and a message bubble 7018, which includes a received text message that includes an emoji 7020. Emoji 7020 is a two-dimensional image that does not correspond to a virtual three-dimensional object. Accordingly, emoji 7020 is displayed without a virtual object indicator.

Figure 7N shows a map user interface 7022 that includes a map 7024, a point-of-interest information area 7026 for a first point of interest, and a point-of-interest information area 7032 for a second point of interest. For example, the first point of interest and the second point of interest are search results, corresponding to the search term "Apple" in search input area 7025, located in or near the region shown on map 7024. In first point-of-interest information area 7026, a first point-of-interest object 7028 is displayed with a virtual object indicator 7030 indicating that first point-of-interest object 7028 is a virtual three-dimensional object. In second point-of-interest information area 7032, a second point-of-interest object 7034 is displayed without a virtual object indicator, because second point-of-interest object 7034 does not correspond to a virtual three-dimensional object that is viewable in an augmented reality view.

Figure 7O shows a file management user interface 7036 that includes file management controls 7038, a file management search input area 7040, a file information area 7042 for a first file (e.g., a Portable Document Format (PDF) file), a file information area 7044 for a second file (e.g., a photo file), a file information area 7046 for a third file (e.g., a virtual chair object), and a file information area 7048 for a fourth file (e.g., a PDF file). Third file information area 7046 includes a virtual object indicator 7050, displayed adjacent to a file preview object 7045 of file information area 7046, indicating that the third file corresponds to a virtual three-dimensional object. First file information area 7042, second file information area 7044, and fourth file information area 7048 are displayed without virtual object indicators, because the files corresponding to these file information areas do not have corresponding virtual three-dimensional objects that are viewable in an augmented reality environment.

Figure 7P shows an email user interface 7052 that includes email navigation controls 7054, an email information area 7056, and an email content area 7058 that includes a representation of a first attachment 7060 and a representation of a second attachment 7062. The representation of first attachment 7060 includes a virtual object indicator 7064 indicating that the first attachment is a virtual three-dimensional object that is viewable in an augmented reality environment. The representation of second attachment 7062 is displayed without a virtual object indicator, because the second attachment is not a virtual three-dimensional object that is viewable in an augmented reality environment.

Figures 8A-8E are flow diagrams illustrating a method 800 of displaying a representation of a virtual object while switching from displaying a first user interface area to displaying a second user interface area, in accordance with some embodiments. Method 800 is performed at an electronic device (e.g., device 300 in Figure 3, or portable multifunction device 100 in Figure 1A) that has a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on the side of the device opposite the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display, and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 800 are, optionally, combined, and/or the order of some operations is, optionally, changed.

Method 800 relates to detecting an input, performed by a contact at a touch-sensitive surface of a device, that is directed to a representation of a virtual object displayed in a first user interface area. In response to the input, the device uses criteria to determine whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface area with the field of view of one or more cameras of the device. Using criteria to determine whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface area with the field of view of the one or more cameras enables multiple different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed in response to an input (e.g., by replacing display of at least a portion of the user interface with the field of view of the one or more cameras, or by maintaining display of the first user interface area without replacing at least a portion of it with the representation of the field of view of the one or more cameras) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.

The device displays (802) a representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as virtual chair 5020, virtual lamp 5084, a shoe, furniture, a hand tool, a decoration, a person, an emoji, a game character, virtual furniture, etc.) in a first user interface area (e.g., a two-dimensional graphical user interface or a portion thereof, such as a browsable list of furniture images or an image containing one or more selectable objects) on display 112. For example, the first user interface area is instant messaging user interface 5008 as shown in Figure 5B, or internet browser user interface 5060 as shown in Figure 5AE. In some embodiments, the first user interface area includes a background other than an image of the physical environment surrounding the device (e.g., the background of the first user interface area is a preselected background color/pattern or a background image that is distinct from the output images concurrently captured by the one or more cameras and distinct from the live content in the field of view of the one or more cameras).

While displaying the first representation of the virtual object in the first user interface area on the display, the device detects (804) a first input, performed by a contact, at a location on touch-sensitive surface 112 that corresponds to the representation of the virtual object on the display (e.g., the contact is detected on the first representation of the virtual object on a touch-screen display, or the contact is detected on an affordance that is displayed in the first user interface area concurrently with the first representation of the virtual object and that is configured to trigger display of an AR view of the virtual object when invoked by a contact). For example, the first input is the input by contact 5026 described with reference to Figures 5C-5F, or the input by contact 5086 described with reference to Figures 5AF-5AL.

In response to detecting the first input by the contact (806), in accordance with a determination that the first input by the contact satisfies first (e.g., AR-trigger) criteria (e.g., the AR-trigger criteria are criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, a hard press with an intensity above a predefined intensity threshold, or another type of predefined input gesture that is associated with triggering activation of the cameras, display of an augmented reality (AR) view of the physical environment surrounding the device, placement of a three-dimensional representation of the virtual object inside the augmented reality view of the physical environment, and/or a combination of two or more of the above actions): the device displays a second user interface area on the display, which includes replacing display of at least a portion of the first user interface area with a representation of the field of view of the one or more cameras, and the device continuously displays the representation of the virtual object while switching from displaying the first user interface area to displaying the second user interface area.
For example, the second user interface area on the display is the field of view 5034 of the cameras in panel 5030 as described with reference to Figure 5H, or the field of view 5034 of the cameras as described with reference to Figure 5AH. In Figures 5C-5I, in accordance with a determination that the input by contact 5026 has a characteristic intensity that increases above the deep press intensity threshold ITD, virtual chair object 5020 is continuously displayed while switching from displaying the first user interface area (instant messaging user interface 5008) to displaying the second user interface area, where displaying the second user interface area includes replacing display of a portion of instant messaging user interface 5008 with the field of view 5034 of the cameras in panel 5030. In Figures 5AF-5AH, in accordance with a determination that the input by contact 5086 has a characteristic intensity that increases above the deep press intensity threshold ITD, virtual lamp object 5084 is continuously displayed while switching from displaying the first user interface area (internet browser user interface 5060) to displaying the second user interface area, where displaying the second user interface area includes replacing display of a portion of internet browser user interface 5060 with the field of view 5034 of the cameras.

In some embodiments, continuously displaying the representation of the virtual object includes maintaining display of the representation of the virtual object, or displaying an animated transition in which a first representation of the virtual object changes into a second representation of the virtual object (e.g., a view of the virtual object with a different size, from a different viewing angle, with a different rendering style, or at a different location on the display). In some embodiments, the field of view 5034 of the one or more cameras displays a live image of the physical environment 5002 surrounding the device, and the live image is updated in real time as the position and orientation of the device relative to the physical environment change (e.g., as shown in Figures 5K-5L). In some embodiments, the second user interface area completely replaces the first user interface on the display.

In some embodiments, the second user interface area overlays a portion of the first user interface area (e.g., a portion of the first user interface area is shown along an edge of the display or around a border of the display). In some embodiments, the second user interface area pops up next to the first user interface area. In some embodiments, the background within the first user interface area is replaced with the content of the field of view 5034 of the cameras. In some embodiments, the device displays an animated transition showing the virtual object moving and rotating (e.g., as shown in Figures 5E-5I) from a first orientation, as shown in the first user interface area, to a second orientation (e.g., an orientation that is predefined relative to the current orientation of a portion of the physical environment captured in the field of view of the one or more cameras). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object when the first user interface area is displayed to displaying a three-dimensional representation of the virtual object when the second user interface area is displayed.
In some embodiments, the three-dimensional representation of the virtual object has an anchor plane that is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the first user interface area). When transitioning to the augmented reality view (e.g., the second user interface area), the three-dimensional representation of the virtual object is moved, resized, and reoriented so that the virtual object moves from its original location on the display to a new location on the display (e.g., the center of the augmented reality view, or another predefined location in the augmented reality view), and, during the movement or at the end of the movement, the three-dimensional representation of the virtual object is reoriented so that it is at a predefined location and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or a horizontal floor surface, that can serve as a supporting plane for the three-dimensional representation of the virtual object).

In some embodiments, the first criteria include (808) a criterion that is met when (e.g., in accordance with a determination that) the contact is maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object, with less than a threshold amount of movement, for at least a predefined amount of time (e.g., a long-press time threshold). In some embodiments, in accordance with a determination that the contact meets criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function, other than triggering the AR user interface, while maintaining display of the virtual object. Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface area with the field of view of the cameras, based on whether the contact is maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object with less than a threshold amount of movement for at least a predefined amount of time, enables multiple different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
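The hold-based criterion in (808) can be sketched as a recognizer that tracks elapsed time and total movement of a contact. The threshold values and class name below are illustrative assumptions, not values specified by the patent.

```python
class LongPressRecognizer:
    """Minimal sketch of the (808)-style criterion: the contact must
    stay within a movement threshold for at least a hold duration.
    Threshold values are illustrative."""

    def __init__(self, hold_duration=0.5, movement_threshold=10.0):
        self.hold_duration = hold_duration            # seconds
        self.movement_threshold = movement_threshold  # display points
        self.start_time = None
        self.start_pos = None
        self.cancelled = False

    def touch_down(self, t, x, y):
        self.start_time, self.start_pos, self.cancelled = t, (x, y), False

    def touch_moved(self, t, x, y):
        dx = x - self.start_pos[0]
        dy = y - self.start_pos[1]
        if (dx * dx + dy * dy) ** 0.5 > self.movement_threshold:
            # Too much movement: this is a drag or swipe, not a long press.
            self.cancelled = True

    def is_satisfied(self, t):
        # The criterion is met once the contact has been held long enough
        # without exceeding the movement threshold.
        return (not self.cancelled
                and self.start_time is not None
                and t - self.start_time >= self.hold_duration)
```

A contact that moves more than `movement_threshold` points is rejected even if it is held long enough, which is how the recognizer distinguishes the (808) criterion from a drag gesture.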

In some embodiments, the first criteria include (810) a criterion that is met when (e.g., in accordance with a determination that) a characteristic intensity of the contact increases above a first intensity threshold (e.g., the light press intensity threshold ITL or the deep press intensity threshold ITD). For example, as described with reference to Figures 5C-5F, the criterion is met when the characteristic intensity of contact 5026 increases above the deep press intensity threshold ITD, as indicated by intensity level meter 5028. In some embodiments, in accordance with a determination that the contact meets criteria for recognizing another type of gesture (e.g., a tap), the device performs another predefined function, other than triggering the AR user interface, while maintaining display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., the input has a duration, between touch-down of the contact and lift-off of the contact, that is greater than a tap time threshold). Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface area with the field of view of the cameras, based on whether the characteristic intensity of the contact increases above the first intensity threshold, enables multiple different types of operations to be performed in response to the input.
Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
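The intensity-based criterion in (810) amounts to checking whether any sample of the contact's characteristic intensity exceeds a threshold. The sketch below uses made-up normalized values for ITL and ITD; the patent does not specify numeric thresholds.

```python
# Illustrative, normalized intensity thresholds; the actual values of the
# light press threshold (ITL) and deep press threshold (ITD) are not
# specified in the source.
IT_L = 0.3
IT_D = 0.6

def intensity_criterion_met(intensity_samples, threshold=IT_D):
    """Sketch of the (810)-style criterion: met once the contact's
    characteristic intensity rises above the given threshold."""
    return any(sample > threshold for sample in intensity_samples)
```

Passing `threshold=IT_L` instead of the default models an embodiment in which a light press, rather than a deep press, triggers the AR user interface.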

In some embodiments, the first criteria include (812) a criterion that is met when (e.g., in accordance with a determination that) movement of the contact meets predefined movement criteria (e.g., the contact moves on the touch-sensitive surface beyond a predefined threshold position, such as a position corresponding to a boundary of the first user interface area or a position at a threshold distance from the original position of the contact; the contact moves with a speed greater than a predefined threshold speed; the movement of the contact ends with a press input; etc.). In some embodiments, the representation of the virtual object is dragged by the contact during an initial portion of the movement of the contact, and, as the movement of the contact approaches satisfying the predefined movement criteria, the virtual object stops moving under the contact, to indicate that the first criteria are about to be met; and, if the movement of the contact continues and the continued movement of the contact causes the predefined movement criteria to be met, the transition to displaying the second user interface area and displaying the virtual object within the augmented reality view begins.
In some embodiments, while the virtual object is dragged during the initial portion of the first input, the object size and viewing perspective do not change; and, once the augmented reality view is displayed and the virtual object is dropped at a position in the augmented reality view, the virtual object is displayed with a size and viewing perspective that depend on the physical location represented by the drop position of the virtual object in the augmented reality view. Determining whether to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface area with the field of view of the cameras, based on whether the movement of the contact meets the predefined movement criteria, enables multiple different types of operations to be performed in response to the input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting the first input by the contact, and in accordance with a determination that the first input by the contact has met the first criteria, a device that has one or more tactile output generators 167 outputs (814) a tactile output indicating that the first input meets the first criteria (e.g., tactile output 5032 as described with reference to Figure 5F, or tactile output 5088 as described with reference to Figure 5AH). In some embodiments, the tactile output is generated before the field of view of the one or more cameras appears on the display. For example, the tactile output indicates that the first criteria, which trigger activation of the one or more cameras and subsequently trigger plane detection in the field of view of the one or more cameras, have been met. Because activating the cameras and making the field of view available for display takes time, the tactile output serves as a non-visual signal to the user that the device has detected the required input and will present the augmented reality user interface as soon as it is ready.

Outputting a tactile output indicating that criteria (e.g., criteria for replacing display of at least a portion of the user interface with the field of view of the cameras) have been met provides the user with feedback indicating that the provided input meets the criteria. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing user errors when operating or interacting with the device), which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting at least an initial portion of the first input (e.g., including detecting the contact; or detecting an input by the contact that meets respective predefined criteria without meeting the first criteria; or detecting an input that meets the first criteria), the device analyzes (816) the field of view of the one or more cameras to detect one or more planes (e.g., floor surface 5038, table top 5046, a wall, etc.) in the field of view of the one or more cameras. In some embodiments, the one or more cameras are activated in response to detecting at least the initial portion of the first input, and plane detection is initiated while the cameras are being activated. In some embodiments, display of the field of view of the one or more cameras is delayed after the one or more cameras are activated (e.g., from the time at which the one or more cameras are activated until the time at which at least one plane is detected in the field of view of the cameras). In some embodiments, display of the field of view of the one or more cameras is initiated at the time at which the one or more cameras are activated, and plane detection is completed after the field of view has become visible on the display (e.g., in the second user interface area).
In some embodiments, after a respective plane is detected in the field of view of the one or more cameras, the device determines a size and/or position of the representation of the virtual object based on the position of the respective plane relative to the field of view of the one or more cameras. In some embodiments, as the electronic device moves, the size and/or position of the representation of the virtual object is updated as the position of the field of view of the one or more cameras relative to the respective plane changes (e.g., as described with reference to Figures 5K-5L). Determining the size and/or position of the representation of the virtual object based on the detected position of the respective plane in the field of view of the cameras (e.g., without requiring further user input to size and/or position the virtual object relative to the field of view of the cameras) enhances the operability of the device, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
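The dependence of the virtual object's on-screen size on the detected plane's position can be illustrated with a basic pinhole-camera relation: the rendered size scales inversely with the plane's distance from the camera. The focal length and units below are illustrative assumptions, not part of the patent.

```python
def on_screen_height(object_height_m, plane_distance_m,
                     focal_length_px=1000.0):
    """Pinhole-camera sketch: pixel height of a virtual object anchored
    to a detected plane at plane_distance_m meters from the camera.
    focal_length_px is an assumed camera intrinsic."""
    if plane_distance_m <= 0:
        raise ValueError("plane must be in front of the camera")
    return focal_length_px * object_height_m / plane_distance_m
```

As the device moves toward the plane (smaller `plane_distance_m`), the same physical object height maps to a proportionally larger on-screen representation, which matches the size updates described with reference to Figures 5K-5L.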

In some embodiments, the analysis of the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras is initiated (818) in response to detecting the contact at a location on the touch-sensitive surface that corresponds to the representation of the virtual object on the display (e.g., in response to detecting contact 5026 at a location on touch screen 112 that corresponds to virtual chair 5020). For example, activation of the cameras and detection of planes in the field of view of the cameras begin before the first input meets the first criteria (e.g., before the characteristic intensity of contact 5026 increases above the deep press intensity threshold ITD, as described with reference to Figure 5F) and before the second user interface area is displayed. By beginning plane detection as soon as any interaction with the virtual object is detected, plane detection can be completed before the AR-trigger criteria are met, so the user does not observe a visual delay as the virtual object transitions into the augmented reality view when the first input meets the AR-trigger criteria.
Initiating analysis to detect one or more planes in the field of view of the cameras in response to detecting a contact at the location of the representation of the virtual object (e.g., without requiring further user input to initiate analysis of the field of view of the cameras) improves the efficiency of the device, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting that the first input by the contact satisfies the first criteria (e.g., in response to detecting that the characteristic intensity of contact 5026 increases above the deep press intensity threshold ITD, as described with reference to FIG. 5F), the device initiates (820) analysis of the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. For example, activation of the cameras and detection of planes in the cameras' field of view begin when the first input satisfies the first criteria, and the cameras' field of view is displayed before plane detection is complete. By starting camera activation and plane detection only when the AR trigger criteria are met, the cameras and plane detection are not activated and kept running unnecessarily, which conserves battery power and extends battery life and camera life.

In some embodiments, in response to detecting that an initial portion of the first input satisfies plane-detection trigger criteria without satisfying the first criteria, the device initiates (822) analysis of the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. For example, activation of the cameras and detection of planes in the cameras' field of view begin when the initial portion of the first input satisfies certain criteria (e.g., criteria that are less strict than the AR trigger criteria), and the cameras' field of view is optionally displayed before plane detection is complete. By starting camera activation and plane detection after certain criteria are met, rather than as soon as a contact is detected, the cameras and plane detection are not activated and kept running unnecessarily, which conserves battery power and extends battery life and camera life. By starting camera activation and plane detection before the AR trigger criteria are met, the delay (caused by camera activation and plane detection) in displaying the virtual object transitioning into the augmented reality view when the first input satisfies the AR trigger criteria is reduced.
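The two tiers of criteria above can be sketched as a simple classifier over the contact's characteristic intensity. The threshold values and function name are hypothetical; the specification defines the criteria only qualitatively (the plane-detection trigger is looser than the AR trigger).

```python
# Assumed, illustrative thresholds -- not values from the specification.
PLANE_DETECTION_TRIGGER = 0.3  # looser criteria (step 822)
AR_TRIGGER_ITD = 1.0           # deep-press intensity threshold (step 820)

def input_stage(characteristic_intensity):
    """Classify an input against the two tiers: the looser plane-detection
    trigger starts the cameras and plane analysis early, so that when the
    stricter AR trigger is later met, the AR view appears without delay."""
    if characteristic_intensity >= AR_TRIGGER_ITD:
        return "show_ar_view"
    if characteristic_intensity >= PLANE_DETECTION_TRIGGER:
        return "start_plane_detection"
    return "idle"
```

The point of the tiering is latency hiding: plane detection runs during the interval between the two thresholds being crossed.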

In some embodiments, the device displays (824) the representation of the virtual object in the second user interface region in a manner such that the virtual object (e.g., virtual chair 5020) is oriented at a predefined angle relative to a respective plane detected in the field of view 5034 of the one or more cameras (e.g., such that there is no distance (or a minimal distance) between the undersides of the four legs of virtual chair 5020 and floor surface 5038). For example, the orientation and/or position of the virtual object relative to the respective plane is predefined based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface (e.g., the respective plane corresponds to a horizontal physical surface that can serve as a support surface for the three-dimensional representation of the virtual object in the augmented reality view (e.g., a horizontal tabletop for supporting a vase), or the respective plane is a vertical physical surface that can serve as a support surface for the three-dimensional representation of the virtual object in the augmented reality view (e.g., a vertical wall for hanging a virtual picture frame)). In some embodiments, the orientation and/or position of the virtual object is defined by a respective surface or boundary of the virtual object (e.g., a bottom surface, a bottom boundary point, a side surface, and/or a side boundary point). In some embodiments, an anchor plane corresponding to the respective plane is an attribute in a set of attributes of the virtual object, and the anchor plane is specified according to the nature of the physical object that the virtual object is meant to represent. In some embodiments, the virtual object is placed at a predefined orientation and/or position relative to multiple planes detected in the field of view of the one or more cameras (e.g., multiple respective sides of the virtual object are associated with respective planes detected in the cameras' field of view). In some embodiments, if the predefined orientation and/or position of the virtual object is defined relative to a horizontal bottom plane of the virtual object, the bottom plane of the virtual object is displayed on a floor plane detected in the cameras' field of view (e.g., the horizontal bottom plane of the virtual object is parallel to the floor plane, with zero distance between it and the floor plane). In some embodiments, if the predefined orientation and/or position of the virtual object is defined relative to a vertical back plane of the virtual object, the back surface of the virtual object is placed against a wall plane detected in the field of view of the one or more cameras (e.g., the vertical back plane of the virtual object is parallel to the wall plane, with zero distance between it and the wall plane). In some embodiments, the virtual object is placed at a fixed distance from the respective plane, or at an angle other than a zero or right angle relative to the respective plane. Displaying the representation of the virtual object relative to a plane detected in the cameras' field of view (e.g., without requiring further user input to display the virtual object relative to the plane in the cameras' field of view) enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
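The anchor-plane attribute described above can be sketched as a small placement routine: a "bottom" anchor snaps onto a detected floor plane with zero gap, a "back" anchor goes flush against a detected wall plane. The coordinate encoding and names here are illustrative assumptions, not the specification's data model.

```python
def snap_to_plane(anchor, floor_y=None, wall_z=None):
    """Place a virtual object so that its anchor plane coincides with the
    matching detected plane: bottom plane parallel to and resting on the
    floor, back plane parallel to and flush with the wall (zero distance)."""
    if anchor == "bottom":
        if floor_y is None:
            raise ValueError("no floor plane detected")
        return {"y": floor_y, "gap": 0.0}  # rests on the floor plane
    if anchor == "back":
        if wall_z is None:
            raise ValueError("no wall plane detected")
        return {"z": wall_z, "gap": 0.0}   # flush against the wall plane
    raise ValueError(f"unknown anchor plane: {anchor}")
```

A chair-like object would carry a "bottom" anchor, a picture-frame-like object a "back" anchor, matching the tabletop-vs-wall examples in the text.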

In some embodiments, in response to detecting a respective plane in the field of view of the one or more cameras, a device having one or more tactile output generators 167 outputs (826) a tactile output indicating that the respective plane was detected in the field of view of the one or more cameras. In some embodiments, a respective tactile output is generated for each plane detected in the cameras' field of view (e.g., floor surface 5038 and/or tabletop 5046). In some embodiments, the tactile output is generated upon completion of plane detection. In some embodiments, the tactile output is accompanied by a visual indication of the detected plane shown in the field of view in the second user interface portion (e.g., a momentary highlighting of the plane that has been detected in the field of view). Outputting a tactile output indicating that a plane was detected in the cameras' field of view provides the user with feedback indicating that the plane has been detected. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while switching from displaying the first user interface region to displaying the second user interface region, the device displays (828) an animation of the representation of the virtual object transitioning (e.g., moving, rotating, being resized, and/or being re-rendered in a different style, etc.) to a predefined position relative to the respective plane in the second user interface region (e.g., as shown in FIGS. 5F-5I), and, in conjunction with displaying the representation of the virtual object at a predefined angle relative to the respective plane (e.g., at the predefined orientation and/or position relative to the respective plane, and at the final size, rotation angle, and appearance with which it is to be shown in the augmented reality view), a device having one or more tactile output generators 167 outputs a tactile output indicating that the virtual object is displayed in the second user interface region at the predefined angle relative to the respective plane. For example, as shown in FIG. 5I, the device outputs tactile output 5036 in conjunction with displaying virtual chair 5020 at the predefined angle relative to floor surface 5038. In some embodiments, the generated tactile output is configured with characteristics (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio wave, etc.) that reflect the following attributes of the virtual object, or of the physical object that the virtual object represents: weight (e.g., heavy vs. light), material (e.g., metal, cotton, wood, marble, liquid, rubber, glass), size (e.g., large vs. small), shape (e.g., thin vs. thick, long vs. short, round vs. pointed, etc.), elasticity (e.g., elastic vs. rigid), character (e.g., playful vs. solemn, gentle vs. strong, etc.), and other attributes. For example, the tactile output uses one or more of the tactile output patterns shown in FIGS. 4F-4K. In some embodiments, a preset profile that includes one or more changes of one or more characteristics over time corresponds to the virtual object (e.g., an emoji). For example, a "bounce" tactile output profile is provided for a "smiley" emoji virtual object. Outputting a tactile output indicating placement of the representation of the virtual object relative to the respective plane provides the user with feedback indicating that the representation of the virtual object has been automatically placed relative to the respective plane. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments (830), the tactile output has a tactile output profile that corresponds to a characteristic of the virtual object (e.g., a simulated physical property such as size, density, mass, and/or material). In some embodiments, the tactile output profile has characteristics (e.g., frequency, number of cycles, modulation, amplitude, accompanying audio wave, etc.) that vary based on one or more characteristics of the virtual object (e.g., weight, material, size, shape, and/or elasticity). For example, the tactile output uses one or more of the tactile output patterns shown in FIGS. 4F-4K. In some embodiments, as the size, weight, and/or mass of the virtual object increase, the amplitude and/or duration of the tactile output also increase. In some embodiments, the tactile output pattern is selected based on the virtual material of which the virtual object is made. Outputting a tactile output with a profile that corresponds to a characteristic of the virtual object provides the user with feedback conveying information about that characteristic of the virtual object. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user provide proper inputs; by reducing unnecessary additional inputs for placing the virtual object; and by allowing the user to perceive characteristics of the virtual object without cluttering the user interface with displayed information about those characteristics), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
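One way the "amplitude and duration increase with mass" behavior could be realized is a monotone scaling law. The logarithmic form, the base values, and the clamp below are assumptions made for illustration; the specification only states that the characteristics vary with the object's properties.

```python
import math

def haptic_profile(mass_kg, base_amplitude=0.2, base_duration_ms=10.0):
    """Scale tactile-output amplitude and duration with the virtual
    object's simulated mass, so heavier virtual objects produce stronger,
    longer placement haptics (monotone law; exact shape is an assumption)."""
    factor = 1.0 + math.log1p(mass_kg)  # grows with mass, but gently
    return {
        "amplitude": min(1.0, base_amplitude * factor),  # clamp to full scale
        "duration_ms": base_duration_ms * factor,
    }

light = haptic_profile(0.5)   # e.g., a small virtual lamp
heavy = haptic_profile(20.0)  # e.g., a virtual chair
```

A real implementation would also pick a waveform pattern per virtual material; only the amplitude/duration scaling is sketched here.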

In some embodiments, while the representation of the virtual object is displayed in the second user interface region, the device detects (832) movement of the device (e.g., lateral movement and/or rotation of the device) that adjusts the field of view 5034 of the one or more cameras (e.g., as shown in FIGS. 5K-5L), and, in response to detecting the movement of the device, while the field of view of the one or more cameras is adjusted, the device adjusts the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region in accordance with a fixed spatial relationship (e.g., orientation and/or position) between the virtual object and the respective plane (e.g., floor surface 5038) in the field of view of the one or more cameras (e.g., the virtual object is displayed on the display at an orientation and position such that a fixed angle between the representation of the virtual object and the plane is maintained (e.g., the virtual object appears to remain at a fixed location on the plane, or to roll along the plane in the field of view)). For example, in FIGS. 5K-5L, as device 100 moves, virtual chair 5020 in the second user interface region that includes the cameras' field of view 5034 maintains a fixed orientation and position relative to floor surface 5038. In some embodiments, the virtual object appears to remain stationary and unchanged relative to the surrounding physical environment 5002; that is, as the field of view of the one or more cameras changes with movement of the device relative to the surrounding physical environment, the size, position, and/or orientation of the representation of the virtual object on the display change with changes in the device's position and/or orientation. Adjusting the representation of the virtual object in accordance with the fixed relationship between the virtual object and the respective plane (e.g., without requiring further user input to maintain the position of the virtual object relative to the respective plane) enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
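The world-anchored behavior above amounts to recomputing the object's screen coordinates from the camera pose each frame while the object's world position stays fixed. The one-dimensional ground-plane sketch below, with its assumed focal length and coordinates, is illustrative only.

```python
def screen_x(world_x, world_z, cam_x, cam_z, focal_px):
    """Horizontal screen coordinate of a world-anchored object under a
    pinhole camera: the object keeps a fixed world position relative to
    the detected plane, and only the camera's pose changes per frame."""
    depth = world_z - cam_z
    if depth <= 0:
        return None  # object is behind the camera, not rendered
    return focal_px * (world_x - cam_x) / depth

# The object stays at world x=0, z=3; only the camera translates.
before = screen_x(0.0, 3.0, cam_x=0.0, cam_z=0.0, focal_px=1000.0)
after = screen_x(0.0, 3.0, cam_x=0.5, cam_z=0.0, focal_px=1000.0)
```

Moving the camera to the right shifts the object's projection to the left, which is exactly the "appears stationary relative to the physical environment" effect.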

In some embodiments (e.g., at a time corresponding to replacing display of at least a portion of the first user interface region with the representation of the field of view of the one or more cameras), the device displays (834) an animation (e.g., movement, rotation about one or more axes, and/or scaling) of the continuously displayed representation of the virtual object (e.g., virtual chair 5020) while switching from displaying the first user interface region to displaying the second user interface region (e.g., as shown in FIGS. 5F-5I). For example, the animation includes a transition from displaying a two-dimensional representation of the virtual object while the first user interface region is displayed to displaying a three-dimensional representation of the virtual object while the second user interface region is displayed. In some embodiments, the three-dimensional representation of the virtual object has an orientation that is predefined relative to the current orientation of the portion of the physical environment captured in the field of view of the one or more cameras. In some embodiments, when transitioning to the augmented reality view, the representation of the virtual object is moved, resized, and reoriented so that the virtual object travels from its initial position on the display to a new position on the display (e.g., the center of the augmented reality view, or another predefined position in the augmented reality view), and, during the movement or at the end of the movement, the virtual object is reoriented so that it is at a fixed angle relative to a plane detected in the cameras' field of view (e.g., a physical surface that can support the representation of the virtual object, such as a vertical wall or a horizontal floor surface). In some embodiments, as the animated transition occurs, the lighting of the virtual object and/or the shadow cast by the virtual object are adjusted (e.g., to match ambient lighting detected in the field of view of the one or more cameras). Displaying an animation of the representation of the virtual object while switching from displaying the first user interface region to displaying the second user interface region provides the user with feedback indicating that the first input satisfies the first criteria. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing user errors when operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while the second user interface region is displayed on the display, the device detects (836) a second input by a second contact (e.g., contact 5040), where the second input includes (optionally, a press or touch input by the second contact that selects the representation of the virtual object, and) movement of the second contact along a first path on the display (e.g., as shown in FIGS. 5N-5P), and, in response to detecting the second input by the second contact, the device moves the representation of the virtual object (e.g., virtual chair 5020) in the second user interface region along a second path that corresponds to the first path (e.g., that is the same as the first path, or is constrained by the first path). In some embodiments, the second contact is distinct from the first contact and is detected after liftoff of the first contact (e.g., as shown by contact 5040 in FIGS. 5N-5P, which is detected after liftoff of contact 5026 in FIGS. 5C-5F). In some embodiments, the second contact is the same as the first contact, maintained continuously on the touch-sensitive surface (e.g., as shown by the input by contact 5086, which satisfies the AR trigger criteria and then moves on touch screen 112 to move virtual lamp 5084). In some embodiments, a swipe input on the virtual object rotates the virtual object, and the movement of the virtual object is optionally constrained by a plane in the cameras' field of view (e.g., a swipe input rotates the representation of the chair on the floor plane in the cameras' field of view). Moving the representation of the virtual object in response to detecting the input provides the user with feedback indicating that the position of the displayed virtual object can be moved in response to user input. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide proper inputs and reducing user errors when operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, as the representation of the virtual object moves along the second path, based on the movement of the contact and the respective plane that corresponds to the virtual object, the device adjusts (838) the size of the representation of the virtual object (e.g., based on the virtual distance from the representation of the virtual object to the user, so as to maintain an accurate perspective view of the virtual object in the field of view). For example, in FIGS. 5N-5P, the size of virtual chair 5020 decreases as the virtual chair moves deeper into the cameras' field of view 5034, away from device 100, and toward table 5004. Adjusting the size of the representation of the virtual object as it moves along the second path, based on the movement of the contact and the plane that corresponds to the virtual object (e.g., without requiring further user input to resize the representation of the virtual object to keep it at a realistic size relative to the environment in the cameras' field of view), enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while the representation of the virtual object moves along the second path, the device maintains (840) a first size of the representation of the virtual object (e.g., virtual lamp 5084) (e.g., as shown in FIGS. 5AI-5AL); the device detects termination of the second input by the second contact (e.g., including detecting liftoff of the second contact, as shown in FIGS. 5AL-5AM); and, in response to detecting termination of the second input by the second contact, the device places the representation of the virtual object at a drop position in the second user interface region (e.g., on tabletop 5046) and displays the representation of the virtual object at the drop position in the second user interface region with a second size that is different from the first size (e.g., the size of virtual lamp 5084 after termination of the input by contact 5086 in FIG. 5AM is different from the size of virtual lamp 5084 before termination of the input by contact 5086 in FIG. 5AL). For example, while the object is dragged by the contact, its size and viewing perspective do not change; and when the object is dropped at its final position in the augmented reality view, the object is displayed with a size and viewing perspective determined based on the physical location in the physical environment that corresponds to the drop position of the virtual object shown in the cameras' field of view, such that, in accordance with a determination that the drop position is a first position in the cameras' field of view, the object has the second size, and, in accordance with a determination that the drop position is a second position in the cameras' field of view, the object has a third size that is different from the second size, where the second size and the third size are selected based on the distance between the drop position and the one or more cameras. Displaying the representation of the virtual object with a changed size in response to detecting termination of the second input that moves the virtual object (e.g., without requiring further user input to resize the virtual object to keep it at a realistic size relative to the environment in the cameras' field of view) enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
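The keep-size-while-dragging, resize-on-drop behavior above can be sketched as follows. The inverse-distance law, the base size, and the distances are assumptions for illustration; the specification only requires that the drop size depend on the distance between the drop position and the cameras.

```python
def displayed_size(dragging, drag_size_px, base_size_px, drop_distance_m):
    """During the drag the first size is kept unchanged; on liftoff the
    representation is re-rendered at a size chosen from the drop
    position's distance to the cameras (nearer drops appear larger)."""
    if dragging:
        return drag_size_px
    return base_size_px / drop_distance_m  # assumed inverse-distance law

while_dragging = displayed_size(True, 120.0, 300.0, 2.0)
near_drop = displayed_size(False, 120.0, 300.0, 1.5)
far_drop = displayed_size(False, 120.0, 300.0, 3.0)
```

This captures why the second and third sizes in the text differ: they correspond to different drop distances from the cameras.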

In some embodiments, in accordance with a determination that the movement of the second contact along the first path on the display satisfies second criteria (e.g., at the end of the first path, the contact is within a threshold distance of, or beyond, an edge of the display (e.g., a bottom edge, top edge, and/or side edge) or an edge of the second user interface region), the device (842): ceases to display the second user interface region that includes the representation of the field of view of the one or more cameras, and redisplays the (complete) first user interface region with the representation of the virtual object (e.g., if a portion of the first user interface region was previously displayed concurrently with the second user interface region, the device displays the complete first user interface region after the second user interface region is no longer displayed). For example, in response to movement of contact 5054 dragging virtual chair 5020 to the edge of touch screen 112, as shown in FIGS. 5V-5X, display of the cameras' field of view 5034 ceases and the complete instant messaging user interface 5008 is redisplayed, as shown in FIGS. 5Y-5AD. In some embodiments, as the contact approaches the edge of the display or the edge of the second user interface region, the second user interface region fades out (e.g., as shown in FIGS. 5X-5Y) and/or the first user interface region (the portion of it that was not displayed or was obscured) fades in (e.g., as shown in FIGS. 5Z-5AA). In some embodiments, the gesture for transitioning from the non-AR view (e.g., the first user interface region) to the AR view (e.g., the second user interface region) is the same as the gesture for transitioning from the AR view to the non-AR view. For example, a drag gesture on the virtual object that goes beyond a threshold position in the currently displayed user interface (e.g., within a threshold distance of the boundary of the currently displayed user interface region, or beyond the boundary of the currently displayed user interface region) causes a transition from the currently displayed user interface region to the corresponding user interface region (e.g., a transition from displaying the first user interface region to displaying the second user interface region, or, alternatively, a transition from displaying the second user interface region to displaying the first user interface region). In some embodiments, a visual indication is shown before the first/second criteria are met (e.g., fading out the currently displayed user interface region and fading in the corresponding user interface), and, before termination of the input (e.g., liftoff of the contact) is detected, the visual indication is reversible if the input continues and the first/second criteria are not met. Redisplaying the first user interface in response to detecting an input that satisfies the input criteria provides an additional control option without cluttering the second user interface with additional displayed controls (e.g., a control for displaying the first user interface from the second user interface). Providing an additional control option without cluttering the second user interface with additional displayed controls enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
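The edge-proximity test in the second criteria can be sketched as a simple geometric predicate. The threshold value and the screen dimensions below are assumptions; the specification leaves the threshold distance unspecified.

```python
def satisfies_exit_criteria(x, y, width, height, threshold_px=20.0):
    """True when the contact's position at the end of the drag path is
    within a threshold distance of (or beyond) any display edge, which
    triggers the return from the AR view to the first user interface."""
    return (
        x <= threshold_px
        or y <= threshold_px
        or x >= width - threshold_px
        or y >= height - threshold_px
    )
```

The same predicate, evaluated continuously during the drag, could also drive the reversible fade-out/fade-in indication described above.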

In some embodiments, at a time corresponding to the redisplay of the first user interface area, the device displays (844) an animated transition (e.g., movement, rotation about one or more axes, and/or scaling) from displaying the representation of the virtual object in the second user interface area to displaying the representation of the virtual object in the first user interface area (e.g., as shown by the animation of virtual chair 5020 in FIGS. 5AB-5AD). Displaying an animated transition from displaying the representation of the virtual object in the second user interface to displaying the representation of the virtual object in the first user interface (e.g., without requiring further user input to reposition the virtual object in the first user interface) enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, as the second contact moves along the first path, the device changes (846) the visual appearance (e.g., highlights, marks, outlines, and/or otherwise visually alters the appearance) of one or more respective planes identified in the field of view of the one or more cameras that correspond to the current location of the contact. For example, when contact 5042 drags virtual chair 5020 along the path indicated by arrows 5042 and 5044 in FIGS. 5O-5P, floor surface 5038 is highlighted (e.g., compared to FIG. 5M, before contact 5042 moves). In some embodiments, in accordance with a determination that the contact is at a location corresponding to a first plane detected in the field of view of the cameras, the first plane is highlighted. In accordance with a determination that the contact has moved to a location corresponding to a second plane detected in the field of view of the cameras (e.g., as shown in FIGS. 5S-5U), highlighting of the first plane (e.g., floor surface 5038) ceases and the second plane (e.g., tabletop 5046) is highlighted. In some embodiments, multiple planes are highlighted simultaneously. In some embodiments, a first plane of the plurality of visually altered planes is visually altered in a manner different from the manner in which the other planes are visually altered, to indicate that the contact is at a location corresponding to the first plane.
Changing the visual appearance of one or more respective planes identified in the field of view of the cameras provides the user with feedback that the plane has been identified (e.g., that the virtual object can be positioned relative to the plane). Providing improved visual feedback enhances the operability of the device (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
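The contact-to-plane highlighting can be sketched as a simple hit test over the detected planes. This is a hypothetical illustration; the plane records, the `hit_test` callbacks, and the reference-numeral names are invented for the example, and real plane detection would operate on 3D anchors rather than 2D screen regions.

```python
# Hypothetical sketch: highlight only the detected plane under the
# contact's current position. Plane records and names are illustrative.

def highlighted_plane(contact_pos, detected_planes):
    """Return the name of the first detected plane whose region contains
    the contact position, or None if the contact is over no plane.
    Highlighting the returned plane (and un-highlighting the previous
    one) gives the feedback described above."""
    for plane in detected_planes:
        if plane["hit_test"](contact_pos):
            return plane["name"]
    return None

# Illustrative screen-space stand-ins for floor surface 5038 and
# tabletop 5046 from the figures.
example_planes = [
    {"name": "floor_surface_5038", "hit_test": lambda p: p[1] > 300},
    {"name": "tabletop_5046", "hit_test": lambda p: 100 < p[1] <= 300},
]
```

Calling `highlighted_plane` on each drag event reproduces the behavior where the highlight moves from the floor to the tabletop as the contact crosses between the corresponding regions.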

In some embodiments, in response to detecting the first input by the contact, in accordance with a determination that the first input by the contact satisfies third (e.g., staging-user-interface-display) criteria (e.g., criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, or a hard press with an intensity above a predefined intensity threshold), the device displays (848) a third user interface area on the display, including replacing display of at least a portion of the first user interface area (e.g., including a 3D model of the virtual object that replaces a 2D image of the virtual object). In some embodiments, while displaying a staging user interface (e.g., staging user interface 6010 as described with reference to FIG. 6I), the device updates the appearance of the representation of the virtual object based on detected inputs corresponding to the staging user interface (e.g., as described in more detail below with reference to method 900). In some embodiments, when another input is detected while the virtual object is displayed in the staging user interface, and that input satisfies criteria for transitioning to display of the second user interface area, the device replaces display of the staging user interface with the second user interface area while continuously displaying the virtual object.
More details are described with respect to method 900. Displaying the third user interface in accordance with a determination that the first input satisfies the third criteria provides additional control options without cluttering the first user interface with additional displayed controls (e.g., controls for displaying the third user interface from the first user interface). Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in accordance with a determination that the first input by the contact (e.g., a swipe input corresponding to scrolling the first user interface area, or a tap input corresponding to a request to display a web page or email corresponding to content in the first user interface area) does not satisfy the first (e.g., AR-trigger) criteria, the device maintains (850) display of the first user interface area without replacing display of at least a portion of the first user interface area with a representation of the field of view of the one or more cameras (e.g., as described with reference to FIGS. 6B-6C). Using the first criteria to determine whether to maintain display of the first user interface area, or to continuously display the representation of the virtual object while replacing display of at least a portion of the first user interface area with the field of view of the one or more cameras, enables multiple different types of operations to be performed in response to an input.
Enabling multiple different types of operations to be performed in response to an input (e.g., by replacing display of at least a portion of the user interface with the field of view of the one or more cameras, or by maintaining display of the first user interface area without replacing display of at least a portion of the first user interface area with a representation of the field of view of the one or more cameras) increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
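The criteria-based routing in this paragraph amounts to a dispatch on the recognized gesture type. The following sketch is illustrative only: the gesture labels and the set of AR-trigger gestures are assumptions drawn from the examples in the surrounding text, not an exhaustive or authoritative list.

```python
# Hypothetical sketch of routing the first input: AR-trigger gestures
# replace part of the first UI with the camera view while the object
# remains displayed; other inputs (scroll, tap on a link) do not.
# Gesture names are illustrative.

AR_TRIGGER_GESTURES = {"touch_hold", "deep_press", "drag_on_object"}

def respond_to_first_input(gesture: str) -> str:
    """Return the action taken for a recognized gesture on the virtual
    object's representation in the first user interface area."""
    if gesture in AR_TRIGGER_GESTURES:
        return "replace_portion_with_camera_view"
    return "maintain_first_user_interface"
```

A scroll swipe therefore leaves the first user interface intact, while a touch-hold or deep press on the object's representation begins the transition to the camera view.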

It should be understood that the particular order in which the operations in FIGS. 8A-8E have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art will recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 900 and 1000) are likewise applicable in an analogous manner to method 800 described above with respect to FIGS. 8A-8E. For example, the contacts, inputs, virtual objects, user interface areas, intensity thresholds, tactile outputs, fields of view, movements, and/or animations described above with reference to method 800 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface areas, intensity thresholds, tactile outputs, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 900, 1000, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.

FIGS. 9A-9D are flowcharts illustrating a method 900 of displaying a first representation of a virtual object in a first user interface area, displaying a second representation of the virtual object in a second user interface area, and displaying a third representation of the virtual object with a representation of the field of view of one or more cameras, in accordance with some embodiments. Method 900 is performed at an electronic device (e.g., device 300 in FIG. 3, or portable multifunction device 100 in FIG. 1A) having a display, a touch-sensitive surface, and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 900 are optionally combined, and/or the order of some operations is optionally changed.

As described below, method 900 relates to detecting an input by a contact at the touch-sensitive surface of the device for displaying a representation of a virtual object in a first user interface (e.g., a two-dimensional graphical user interface). In response to the first input, the device uses criteria to determine whether to display a second representation of the virtual object in a second user interface (e.g., a staging user interface in which the three-dimensional representation of the virtual object can be moved, resized, and/or reoriented). While the second representation of the virtual object is displayed in the second user interface, in response to a second input, the device either changes a display property of the second representation of the virtual object based on the second input, or displays a third representation of the virtual object in a third user interface that includes the field of view of one or more cameras of the device. Enabling multiple different types of operations to be performed in response to an input (e.g., by changing a display property of the virtual object or by displaying the virtual object in the third user interface) increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

The device displays (902) a first representation of a virtual object (e.g., a graphical representation of a three-dimensional object, such as virtual chair 5020, virtual lamp 5084, shoes, furniture, hand tools, decorations, people, emoji, game characters, virtual furniture, etc.) in a first user interface area (e.g., a two-dimensional graphical user interface or a portion thereof (e.g., a browsable list of furniture images, an image containing one or more selectable objects, etc.)) on display 112. For example, the first user interface area is instant messaging user interface 5008 shown in FIG. 6A. In some embodiments, the first user interface area includes a background other than an image of the physical environment surrounding the device (e.g., the background of the first user interface area is a preselected background color/pattern or a background image that is different from the output images being simultaneously captured by the one or more cameras and different from the live content in the field of view of the one or more cameras).

While the first representation of the virtual object is displayed in the first user interface area on the display, the device detects (904) a first input by a first contact at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object on the display (e.g., the first contact is detected on the first representation of the virtual object on a touch-screen display, or on an affordance (e.g., toggle control 6018) that is displayed in the first user interface area concurrently with the first representation of the virtual object and that is configured, when invoked by the first contact, to trigger display of an AR view (e.g., field of view 6036 of the cameras) and/or of staging user interface 6010 that includes a representation of the virtual object (e.g., virtual chair 5020)). For example, the first input is an input by contact 6006 as described with reference to FIGS. 6E-6I.

In response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact satisfies first (e.g., staging-trigger) criteria (e.g., criteria configured to recognize a swipe input, a touch-hold input, a press input, a tap input, a touch-down of the contact, an initial movement of the contact, or another type of predefined input gesture that is associated with triggering activation of the cameras and/or triggering detection of planes in the field of view of the cameras), the device displays (906) a second representation of the virtual object in a second user interface area that is different from the first user interface area (e.g., the second user interface area is staging user interface 6010, which does not include the field of view of the cameras and includes a simulated three-dimensional space in which the three-dimensional representation of the virtual object can be manipulated (e.g., rotated or moved) in response to user inputs). For example, in FIGS. 6E-6H, in accordance with a determination that the input by contact 6006 has a characteristic intensity that increases above the deep press intensity threshold ITD, virtual chair object 5020 is displayed in staging user interface 6010 (e.g., as shown in FIG. 6I), which is different from instant messaging user interface 5008 (e.g., as shown in FIG. 6E).

In some embodiments, in response to detecting the first input, and in accordance with a determination that the first input satisfies the staging-trigger criteria, the device displays a first animated transition showing the three-dimensional representation of the virtual object being moved and reoriented from a first orientation as shown in the first user interface area (e.g., the first orientation of virtual chair 5020 as shown in instant messaging user interface 5008 in FIG. 6E) to a second orientation (e.g., the second orientation of virtual chair 5020 determined based on stage plane 6014, as shown in FIG. 6I), where the second orientation is determined based on a virtual plane on the display that is oriented independently of the current orientation of the device relative to the physical environment surrounding the device. For example, the three-dimensional representation of the virtual object has a predefined orientation and/or distance relative to the plane (e.g., based on the shape and orientation of the virtual object as shown in the two-dimensional graphical user interface), and, when transitioning to the staging view (e.g., staging user interface 6010), the three-dimensional representation is moved, resized, and reoriented so that the virtual object goes from its original location on the display to a new location on the display (e.g., the center of virtual stage 6014), and, during the movement or at the end of the movement, the three-dimensional representation is reoriented so that the virtual object is at a fixed angle relative to predefined staging virtual plane 6014, which is defined independently of the physical environment surrounding the device.

While the second representation of the virtual object is displayed in the second user interface area, the device detects (908) a second input (e.g., the input by contact 6034 shown in FIGS. 6Q-6T). In some embodiments, detecting the second input includes: detecting one or more second contacts at a location on the touch screen that corresponds to the second representation of the virtual object; detecting a second contact on an affordance that is configured, when invoked by the second contact, to trigger display of an augmented reality view of the physical environment surrounding the device; detecting movement of the second contact; and/or detecting liftoff of the second contact. In some embodiments, the second input is a continuation of the first input by the same contact (e.g., the second input is the input by contact 6034 shown in FIGS. 6Q-6T following the first input by contact 6006 shown in FIGS. 6E-6I (e.g., without the contact lifting off)), or is a separate input by an entirely different contact (e.g., the second input is the input by contact 6034 shown in FIGS. 6Q-6T (e.g., after the contact lifts off) following the first input by contact 6006 shown in FIGS. 6E-6I), or is a continuation of an input by another contact (e.g., the second input is the input by contact 6006 shown in FIGS. 6J-6L following the first input by contact 6006 shown in FIGS. 6E-6I).
For example, the second input may be a continuation of a swipe input, a second tap input, a second press input, a press input following the first input, a second touch-hold input, a sustained touch continuing from the first input, and so on.

In response to detecting the second input (910): in accordance with a determination that the second input corresponds to a request to manipulate the virtual object in the second user interface area (e.g., without transitioning to the augmented reality view), the device changes a display property of the second representation of the virtual object within the second user interface area based on the second input; and in accordance with a determination that the second input corresponds to a request to display the virtual object in an augmented reality environment, the device displays a third representation of the virtual object with a representation of the field of view of the one or more cameras (e.g., the device displays a third user interface that includes field of view 6036 of the one or more cameras, and places the three-dimensional representation of the virtual object (e.g., virtual chair 5020) on a virtual plane (e.g., floor surface 5038) detected within the field of view of the cameras that corresponds to a physical plane (e.g., the floor) in physical environment 5002 surrounding the device).
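The two branches of operation (910) can be sketched as a second dispatch, this time on inputs received while the staging user interface is shown. The gesture labels below are illustrative stand-ins drawn from the examples given in the following paragraphs, not terms defined by the patent.

```python
# Hypothetical sketch of routing the second input in the staging UI:
# manipulation gestures change display properties of the object's second
# representation; AR-request gestures display the third representation
# in the camera's field of view. Gesture names are illustrative.

MANIPULATION_GESTURES = {"horizontal_swipe", "vertical_swipe", "pinch", "depinch"}
AR_REQUEST_GESTURES = {"deep_press", "tap_ar_affordance", "press_then_drag"}

def respond_to_second_input(gesture: str) -> str:
    if gesture in MANIPULATION_GESTURES:
        return "change_display_property_in_staging_ui"
    if gesture in AR_REQUEST_GESTURES:
        return "display_object_in_camera_field_of_view"
    return "no_op"
```

Under this routing, a pinch rotates or resizes the object in place, while a deep press hands the object off to the live camera view and drops it onto a detected plane.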

In some embodiments, the second input corresponding to the request to manipulate the virtual object in the second user interface area is a pinch or swipe by a second contact at a location on the touch-sensitive surface that corresponds to the second representation of the virtual object in the second user interface area. For example, the second input is the input by contact 6006 shown in FIGS. 6J-6L or the input by contacts 6026 and 6030 shown in FIGS. 6N-6O.

In some embodiments, the second input corresponding to the request to display the virtual object in the augmented reality environment is a tap input, a press input, or a touch-hold or press input followed by a drag input, at or from a location on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface area. For example, the second input is a deep press input by contact 6034 as shown in FIGS. 6Q-6T.

In some embodiments, changing the display property of the second representation of the virtual object within the second user interface area based on the second input includes rotating the representation about one or more axes (e.g., by a vertical and/or horizontal swipe), resizing it (e.g., pinching to resize), tilting it about one or more axes (e.g., by tilting the device), changing the viewing perspective (e.g., by moving the device horizontally, which, in some embodiments, is used to analyze the field of view of the one or more cameras to detect one or more field-of-view planes), and/or changing the color of the representation of the virtual object. For example, changing the display property of the second representation of the virtual object includes rotating virtual chair 5020 in response to a horizontal swipe gesture by contact 6006 as shown in FIGS. 6J-6K; rotating virtual chair 5020 in response to a diagonal swipe gesture by contact 6006 as shown in FIGS. 6K-6L; or increasing the size of virtual chair 5020 in response to a depinch gesture by contacts 6026 and 6030 as shown in FIGS. 6N-6O. In some embodiments, the amount of change in the display property of the second representation of the virtual object is correlated with the magnitude of a property of the second input (e.g., the distance or speed of movement of the contact, the intensity of the contact, the duration of the contact, etc.).
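The gesture-to-property mapping, including the correlation between input magnitude and amount of change, can be sketched as follows. The mapping, the property names, and the `DEGREES_PER_POINT` gain are all illustrative assumptions; real embodiments may use any monotonic mapping between input magnitude and property change.

```python
# Hypothetical sketch: each staging-UI gesture changes one display
# property, by an amount tied to the gesture's magnitude (swipe distance
# in points, or pinch/depinch scale factor). Values are illustrative.

DEGREES_PER_POINT = 0.5  # assumed rotation gain per point of swipe

def apply_staging_gesture(obj: dict, gesture: str, magnitude: float) -> dict:
    """Mutate and return the staged object's display properties."""
    if gesture == "horizontal_swipe":
        obj["yaw_degrees"] = (obj["yaw_degrees"] + DEGREES_PER_POINT * magnitude) % 360
    elif gesture == "vertical_swipe":
        obj["tilt_degrees"] = (obj["tilt_degrees"] + DEGREES_PER_POINT * magnitude) % 360
    elif gesture in ("pinch", "depinch"):
        obj["scale"] *= magnitude  # depinch: magnitude > 1; pinch: < 1
    return obj
```

A longer swipe thus rotates the object further, and a wider depinch scales it more, matching the magnitude correlation described above.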

In some embodiments, in accordance with a determination that the second input corresponds to a request to display the virtual object in the augmented reality environment (e.g., in field of view 6036 of the one or more cameras, as described with reference to FIG. 6T), the device displays a second animated transition showing the three-dimensional representation of the virtual object being reoriented from a respective orientation relative to a virtual plane on the display (e.g., the orientation of virtual chair 5020 shown in FIG. 6R) to a third orientation (e.g., the orientation of virtual chair 5020 shown in FIG. 6T), where the third orientation is determined based on the current orientation of the portion of the physical environment that is captured in the field of view of the one or more cameras. For example, the three-dimensional representation of the virtual object is reoriented so that it is at a fixed angle relative to a predefined plane (e.g., floor surface 5038) identified in the live image of physical environment 5002 captured in the field of view of the cameras (e.g., a physical surface, such as a vertical wall or a horizontal floor surface, that can support the three-dimensional representation of the virtual object).
In some embodiments, in at least one respect, the orientation of the virtual object in the augmented reality view is constrained by the orientation of the virtual object in the staging user interface. For example, when the virtual object transitions from the staging user interface to the augmented reality view, the rotation angle of the virtual object about at least one axis of the three-dimensional coordinate system is maintained (e.g., as described with reference to FIGS. 6Q-6U, the rotation of virtual chair 5020 described with reference to FIGS. 6J-6K is maintained). In some embodiments, the light source projected on the representation of the virtual object in the second user interface area is a virtual light source. In some embodiments, the third representation of the virtual object in the third user interface area is illuminated by real-world light sources (e.g., as detected in and/or determined from the field of view of the one or more cameras).

In some embodiments, the first criteria include (912) a criterion that is met when (e.g., in accordance with a determination that) the first input includes a tap input by the first contact at a location on the touch-sensitive surface that corresponds to virtual object indicator 5022 (e.g., an indicator, such as an icon, that overlaps and/or is adjacent to the representation of the virtual object on the display). For example, virtual object indicator 5022 provides an indication that the virtual object corresponding to the virtual object indicator can be viewed in a staging view (e.g., staging user interface 6010) and an augmented reality view (e.g., field of view 6036 of the cameras) (e.g., as described in more detail below with reference to method 1000). Determining whether to display the second representation of the virtual object in the second user interface area based on whether the first input includes a tap input enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the first criteria include (914) a criterion that is met when (e.g., in accordance with a determination that) the first contact is maintained at a location on the touch-sensitive surface that corresponds to the first representation of the virtual object, with less than a threshold amount of movement, for at least a predefined threshold amount of time (e.g., a long-press time threshold). For example, the first criteria are met by a touch-hold input. In some embodiments, the first criteria include a requirement that, in order for the criteria to be met, the first contact moves after the first contact has been maintained at the location on the touch-sensitive surface that corresponds to the representation of the virtual object, with less than the threshold amount of movement, for at least the predefined threshold amount of time. For example, the first criteria are met by a touch-hold input followed by a drag input. Determining whether to display the second representation of the virtual object in the second user interface region in accordance with whether the contact is maintained at a location on the touch-sensitive surface that corresponds to the representation of the virtual object, with less than a threshold amount of movement, for at least a predefined amount of time, enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
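The touch-hold criterion just described (less than a threshold amount of movement, maintained for at least a threshold amount of time) can be sketched as a check over sampled contact positions. This is not part of the application's disclosure; the data structure, function names, and threshold values are all illustrative assumptions:

```python
import math
from dataclasses import dataclass

# Hypothetical threshold values, chosen only for illustration.
LONG_PRESS_TIME_THRESHOLD = 0.5   # seconds
MOVEMENT_THRESHOLD = 10.0         # points

@dataclass
class ContactSample:
    t: float  # seconds since the first sample (touch-down)
    x: float
    y: float

def meets_touch_hold_criterion(samples):
    """True when the contact stays within MOVEMENT_THRESHOLD of its
    touch-down location for at least LONG_PRESS_TIME_THRESHOLD."""
    if not samples:
        return False
    first = samples[0]
    for s in samples:
        moved = math.hypot(s.x - first.x, s.y - first.y)
        if moved >= MOVEMENT_THRESHOLD:
            return False  # moved too far before the hold completed
        if s.t - first.t >= LONG_PRESS_TIME_THRESHOLD:
            return True   # held in place long enough
    return False
```

A touch-hold-and-drag recognizer would first require this function to return True, and only then begin tracking the subsequent drag movement.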

In some embodiments, the first criteria include (916) a criterion that is met when (e.g., in accordance with a determination that) a characteristic intensity of the first contact increases above a first intensity threshold (e.g., the deep press intensity threshold ITD). For example, as described with reference to Figures 6Q-6T, the criterion is met when the characteristic intensity of contact 6034 increases above the deep press intensity threshold ITD, as indicated by intensity level meter 5028. In some embodiments, in accordance with a determination that the contact meets criteria for recognizing another type of gesture (e.g., a tap), the device also performs another predefined function, other than triggering the second (e.g., staging) user interface, while maintaining display of the virtual object. In some embodiments, the first criteria require that the first input is not a tap input (e.g., a hard tap input for which the intensity reaches above a threshold intensity before lift-off of the contact is detected within a tap time threshold of the initial touch-down of the contact). In some embodiments, the first criteria include a criterion that requires, in order for the criteria to be met, that the first contact moves after the intensity of the first contact exceeds the first intensity threshold. For example, the first criteria are met by a press input followed by a drag input. Determining whether to display the virtual object in the second user interface region in accordance with whether the characteristic intensity of the contact increases above the first intensity threshold enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
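The distinction drawn above between a deep press, a quick tap, and other presses can be sketched as a small classifier. This is an illustrative simplification, not the application's method: real intensity-based gesture recognition tracks the intensity curve over time, while here only a peak intensity and a duration are assumed:

```python
# Hypothetical stand-ins for the thresholds named in the text.
DEEP_PRESS_INTENSITY_THRESHOLD = 0.8  # corresponds to IT_D (normalized units)
TAP_TIME_THRESHOLD = 0.3              # seconds

def classify_press(peak_intensity, duration):
    """'deep_press' when the characteristic intensity rose above the deep
    press threshold; 'tap' when the contact lifted off quickly at light
    intensity; 'other' otherwise (e.g., a long light press)."""
    if peak_intensity > DEEP_PRESS_INTENSITY_THRESHOLD:
        return "deep_press"
    if duration < TAP_TIME_THRESHOLD:
        return "tap"
    return "other"
```

Under this sketch, a "deep_press" result would trigger the staging user interface, while a "tap" result would trigger a different predefined function while display of the virtual object is maintained.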

In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets second criteria (e.g., interface scrolling criteria), the device scrolls (918) the first user interface region (and the representation of the virtual object) in a direction that corresponds to a direction of movement of the first contact (e.g., the first criteria are not met, and display of the representation of the virtual object in the second user interface region is forgone), wherein the second criteria require that the first input includes movement of the first contact in a direction across the touch-sensitive surface by more than a threshold distance (e.g., the second criteria are met by a swipe gesture, such as a vertical swipe or a horizontal swipe gesture). For example, as described with reference to Figures 6B-6C, an upward vertical swipe gesture by contact 6002 causes instant messaging user interface 5008 and virtual chair 5020 to scroll upward. In some embodiments, the first criteria also require that the first input includes movement of the first contact by more than a threshold distance in order for the first criteria to be met, and the device determines whether the first input meets the first criteria (e.g., staging trigger criteria) or the second criteria (e.g., interface scrolling criteria) based on whether an initial portion of the first input (e.g., a touch-hold or a press on the representation of the virtual object) meets object selection criteria. In some embodiments, a swipe input that is initiated at a touch location other than the locations of the virtual object and of the AR icon of the virtual object meets the second criteria. Determining whether to scroll the first user interface region in response to the first input in accordance with whether the first input meets the second criteria enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
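The dispatch between scrolling and staging described above — same drag movement, different outcome depending on whether the initial portion of the input selected the virtual object — can be sketched as follows. The names and the threshold value are assumptions for illustration only:

```python
SWIPE_DISTANCE_THRESHOLD = 30.0  # hypothetical value, points

def dispatch_movement(object_selected_first, dx, dy):
    """If the initial portion of the input selected the virtual object
    (e.g., a touch-hold or press on its representation), a sufficiently
    long drag triggers the staging behavior; otherwise the same movement
    scrolls the first user interface region."""
    distance = (dx * dx + dy * dy) ** 0.5
    if distance <= SWIPE_DISTANCE_THRESHOLD:
        return "none"  # movement too short to count as a swipe or drag
    return "staging" if object_selected_first else "scroll"
```

The point of the design is that one gesture shape (a swipe) is overloaded: the disambiguating signal is carried entirely by what happened before the movement began.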

In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the first input by the first contact meets third (e.g., AR triggering) criteria, the device displays (920) a third representation of the virtual object with a representation of the field of view of the one or more cameras. For example, as described with reference to Figures 6AD-6AG, a long touch input by contact 6044, followed by an upward drag input by contact 6044 that drags virtual chair 5020, causes virtual chair 5020 to be displayed in field of view 6036 of the cameras.

In some embodiments, the third criteria include criteria that are met, e.g., in accordance with a determination that: the one or more cameras are in an active state; a device orientation falls within a defined range (e.g., a defined range of rotation angles about one or more axes, relative to a defined original orientation); the input by the contact includes a selection input (e.g., a long touch) followed by a drag input (movement of the contact that moves the virtual object on the display) (e.g., movement to within a predetermined distance of an edge of the display); a characteristic intensity of the contact increases above an AR-trigger intensity threshold (e.g., the light press threshold ITL or the deep press threshold ITD); a duration of the contact increases above an AR-trigger duration threshold (e.g., a long-press threshold); and/or a distance moved by the contact increases above an AR-trigger distance threshold (e.g., a long-swipe threshold). In some embodiments, a control (e.g., toggle control 6018) for displaying the representation of the virtual object in the second user interface region (e.g., staging user interface 6010) is displayed in a user interface that includes the representation of the virtual object and field of view 6036 of the one or more cameras (e.g., a third user interface region that replaces at least a portion of the second user interface region).

In some embodiments, when transitioning directly from the first user interface region (e.g., a non-AR, non-staging, touch-screen UI view) to the third user interface region (e.g., an augmented reality view), the device displays an animated transition that shows the three-dimensional representation of the virtual object being reoriented from a respective orientation represented in the touch-screen UI (e.g., the non-AR, non-staging view) on the display to an orientation that is predefined relative to a current orientation of a portion of the physical environment captured in the field of view of the one or more cameras. For example, as shown in Figures 6AD-6AJ, when transitioning directly from the first user interface region (e.g., instant messaging user interface 5008, as shown in Figure 6AD) to the third user interface region (e.g., an augmented reality user interface that includes field of view 6036 of the cameras, as shown in Figure 6AJ), virtual chair 5020 changes from a first orientation, as shown in Figures 6AD-6AH, to a predefined orientation relative to floor surface 5038 in physical environment 5002 as captured in field of view 6036 of the cameras (e.g., as shown in Figure 6AJ). For example, the three-dimensional representation of the virtual object is reoriented such that the three-dimensional representation of the virtual object is at a fixed angle relative to a predefined plane identified in a live image of physical environment 5002 (e.g., a physical surface capable of supporting the three-dimensional representation of the virtual object, such as a vertical wall or a horizontal floor surface (e.g., floor surface 5038)). Determining whether to display the third representation of the virtual object with the field of view of the cameras in response to the first input, in accordance with whether the first input meets the third criteria, enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting the first input by the first contact, the device determines (922), by one or more device orientation sensors, a current device orientation of the device (e.g., an orientation relative to the physical environment surrounding the device), and the third criteria (e.g., AR triggering criteria) require that the current device orientation is within a first orientation range in order for the third criteria to be met (e.g., the third criteria are met when an angle between the device and the ground is less than a threshold angle, which indicates that the device is sufficiently parallel to the ground (so as to bypass the clearance state)). In some embodiments, the first criteria (e.g., staging trigger criteria) require that the current device orientation is within a second orientation range in order for the first criteria to be met (e.g., the first criteria are met when the angle between the device and the ground is within a threshold of 90 degrees, which indicates that the device is sufficiently upright relative to the ground to first enter the clearance state). Determining whether to display the third representation of the virtual object with the field of view of the cameras in response to the first input, in accordance with whether the device orientation is within the orientation range, enables multiple different types of operations to be performed in response to the first input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
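The orientation-range dispatch above can be sketched as a single function of the device's tilt angle. The threshold value and function names are illustrative assumptions; the application only specifies that each view requires the orientation to fall within its own range:

```python
AR_TILT_THRESHOLD_DEG = 30.0  # hypothetical threshold angle

def view_for_device_tilt(tilt_deg):
    """tilt_deg is the angle between the device and the ground
    (0 = lying flat / parallel to the ground, 90 = fully upright).
    A near-parallel device goes straight to the AR view; an upright
    device enters the staging view first."""
    if tilt_deg < AR_TILT_THRESHOLD_DEG:
        return "ar"       # sufficiently parallel: bypass the clearance state
    if tilt_deg <= 90.0:
        return "staging"  # sufficiently upright: enter staging first
    return "none"
```

The design intuition is that a device held flat is already aimed like a viewfinder at a surface, so routing directly to the camera view skips an unnecessary intermediate step.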

In some embodiments, at least one display property (e.g., size, shape, respective angles about yaw, pitch, and roll axes, etc.) of the second representation of the virtual object is applied (924) to the third representation of the virtual object. For example, as described with reference to Figures 6Q-6U, when the third representation of virtual chair 5020 is displayed in an augmented reality view that includes field of view 6036 of the cameras (e.g., as shown in Figure 6U), a rotation of the second representation of virtual chair 5020 that was applied in staging user interface 6010, as described with reference to Figures 6J-6K, is maintained. In some embodiments, the orientation of the virtual object in the augmented reality view is constrained, in at least one respect, by the orientation of the virtual object in the staging user interface. For example, when transitioning the virtual object from the staging view to the augmented reality view, a rotation angle of the virtual object about at least one axis (e.g., the yaw, pitch, and roll axes) of a predefined three-dimensional coordinate system is maintained. In some embodiments, the at least one display property of the second representation of the virtual object is applied to the third representation of the virtual object only if the second representation of the virtual object has been manipulated in some way (e.g., a change in size, shape, texture, orientation, etc.) through user input. In other words, changes made in the staging view are maintained when the object is shown in the augmented reality view, or are used to constrain the appearance of the object in the augmented reality view in one or more ways. Applying at least one display property of the second representation of the virtual object to the third representation of the virtual object (e.g., without requiring further user input to apply the same display property to both the second representation of the virtual object and the third representation of the virtual object) enhances the operability of the device (e.g., by allowing the user to apply a rotation to the virtual object while an enlarged version of the virtual object is displayed in the second user interface, and having the rotation applied to the third representation of the virtual object that is displayed with the representation of the field of view of the one or more cameras), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
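The carry-over behavior above — staging-view manipulations constraining the AR representation, and only the manipulated properties being carried over — can be sketched as a dictionary merge. The property names and function signature are assumptions for illustration:

```python
def carry_over_display_properties(staging_props, ar_props, manipulated_keys):
    """Return the AR representation's display properties, with any
    property the user actually manipulated in the staging view
    (e.g., a yaw rotation) carried over so that it constrains the
    AR appearance; unmanipulated properties keep their AR defaults."""
    merged = dict(ar_props)
    for key in manipulated_keys:
        if key in staging_props:
            merged[key] = staging_props[key]
    return merged
```

For example, a yaw rotation set in staging survives into the AR view, while a size change made in staging can be discarded there, since the AR view constrains size to the object's simulated real-world dimensions.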

In some embodiments, in response to detecting at least an initial portion of the first input by the first contact (926) (e.g., including: detecting the first contact; or detecting an input by the first contact that meets respective predefined criteria without meeting the first criteria; or detecting an input that meets the first criteria): the device activates the one or more cameras (e.g., activates the cameras without immediately displaying the field of view of the cameras on the display), and the device analyzes the field of view of the one or more cameras to detect one or more planes in the field of view of the one or more cameras. In some embodiments, display of field of view 6036 of the one or more cameras is delayed after the one or more cameras are activated (e.g., until a second input that corresponds to a request to display the virtual object in an augmented reality environment is detected, until at least one field-of-view plane is detected, or until a field-of-view plane that corresponds to an anchor plane defined for the virtual object is detected). In some embodiments, field of view 6036 of the one or more cameras is displayed at a time that corresponds to activation of the one or more cameras (e.g., at the same time that the one or more cameras are activated). In some embodiments, field of view 6036 of the one or more cameras is displayed before a plane is detected in the field of view of the one or more cameras (e.g., the field of view of the one or more cameras is displayed in response to detecting the first input by the contact and in accordance with the determination). Activating the cameras, and detecting one or more field-of-view planes by analyzing the field of view of the cameras, in response to detecting the initial portion of the first input (e.g., before the third representation of the virtual object is displayed with the representation of the field of view of the one or more cameras) increases the efficiency of the device (e.g., by reducing the amount of time needed to determine a position and/or orientation of the third representation of the virtual object relative to a respective plane in the field of view of the cameras), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting a respective plane (e.g., floor surface 5038) in the field of view of the one or more cameras, the device, which has one or more tactile output generators 167, outputs (928) a tactile output indicating that the respective plane was detected in the field of view of the one or more cameras. In some embodiments, field of view 6036 may be shown before a field-of-view plane is identified. In some embodiments, after at least one field-of-view plane has been detected, or after all field-of-view planes have been identified, additional user interface controls and/or icons are overlaid on the real-world image in the field of view. Outputting a tactile output that indicates that a plane was detected in the field of view of the cameras provides the user with feedback indicating that the plane has been detected. Providing improved haptic feedback enhances the operability of the device (e.g., by helping the user to provide proper inputs and reducing unnecessary additional inputs for placing the virtual object), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, a size of the third representation of the virtual object on the display is determined (930) based on a simulated real-world size of the virtual object and a distance between the one or more cameras and a location in field of view 6036 of the one or more cameras that has a fixed spatial relationship to the third representation of the virtual object (e.g., a plane, such as floor surface 5038, to which the virtual object is attached). In some embodiments, the size of the third representation of the virtual object is constrained such that a scale of the size of the third representation of the virtual object relative to the field of view of the one or more cameras is maintained. In some embodiments, one or more physical size parameters (e.g., length, width, depth, and/or radius) are defined for the virtual object. In some embodiments, in the second user interface (e.g., the staging user interface), the virtual object is not constrained by its defined physical size parameters (e.g., the size of the virtual object can change in response to user input). In some embodiments, the third representation of the virtual object is constrained by its defined size parameters. When user input for changing a position of the virtual object in the augmented reality view relative to the physical environment represented in the field of view is detected, or when user input for changing a zoom level of the field of view is detected, or when user input corresponding to movement of the device relative to the physical environment surrounding the device is detected, the appearance (e.g., size, viewing perspective) of the virtual object changes in a manner that is constrained by: a fixed spatial relationship between the virtual object and the physical environment (e.g., as represented by a fixed spatial relationship between an anchor plane of the virtual object and a plane in the augmented reality environment), and a fixed scale based on the predefined size parameters of the virtual object and the actual size of the physical environment. Determining the size of the third representation of the virtual object based on the simulated real-world size of the virtual object and the distance between the one or more cameras and the location in the field of view of the cameras (e.g., without requiring further user input to resize the third representation of the virtual object to simulate the real-world size of the virtual object) enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
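The size relationship above — on-screen size proportional to the object's simulated real-world size and inversely proportional to the camera-to-anchor distance — is the standard pinhole-camera projection. A minimal sketch, with an assumed focal length in pixels (a rendering detail the application does not specify):

```python
def on_screen_size_px(simulated_real_size_m, camera_distance_m, focal_length_px):
    """Pinhole-camera approximation: the displayed size of the third
    representation scales linearly with the object's simulated
    real-world size and inversely with the distance between the
    camera and the object's anchor location (e.g., its anchor plane)."""
    if camera_distance_m <= 0:
        raise ValueError("camera distance must be positive")
    return simulated_real_size_m * focal_length_px / camera_distance_m
```

This is why walking the device toward the placed object makes its representation grow exactly as a real object of that size would, preserving the fixed scale between the virtual object and the physical environment.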

In some embodiments, the second input that corresponds to the request to display the virtual object in the augmented reality environment includes (932) an input that (selects and) drags the second representation of the virtual object (e.g., a drag over a distance that increases beyond a distance threshold, a drag beyond a defined boundary, and/or a drag to a position within a threshold distance of an edge (e.g., a bottom edge, a top edge, and/or a side edge) of the display or of the second user interface region). Displaying the third representation of the virtual object with the representation of the field of view of the cameras, in response to detecting the second input that corresponds to the request to display the virtual object in the augmented reality environment, provides additional control options without cluttering the second user interface with additional displayed controls (e.g., controls for displaying the augmented reality environment from the second user interface). Providing additional control options without cluttering the second user interface with additional displayed controls enhances the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
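The drag-to-edge form of the second input can be sketched as a proximity test against the display bounds. The threshold value and coordinate convention (origin at the top-left, sizes in points) are illustrative assumptions:

```python
EDGE_THRESHOLD = 40.0  # hypothetical value, points

def drag_reaches_edge(x, y, display_w, display_h):
    """True when the dragged representation's position is within
    EDGE_THRESHOLD of any display edge — one form the second input
    (a drag requesting the augmented reality view) can take."""
    return (x < EDGE_THRESHOLD or y < EDGE_THRESHOLD
            or display_w - x < EDGE_THRESHOLD
            or display_h - y < EDGE_THRESHOLD)
```

Using edge proximity as the trigger lets the drag gesture itself double as the mode switch, so no dedicated on-screen control is needed.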

In some embodiments, while the second representation of the virtual object is displayed in the second user interface region (e.g., staging user interface 6010, as shown in Figure 6Z), the device detects (934) a fourth input that meets respective criteria for redisplaying the first user interface region (e.g., a tap, a hard press, or a touch-hold-and-drag input at a location on the touch-sensitive surface that corresponds to the second representation of the virtual object, or at another location on the touch-sensitive surface (e.g., at a bottom or an edge of the second user interface region), and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to the first user interface region), and, in response to detecting the fourth input, the device ceases to display the second representation of the virtual object in the second user interface region, and the device redisplays the first representation of the virtual object in the first user interface region. For example, as shown in Figures 6Z-6AC, in response to an input by contact 6042 at a location that corresponds to back control 6016 displayed in staging user interface 6010, the device ceases to display the second representation of virtual chair 5020 in the second user interface region (e.g., staging user interface 6010), and the device redisplays the first representation of virtual chair 5020 in the first user interface region (e.g., instant messaging user interface 5008). In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance, position, and/or orientation as before the transition to the staging view and/or the augmented reality view. For example, in Figure 6AC, virtual chair 5020 is displayed in instant messaging user interface 5008 with the same orientation as virtual chair 5020 displayed in instant messaging user interface 5008 in Figure 6A. In some embodiments, when transitioning back to displaying the virtual object in the first user interface region, the device continuously displays the virtual object on the screen. For example, in Figures 6Y-6AC, virtual chair 5020 is displayed continuously during the transition from displaying staging user interface 6010 to displaying instant messaging user interface 5008. Determining whether to redisplay the first representation of the virtual object in the first user interface, in accordance with whether the fourth input, detected while the second representation of the virtual object is displayed in the second user interface, meets the criteria for redisplaying the first user interface, enables multiple different types of operations to be performed in response to the fourth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while the third representation of the virtual object is displayed with the representation of field of view 6036 of the one or more cameras (e.g., as shown in Figure 6U), the device detects (936) a fifth input that meets respective criteria for redisplaying the second user interface region (e.g., a tap, a hard press, or a touch-and-drag input at a location on the touch-sensitive surface that corresponds to the third representation of the virtual object, or at another location on the touch-sensitive surface, and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to display of the second user interface region), and, in response to detecting the fifth input, the device ceases to display the third representation of the virtual object and the representation of the field of view of the one or more cameras, and redisplays the second representation of the virtual object in the second user interface region. For example, as shown in Figures 6V-6Y, in response to an input by contact 6040 at a location that corresponds to toggle control 6018 displayed in the third user interface that includes field of view 6036 of the cameras, the device ceases to display field of view 6036 of the cameras and redisplays staging user interface 6010. In some embodiments, the second representation of the virtual object is displayed in the second user interface region with the same orientation that the second representation of the virtual object had when shown in the augmented reality view. In some embodiments, when transitioning back to displaying the virtual object in the second user interface region, the device continuously displays the virtual object on the screen. For example, in Figures 6V-6Y, virtual chair 5020 is displayed continuously during the transition from displaying field of view 6036 of the cameras to displaying staging user interface 6010. Determining whether to redisplay the second representation of the virtual object in the second user interface, in accordance with whether the fifth input, detected while the third representation of the virtual object is displayed with the field of view of the cameras, meets the criteria for redisplaying the second user interface, enables multiple different types of operations to be performed in response to the fifth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while the third representation of the virtual object is displayed with the representation 6036 of the field of view of the one or more cameras, the device detects (938) a sixth input that satisfies respective criteria for redisplaying the first user interface region (e.g., instant messaging user interface 5008), and in response to detecting the sixth input, the device ceases to display the third representation of the virtual object (e.g., virtual chair 5020) and the representation of the field of view 6036 of the one or more cameras (e.g., as shown in FIG. 6U), and the device redisplays the first representation of the virtual object in the first user interface region (e.g., as shown in FIG. 6AC). In some embodiments, the sixth input is, for example, a tap, hard press, or touch-and-drag input at a location on the touch-sensitive surface that corresponds to the third representation of the virtual object or at another location on the touch-sensitive surface, and/or an input at a location on the touch-sensitive surface that corresponds to a control for returning to display of the first user interface region. In some embodiments, the first representation of the virtual object is displayed in the first user interface region with the same appearance and position as those shown before the transition to the staging view and/or the augmented reality view. In some embodiments, when transitioning back to displaying the virtual object in the first user interface region, the device continuously displays the virtual object on the screen. Determining whether to redisplay the first representation of the virtual object in the first user interface, depending on whether the sixth input detected while the third representation of the virtual object is displayed with the camera's field of view satisfies the criteria for redisplaying the first user interface, enables multiple different types of operations to be performed in response to the sixth input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting the first input by the first contact, and in accordance with a determination that the input by the first contact satisfies the first criteria, the device continuously displays (940) the virtual object while transitioning from displaying the first user interface region (e.g., instant messaging user interface 5008) to displaying the second user interface region (e.g., staging user interface 6010), including displaying an animation (e.g., movement, rotation about one or more axes, and/or scaling) of the first representation of the virtual object in the first user interface region transitioning into the second representation of the virtual object in the second user interface region. For example, in FIGS. 6E-6I, during the transition from displaying instant messaging user interface 5008 to displaying staging user interface 6010, virtual chair 5020 is continuously displayed and animated (e.g., the orientation of virtual chair 5020 changes). In some embodiments, the virtual object has an orientation, position, and/or distance defined relative to a plane in the camera's field of view (e.g., defined based on the shape and orientation of the first representation of the virtual object as shown in the first user interface region), and, when transitioning to the second user interface region, the first representation of the virtual object is moved, resized, and/or reoriented into the second representation of the virtual object at a new location on the display (e.g., the center of a virtual staging plane in the second user interface region), and during the movement, or at the end of the movement, the virtual object is reoriented such that the virtual object is at a predetermined angle relative to a predefined virtual staging plane that is defined independently of the physical environment surrounding the device. Displaying an animation as the first representation of the virtual object in the first user interface transitions into the second representation of the virtual object in the second user interface provides the user with feedback indicating that the first input satisfies the first criteria. Providing improved feedback enhances the operability of the device (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting the second input by the second contact, and in accordance with a determination that the second input by the second contact corresponds to a request to display the virtual object in an augmented reality environment, the device continuously displays (942) the virtual object while transitioning from displaying the second user interface region (e.g., staging user interface 6010) to displaying the third user interface region that includes the field of view 6036 of the one or more cameras, including displaying an animation (e.g., movement, rotation about one or more axes, and/or scaling) of the second representation of the virtual object in the second user interface region transitioning into the third representation of the virtual object in the third user interface region that includes the field of view of the one or more cameras. For example, in FIGS. 6Q-6U, during the transition from displaying staging user interface 6010 to displaying the camera's field of view 6036, virtual chair 5020 is continuously displayed and animated (e.g., the position and size of virtual chair 5020 change). In some embodiments, the virtual object is reoriented such that the virtual object is at a predefined orientation, location, and/or distance relative to a plane detected in the field of view of the one or more cameras (e.g., a physical surface that can support the three-dimensional representation of the user interface object, such as a vertical wall or a horizontal floor surface). Displaying an animation as the second representation of the virtual object in the second user interface transitions into the third representation of the virtual object in the third user interface provides the user with feedback indicating that the second input corresponds to a request to display the virtual object in an augmented reality environment. Providing the user with improved visual feedback enhances the operability of the device (e.g., by helping the user provide appropriate inputs and reducing user errors when operating/interacting with the device), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
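Both transition animations (operations 940 and 942) keep the object continuously on screen while its position, size, and orientation are interpolated between the representation in the outgoing region and the representation in the incoming region. The following is a hedged sketch of such an interpolation; the keyframe fields and the linear easing are illustrative assumptions, not the patent's stated implementation:

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b for t in [0, 1]."""
    return a + (b - a) * t

def transition_frame(start, end, t):
    """One animation frame between two representations of the virtual object.

    `start` and `end` are dicts with 'x', 'y', 'scale', and 'angle' keys
    describing the object in the outgoing and incoming UI regions. Because a
    frame exists for every intermediate t, the object never disappears
    during the transition - the display is continuous.
    """
    return {key: lerp(start[key], end[key], t) for key in start}
```

A real implementation would typically also apply an easing curve and interpolate 3D rotation with quaternions, but the continuity property illustrated here is the same.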

It should be understood that the particular order in which the operations in FIGS. 9A-9D have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 900 described above with respect to FIGS. 9A-9D. For example, the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, haptic outputs, movements, and/or animations described above with reference to method 900 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, intensity thresholds, fields of view, haptic outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.

FIGS. 10A-10D are flow diagrams illustrating a method 1000 of displaying an item with a visual indication that the item corresponds to a virtual three-dimensional object, in accordance with some embodiments. Method 1000 is performed at an electronic device (e.g., device 300 in FIG. 3, or portable multifunction device 100 in FIG. 1A) with a display and a touch-sensitive surface (e.g., a touch-screen display that serves both as the display and as the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 1000 are, optionally, combined and/or the order of some operations is, optionally, changed.

As described below, method 1000 relates to displaying items in a first user interface and a second user interface. Depending on whether an item corresponds to a respective virtual three-dimensional object, each displayed item either has or does not have a visual indication that the item corresponds to a virtual three-dimensional object. Providing the user with an indication of whether an item is a virtual three-dimensional object increases the efficiency with which the user can perform operations on the first item (e.g., by helping the user provide appropriate inputs depending on whether the item is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

The device receives (1002) a request to display a first user interface that includes a first item (e.g., an icon, a thumbnail, an image, an emoji, an attachment, a sticker, an application icon, an avatar, etc.). For example, in some embodiments, the request is an input (e.g., as described with reference to FIG. 7A) for opening a user interface (e.g., Internet browser user interface 5060, as shown in FIG. 7B) for displaying a representation of the first item in a predefined environment associated with the first item. The predefined environment is, optionally, a user interface of an application (e.g., an email application, an instant messaging application, a browser application, a word processing application, an e-reader application, etc.) or a system user interface (e.g., a lock screen, a notification interface, a suggestions interface, a control panel user interface, a home screen user interface, etc.).

In response to the request to display the first user interface, the device displays (1004) the first user interface (e.g., Internet browser user interface 5060, as shown in FIG. 7B) with a representation of the first item. In accordance with a determination that the first item corresponds to a respective virtual three-dimensional object, the device displays the representation of the first item with a visual indication that the first item corresponds to a first respective virtual three-dimensional object (e.g., an image, such as an icon and/or a background panel, an outline, and/or text, displayed at a location that corresponds to the representation of the first item). In accordance with a determination that the first item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the first item without the visual indication. For example, as shown in FIG. 7B, in Internet browser user interface 5060, web object 5068 (which includes a representation of virtual three-dimensional lamp object 5084) is displayed with a visual indication (virtual object indicator 5080) that virtual lamp 5084 is a virtual three-dimensional object, and web object 5074 is displayed without a visual object indicator because web object 5074 does not include an item that corresponds to a virtual three-dimensional object.

After displaying the representation of the first item, the device receives (1006) a request (e.g., an input as described with reference to FIGS. 7H-7L) to display a second user interface (e.g., instant messaging user interface 5008, as shown in FIG. 7M) that includes a second item (e.g., an icon, a thumbnail, an image, an emoji, an attachment, a sticker, an application icon, an avatar, etc.). The second item is distinct from the first item, and the second user interface is distinct from the first user interface. For example, in some embodiments, the request is another input for opening a user interface for displaying a representation of the second item in a predefined environment associated with the second item. The predefined environment is, optionally, a user interface of an application other than the application used to show the first item (e.g., an email application, an instant messaging application, a browser application, a word processing application, an e-reader application, etc.) or a system user interface other than the system user interface used to show the first item (e.g., a lock screen, a notification interface, a suggestions interface, a control panel user interface, a home screen user interface, etc.).

In response to the request to display the second user interface, the device displays (1008) the second user interface (e.g., instant messaging user interface 5008, as shown in FIG. 7M) with a representation of the second item. In accordance with a determination that the second item corresponds to a respective virtual three-dimensional object, the device displays the representation of the second item with a visual indication that the second item corresponds to a second respective virtual three-dimensional object (e.g., the same visual indication as the visual indication that the first item corresponds to a virtual three-dimensional object). In accordance with a determination that the second item does not correspond to a respective virtual three-dimensional object, the device displays the representation of the second item without the visual indication. For example, as shown in FIG. 7M, in instant messaging user interface 5008, virtual three-dimensional chair object 5020 is displayed with a visual indication (virtual object indicator 5022) that virtual chair 5020 is a virtual three-dimensional object, and emoji 7020 is displayed without a visual object indicator because emoji 7020 does not include an item that corresponds to a virtual three-dimensional object.
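The branching in operations 1004 and 1008 — show the indicator only for items backed by a virtual three-dimensional object, and show the same indicator across applications — reduces to a single check applied uniformly to every item. The following is a minimal illustrative sketch; the item fields and the indicator name are assumptions introduced for exposition:

```python
def representation_for(item):
    """Return how an item should be drawn in a user interface.

    Items that correspond to a virtual three-dimensional object get the
    shared visual indication (the same indicator across applications, as in
    operation 1018); all other items are drawn without it.
    """
    has_3d_object = item.get("virtual_3d_object") is not None
    return {
        "item": item["name"],
        "indicator": "virtual_object_indicator" if has_3d_object else None,
    }
```

Centralizing the check like this is one way to guarantee that, say, a chair attachment in a messaging thread and a lamp embedded in a web page receive visually identical indicators.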

In some embodiments, displaying the first item (e.g., virtual lamp 5084) with the visual indication (e.g., virtual object indicator 5080) that the first item corresponds to the first respective virtual three-dimensional object includes (1010): in response to detecting movement of the device that causes a change from a first device orientation to a second device orientation (e.g., as detected by an orientation sensor (e.g., one or more accelerometers 168 of device 100)), displaying movement of the first item (e.g., tilting of the first item relative to the first user interface and/or movement of the first item) that corresponds to the change from the first device orientation to the second device orientation. For example, the first device orientation is the orientation of device 100 as shown in FIG. 7F1, and the second device orientation is the orientation of device 100 as shown in FIG. 7G1. In response to the movement shown in FIGS. 7F1-7G1, the first item (e.g., virtual lamp 5084) tilts (e.g., as shown in FIGS. 7F2-7G2). In some embodiments, if the second object corresponds to a virtual three-dimensional object, the second object also responds to detected movement of the device in the manner described above (e.g., to indicate that the second object also corresponds to a virtual three-dimensional object).

Displaying movement of the first item that corresponds to the change from the first device orientation to the second device orientation provides the user with visual feedback indicating the behavior of a virtual three-dimensional object. Providing the user with improved visual feedback enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object at various orientations without needing to provide further input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
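The parallax behavior in operation 1010 maps a change in device orientation to a tilt of the item's representation. A small sketch of one such mapping follows, assuming the orientation change arrives as pitch/roll deltas in degrees; the gain and the clamp limit are illustrative values, not taken from the patent:

```python
def item_tilt(delta_pitch, delta_roll, gain=0.5, limit=15.0):
    """Map a device-orientation change (degrees) to an item tilt (degrees).

    The item tilts in proportion to how far the device rotated between the
    first and second device orientations, clamped to +/- `limit` so that
    small hand movements produce a subtle parallax cue rather than a full
    rotation of the item.
    """
    def clamp(v):
        return max(-limit, min(limit, v))

    return clamp(delta_pitch * gain), clamp(delta_roll * gain)
```

The clamp keeps the effect readable as an indicator (the item hints that it is three-dimensional) rather than turning into free rotation of the model.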

In some embodiments, displaying the representation of the first item with the visual indication that the first item corresponds to the first respective virtual three-dimensional object includes (1012): in response to detecting a first input by a first contact that scrolls the first user interface while the representation of the first item is displayed in the first user interface (e.g., a swipe input in a first direction on the first user interface, or a touch-hold input on a scroll button at an end of a scroll bar): the device translates the representation of the first item on the display in accordance with the scrolling of the first user interface (e.g., moves the anchor position of the first item, in a direction opposite to the scrolling, by a distance based on the amount of scrolling of the first user interface (e.g., when the first user interface is dragged upward by a contact moving on the touch-sensitive surface, the representation of the first item moves upward on the display with the first user interface)), and the device rotates the representation of the first item relative to a plane defined by the first user interface (or the display) in accordance with the direction of scrolling of the first user interface. For example, as shown in FIGS. 7C-7D, in response to detecting an input by contact 7002 that scrolls Internet browser user interface 5060 while the representation of virtual lamp 5084 is displayed in Internet browser user interface 5060, virtual lamp 5084 is translated in accordance with the scrolling of Internet browser user interface 5060, and virtual lamp 5084 is rotated relative to display 112 in accordance with the direction of the movement path of contact 7002. In some embodiments, in accordance with a determination that the first user interface is dragged upward, the representation of the first item moves upward with the first user interface, and the viewing perspective of the first item, as shown in the first user interface, changes as though the user were viewing the first item from a different vantage point (e.g., a lower angle). In some embodiments, in accordance with a determination that the second user interface is dragged upward, the representation of the second item moves upward with the second user interface, and the viewing perspective of the second item, as shown in the second user interface, changes as though the user were viewing the second item from a different vantage point (e.g., a lower angle).
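Operation 1012 ties two effects to one scroll gesture: the item's anchor translates with the page content, and its viewing angle rotates with the scroll direction and amount. A sketch of that coupling follows, assuming a vertical scroll offset in points and a fixed degrees-per-point rotation factor (both names and constants are illustrative assumptions):

```python
def scrolled_item_transform(anchor_y, scroll_dy, degrees_per_point=0.25):
    """Translate and rotate an item's representation for a scroll of
    `scroll_dy` points (positive = content dragged upward).

    The anchor moves with the page content, and the item rotates about an
    axis parallel to the display plane, so the user appears to view it from
    a lower (or higher) vantage point as it moves up (or down) the screen.
    """
    new_anchor_y = anchor_y - scroll_dy   # item moves with the page
    tilt = scroll_dy * degrees_per_point  # viewing perspective changes
    return new_anchor_y, tilt
```

Coupling translation and rotation to the same gesture is what makes the item read as a three-dimensional object sitting behind the page rather than a flat image embedded in it.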

Displaying movement of the item that corresponds to the change from the first device orientation to the second device orientation provides the user with visual feedback indicating the change in device orientation. Providing the user with improved visual feedback enhances the operability of the device (e.g., by allowing the user to view the virtual three-dimensional object at various orientations without needing to provide further input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while displaying, in the first user interface (e.g., Internet browser user interface 5060, as shown in FIG. 7B), the representation of the first item (e.g., lamp object 5084) with the visual indication (e.g., virtual object indicator 5080), the device displays (1014) a representation of a third item, where the representation of the third item is displayed without the visual indication, to indicate that the third item does not correspond to a virtual three-dimensional object (e.g., the third item does not correspond to any three-dimensional object that can be rendered in an augmented reality environment). For example, as shown in FIG. 7B, in Internet browser user interface 5060, web objects 5074, 5070, and 5076 are displayed without visual object indicators because web objects 5074, 5070, and 5076 do not correspond to virtual three-dimensional objects.

Displaying, in the first user interface, the first item with the visual indication that the first item is a virtual three-dimensional object and the third item without the visual indication increases the efficiency with which the user can perform operations using the first user interface (e.g., by helping the user provide appropriate inputs depending on whether an item the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while displaying, in the second user interface (e.g., instant messaging user interface 5008, as shown in FIG. 7M), the representation of the second item (e.g., virtual chair 5020) with the visual indication (e.g., virtual object indicator 5022), the device displays (1016) a representation of a fourth item (e.g., emoji 7020), where the representation of the fourth item is displayed without the visual indication, to indicate that the fourth item does not correspond to a respective virtual three-dimensional object.

Displaying, in the second user interface, the second item with the visual indication that the second item is a virtual three-dimensional object and the fourth item without the visual indication increases the efficiency with which the user can perform operations using the second user interface (e.g., by helping the user provide appropriate inputs depending on whether an item the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments (1018), the first user interface (e.g., Internet browser user interface 5060, as shown in FIG. 7B) corresponds to a first application (e.g., an Internet browser application), the second user interface (e.g., instant messaging user interface 5008, as shown in FIG. 7M) corresponds to a second application (e.g., an instant messaging application) that is distinct from the first application, and the representation of the first item (e.g., lamp object 5084) displayed with the visual indication (e.g., virtual object indicator 5080) and the representation of the second item (e.g., virtual chair 5020) displayed with the visual indication (e.g., virtual object indicator 5022) share a set of predefined visual characteristics and/or behavioral characteristics (e.g., they use the same indicator icon, or have the same texture or rendering style and/or the same behavior when invoked by a predefined type of input). For example, the icon for virtual object indicator 5080 and the icon for virtual object indicator 5022 include the same symbol.

Displaying the first item with the visual indication in the first user interface of the first application and the second item with the visual indication in the second user interface of the second application, such that the visual indication of the first item and the visual indication of the second item share a set of predefined visual characteristics and/or behavioral characteristics, increases the efficiency with which the user can perform operations using the second user interface (e.g., by helping the user provide appropriate inputs depending on whether an item the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the first user interface is (1020) an Internet browser application user interface (e.g., Internet browser user interface 5060, as shown in FIG. 7B), and the first item is an element of a web page (e.g., the first item is represented in the web page as an embedded image, a hyperlink, an applet, an emoji, an embedded media object, etc.). For example, the first item is virtual lamp object 5084 of web object 5068.

Displaying a web page element with a visual indication that the web page element is a virtual three-dimensional object increases the efficiency with which the user can perform operations using the Internet browser application (e.g., by helping the user provide appropriate inputs depending on whether a web page element the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,第一用户界面为(1022)电子邮件应用程序用户界面(例如,电子邮件用户界面7052,如图7P所示),并且第一项目为电子邮件的附件(例如,附件7060)。In some embodiments, the first user interface is (1022) an email application user interface (e.g., email user interface 7052, as shown in FIG. 7P), and the first item is an attachment to an email (e.g., attachment 7060).

显示具有指示电子邮件附件为虚拟三维对象的视觉指示的电子邮件附件提高了用户能够使用电子邮件应用程序用户界面来执行操作的效率(例如,通过帮助用户根据用户与其进行交互的电子邮件附件是否为虚拟三维对象来提供适当的输入),从而增强了设备的可操作性,这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。Displaying an email attachment with a visual indication that the email attachment is a virtual three-dimensional object improves the efficiency with which the user is able to perform operations using the email application user interface (e.g., by helping the user provide appropriate inputs depending on whether the email attachment the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,第一用户界面为(1024)即时消息应用程序用户界面(例如,即时消息用户界面5008,如图7M所示),并且第一项目为消息中的附件或元素(例如,虚拟椅子5020)(例如,第一项目为图像、超链接、迷你程序、表情符号、媒体对象等)。In some embodiments, the first user interface is (1024) an instant messaging application user interface (e.g., instant messaging user interface 5008, as shown in FIG. 7M), and the first item is an attachment or element in a message (e.g., virtual chair 5020) (e.g., the first item is an image, a hyperlink, a mini-program, an emoji, a media object, etc.).

显示具有指示消息附件或元素为虚拟三维对象的视觉指示的消息附件或元素提高了用户能够使用即时消息用户界面来执行操作的效率(例如,通过帮助用户根据用户与其进行交互的消息附件或元素是否为虚拟三维对象来提供适当的输入),从而增强了设备的可操作性,这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。Displaying a message attachment or element with a visual indication that the message attachment or element is a virtual three-dimensional object improves the efficiency with which the user is able to perform operations using the instant messaging user interface (e.g., by helping the user provide appropriate inputs depending on whether the message attachment or element the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,第一用户界面为(1026)文件管理应用程序用户界面(例如,文件管理用户界面7036,如图7O所示),并且第一项目为文件预览对象(例如,文件信息区域7046中的文件预览对象7045)。In some embodiments, the first user interface is (1026) a file management application user interface (e.g., file management user interface 7036, as shown in FIG. 7O), and the first item is a file preview object (e.g., file preview object 7045 in file information area 7046).

显示具有指示文件预览对象为虚拟三维对象的视觉指示的文件预览对象提高了用户能够使用文件管理应用程序用户界面来执行操作的效率(例如,通过帮助用户根据用户与其进行交互的文件预览对象是否为虚拟三维对象来提供适当的输入),从而增强了设备的可操作性,这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。Displaying a file preview object with a visual indication that the file preview object is a virtual three-dimensional object improves the efficiency with which the user is able to perform operations using the file management application user interface (e.g., by helping the user provide appropriate inputs depending on whether the file preview object the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,第一用户界面为(1028)地图应用程序用户界面(例如,地图应用程序用户界面7024),并且第一项目为地图中的兴趣点(例如,兴趣点对象7028)的表示(例如,与地图上的位置对应的特征的三维表示(例如,包括与地图上的位置对应的地形和/或结构的三维表示),或者在被致动时使得地图的三维表示显示的控件)。In some embodiments, the first user interface is (1028) a map application user interface (e.g., map application user interface 7024), and the first item is a representation of a point of interest (e.g., point of interest object 7028) in a map (e.g., a three-dimensional representation of a feature corresponding to a location on the map (e.g., including a three-dimensional representation of terrain and/or structures corresponding to the location on the map), or a control that, when actuated, causes a three-dimensional representation of the map to be displayed).

显示地图中具有指示兴趣点的表示为虚拟三维对象的视觉指示的兴趣点的表示提高了用户能够使用地图应用程序用户界面来执行操作的效率(例如,通过帮助用户根据用户与其进行交互的兴趣点的表示是否为虚拟三维对象来提供适当的输入),从而增强了设备的可操作性,这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。Displaying a representation of a point of interest in a map with a visual indication that the representation of the point of interest is a virtual three-dimensional object improves the efficiency with which the user is able to perform operations using the map application user interface (e.g., by helping the user provide appropriate inputs depending on whether the representation of the point of interest the user is interacting with is a virtual three-dimensional object), thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,第一项目与相应的虚拟三维对象对应的视觉指示包括(1030)在不需要涉及相应三维对象的表示的输入的情况下发生的第一项目的动画(例如,随时间推移,应用于第一项目的连续移动或变化的视觉效果(例如,闪光、闪烁等))。In some embodiments, the visual indication that the first item corresponds to a respective virtual three-dimensional object includes (1030) an animation of the first item that occurs without requiring input directed to the representation of the respective three-dimensional object (e.g., a continuously moving or changing visual effect (e.g., flashing, blinking, etc.) applied to the first item over time).

显示在没有涉及相应三维对象的表示的输入的情况下发生的第一项目的动画增强了设备的可操作性(例如,通过减少用户观看第一项目的三维方面所需的输入的数量),这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。Displaying an animation of the first item that occurs without requiring input directed to the representation of the respective three-dimensional object enhances the operability of the device (e.g., by reducing the number of inputs needed for the user to view three-dimensional aspects of the first item), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,在显示具有指示第二项目与相应的虚拟三维对象对应的视觉指示(例如,虚拟对象指示符5022)的第二项目(例如,虚拟椅子5020)的表示时,设备检测(1032)在触敏表面上与第二项目的表示对应的位置处通过第二接触进行的第二输入(例如,如参照图5C至图5F描述的输入),并且响应于检测到通过第二接触进行的第二输入,以及根据确定通过第二接触进行的第二输入满足第一(例如,AR触发)标准,设备在显示器上显示第三用户界面区域,这包括用一个或多个相机的视场5036的表示替换第二用户界面(例如,即时消息用户界面5008)的至少一部分的显示(例如,参照图5F至图5I描述的)以及在从显示第二用户界面切换到显示第三用户界面区域时连续显示第二虚拟三维对象(例如,如本文参考方法800更详细地描述)。在一些实施方案中,设备显示在从显示第二用户界面中具有一个或多个相机的视场的表示的部分切换时连续显示虚拟对象的表示的动画(例如,如本文参考操作834更详细地描述)。In some embodiments, while displaying the representation of the second item (e.g., virtual chair 5020) with a visual indication (e.g., virtual object indicator 5022) that the second item corresponds to a respective virtual three-dimensional object, the device detects (1032) a second input by a second contact at a location on the touch-sensitive surface that corresponds to the representation of the second item (e.g., the input described with reference to FIGS. 5C-5F), and, in response to detecting the second input by the second contact and in accordance with a determination that the second input by the second contact satisfies the first (e.g., AR-trigger) criteria, the device displays a third user interface region on the display, which includes replacing display of at least a portion of the second user interface (e.g., instant messaging user interface 5008) with a representation of the field of view 5036 of the one or more cameras (e.g., as described with reference to FIGS. 5F-5I) and continuously displaying the second virtual three-dimensional object while switching from displaying the second user interface to displaying the third user interface region (e.g., as described in greater detail herein with reference to method 800). In some embodiments, the device displays an animation in which the representation of the virtual object is continuously displayed while switching from displaying the second user interface to displaying the portion that includes the representation of the field of view of the one or more cameras (e.g., as described in greater detail herein with reference to operation 834).

使用第一标准来确定是否显示第三用户界面区域使得多种不同类型的操作能够响应于第二输入而执行。使得多种不同类型的操作能够响应于输入而执行提高了用户能够执行这些操作的效率,从而增强了设备的可操作性,这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。Using the first criteria to determine whether to display the third user interface region enables multiple different types of operations to be performed in response to the second input. Enabling multiple different types of operations to be performed in response to an input increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

在一些实施方案中,(例如,如本文参考方法900更详细地描述)在显示具有指示第二项目与相应的虚拟三维对象对应的视觉指示(例如,虚拟对象指示符5022)的第二项目(例如,虚拟椅子5020)时,设备检测(1034)在触敏表面上与第二项目的表示对应的位置处通过第三接触进行的第三输入(例如,如参照图6E至图6I描述的输入),并且响应于检测到通过第三接触进行的第三输入,以及根据确定通过第三接触进行的第三输入满足第一(例如,登台触发)标准,设备在第四用户界面中显示第二虚拟三维对象,第四用户界面与第二用户界面不同(例如,如参考方法900更详细地描述的登台用户界面6010)。在一些实施方案中,当在第四用户界面(例如,登台用户界面6010,如图6I所示)中显示第二虚拟三维对象时,设备检测第四输入,并且响应于检测到第四输入:根据确定第四输入与在第四用户界面中操纵第二虚拟三维对象的请求对应,设备基于第四输入改变第四用户界面内的第二虚拟三维对象的显示属性(例如,如参照图6J至图6M描述的并且/或者如参照图6N至图6P描述的),并且根据确定第四输入与在增强现实环境中显示第二虚拟对象的请求对应(例如,在触敏表面上与第二用户界面区域中的虚拟对象的表示对应的位置处或者从触敏表面上与第二用户界面区域中的虚拟对象的表示对应的位置进行的轻击输入、按压输入或者触摸保持或按压输入以及随后的拖动输入),设备显示具有一个或多个相机的视场的表示的第二虚拟三维对象(例如,如参照图6Q至图6U描述的)。In some embodiments (e.g., as described in greater detail herein with reference to method 900), while displaying the second item (e.g., virtual chair 5020) with a visual indication (e.g., virtual object indicator 5022) that the second item corresponds to a respective virtual three-dimensional object, the device detects (1034) a third input by a third contact at a location on the touch-sensitive surface that corresponds to the representation of the second item (e.g., the input described with reference to FIGS. 6E-6I), and, in response to detecting the third input by the third contact and in accordance with a determination that the third input by the third contact satisfies the first (e.g., staging-trigger) criteria, the device displays the second virtual three-dimensional object in a fourth user interface that is distinct from the second user interface (e.g., staging user interface 6010, as described in greater detail with reference to method 900). In some embodiments, while displaying the second virtual three-dimensional object in the fourth user interface (e.g., staging user interface 6010, as shown in FIG. 6I), the device detects a fourth input, and, in response to detecting the fourth input: in accordance with a determination that the fourth input corresponds to a request to manipulate the second virtual three-dimensional object in the fourth user interface, the device changes display properties of the second virtual three-dimensional object within the fourth user interface based on the fourth input (e.g., as described with reference to FIGS. 6J-6M and/or as described with reference to FIGS. 6N-6P); and, in accordance with a determination that the fourth input corresponds to a request to display the second virtual object in an augmented reality environment (e.g., a tap input or press input at, or a touch-hold or press input followed by a drag input from, a location on the touch-sensitive surface that corresponds to the representation of the virtual object in the second user interface region), the device displays the second virtual three-dimensional object with a representation of the field of view of the one or more cameras (e.g., as described with reference to FIGS. 6Q-6U).

当在第四用户界面(例如,登台用户界面6010)中显示第二三维对象时,响应于第四输入,设备基于第四输入改变第二三维对象的显示属性,或者显示具有设备的一个或多个相机的视场的表示的第二三维对象。使得多种不同类型的操作能够响应于输入而执行(例如,通过改变第二三维对象的显示属性或者用设备的一个或多个相机的视场的表示来显示第二三维对象)提高了用户能够执行这些操作的效率,从而增强了设备的可操作性,这又通过使用户能够更快速且有效地使用设备而减少了电力使用并且延长了设备的电池寿命。While the second three-dimensional object is displayed in the fourth user interface (e.g., staging user interface 6010), in response to the fourth input, the device either changes display properties of the second three-dimensional object based on the fourth input or displays the second three-dimensional object with a representation of the field of view of one or more cameras of the device. Enabling multiple different types of operations to be performed in response to an input (e.g., by changing display properties of the second three-dimensional object or by displaying the second three-dimensional object with a representation of the field of view of one or more cameras of the device) increases the efficiency with which the user is able to perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
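As an illustrative, non-limiting sketch of the dispatch just described (manipulation inputs change display properties in the staging view, while an AR-display request switches to the camera view), the following hypothetical Python code models the two branches; the gesture names, dictionary fields, and return values are assumptions for illustration, not the patented implementation:

```python
def handle_staging_input(gesture, obj):
    """Dispatch an input received while the object is shown in the staging view."""
    if gesture["type"] in ("drag", "pinch", "rotate"):
        # Manipulation request: change display properties in the staging view.
        if gesture["type"] == "pinch":
            obj["scale"] *= gesture["magnitude"]      # resize
        elif gesture["type"] == "rotate":
            obj["yaw"] += gesture["magnitude"]        # rotate about vertical axis
        elif gesture["type"] == "drag":
            obj["pitch"] += gesture["magnitude"]      # tilt
        return "manipulated"
    if gesture["type"] in ("tap_on_toggle", "press_then_drag"):
        # Request to display the object in the augmented reality environment.
        return "show_in_camera_view"
    return "ignored"

chair = {"scale": 1.0, "yaw": 0.0, "pitch": 0.0}
print(handle_staging_input({"type": "pinch", "magnitude": 1.5}, chair))  # manipulated
print(chair["scale"])                                                    # 1.5
print(handle_staging_input({"type": "tap_on_toggle"}, chair))            # show_in_camera_view
```

The point of the sketch is only that a single input handler can route the same contact to different operations depending on which criteria the input satisfies.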

应当理解,对图10A至图10D中的操作进行描述的特定顺序仅仅是一个示例,并非旨在表明所述顺序是可以执行这些操作的唯一顺序。本领域的普通技术人员会想到多种方式来对本文所述的操作进行重新排序。另外,应当注意,本文相对于本文所述的其他方法(例如,方法800、900、16000、17000、18000、19000和20000)描述的其他过程的细节同样以类似的方式适用于上文相对于图10A至图10D所述的方法1000。例如,上文参考方法1000所述的接触、输入、虚拟对象、用户界面、用户界面区域、视场、移动和/或动画任选地具有本文参考本文所述的其他方法(例如,方法800、900、16000、17000、18000、19000和20000)所述的接触、输入、虚拟对象、用户界面、用户界面区域、视场、移动和/或动画的特征中的一者或多者。为了简明起见,此处不再重复这些细节。It should be understood that the particular order in which the operations in FIGS. 10A-10D have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 1000 described above with respect to FIGS. 10A-10D. For example, the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described above with reference to method 1000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interfaces, user interface regions, fields of view, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 16000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.

图11A至图11V示出了用于根据对象放置标准是否得到满足来显示具有不同视觉属性的虚拟对象的示例用户界面。这些附图中的用户界面被用于示出下文所述的过程,包括图8A至图8E、图9A至图9D、图10A至图10D、图16A至图16G、图17A至图17D、图18A至图18I、图19A至图19H以及图20A至图20F中的过程。为了便于解释,将参考在具有触敏显示器系统112的设备上执行的操作来讨论实施方案中的一些实施方案。在此类实施方案中,焦点选择器为任选地:相应手指或触笔接触、对应于手指或触笔接触的代表点(例如,相应接触的重心或与相应接触相关联的点)、或在触敏显示系统112上所检测到的两个或更多个接触的重心。然而,响应于在显示附图中示出的在显示器450上的用户界面以及焦点选择器时检测触敏表面451上的接触,任选地在具有显示器450和独立的触敏表面451的设备上执行类似的操作。FIGS. 11A-11V illustrate example user interfaces for displaying virtual objects with different visual properties depending on whether object placement criteria are met. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.

图11A至图11E示出了在登台视图中显示虚拟对象的输入。例如,在用户界面(例如,电子邮件用户界面7052、文件管理用户界面7036、地图用户界面7022、即时消息用户界面5008、互联网浏览器用户界面5060或第三方应用程序用户界面)中显示三维对象的二维(例如,缩略的)表示时检测输入。FIGS. 11A-11E illustrate an input for displaying a virtual object in a staging view. For example, the input is detected while a two-dimensional (e.g., thumbnail) representation of a three-dimensional object is displayed in a user interface (e.g., email user interface 7052, file management user interface 7036, map user interface 7022, instant messaging user interface 5008, internet browser user interface 5060, or a third-party application user interface).

在图11A中,互联网浏览器用户界面5060包括三维虚拟对象11002(椅子)的二维表示。在与虚拟对象11002对应的位置处检测到通过接触11004进行的输入(例如,轻击输入)。响应于该轻击输入,登台用户界面6010的显示替换互联网浏览器用户界面5060的显示。In FIG. 11A, internet browser user interface 5060 includes a two-dimensional representation of three-dimensional virtual object 11002 (a chair). An input (e.g., a tap input) by contact 11004 is detected at a location corresponding to virtual object 11002. In response to the tap input, display of staging user interface 6010 replaces display of internet browser user interface 5060.

图11B至图11E示出了当登台用户界面6010的显示替换互联网浏览器用户界面5060的显示时发生的转变。在一些实施方案中,在转变期间,虚拟对象11002逐渐淡入视图,并且/或者登台用户界面6010的控件(例如,后退控件6016、切换控件6018和/或共享控件6020)逐渐淡入视图。例如,在虚拟对象11002淡入视图之后,登台用户界面6010的控件淡入视图(例如,以在显示器上渲染虚拟对象11002的三维表示所需的时间段内延迟控件的显示)。在一些实施方案中,虚拟对象11002的"淡入"包括显示虚拟对象11002的低分辨率、二维和/或全息版本,随后显示虚拟对象11002的最终三维表示。图11B至图11D示出了虚拟对象11002的逐渐淡入。在图11D中,显示虚拟对象11002的阴影11006。图11D至图11E示出了控件6016、6018和6020的逐渐淡入。FIGS. 11B-11E illustrate a transition that occurs as display of staging user interface 6010 replaces display of internet browser user interface 5060. In some embodiments, during the transition, virtual object 11002 gradually fades into view and/or controls of staging user interface 6010 (e.g., back control 6016, toggle control 6018, and/or share control 6020) gradually fade into view. For example, the controls of staging user interface 6010 fade into view after virtual object 11002 fades into view (e.g., to delay display of the controls for the period of time needed to render the three-dimensional representation of virtual object 11002 on the display). In some embodiments, the "fade-in" of virtual object 11002 includes displaying a low-resolution, two-dimensional, and/or holographic version of virtual object 11002 before displaying the final three-dimensional representation of virtual object 11002. FIGS. 11B-11D illustrate the gradual fade-in of virtual object 11002. In FIG. 11D, shadow 11006 of virtual object 11002 is displayed. FIGS. 11D-11E illustrate the gradual fade-in of controls 6016, 6018, and 6020.

图11F至图11G示出了使得虚拟对象11002的三维表示被显示在包括设备100的一个或多个相机的视场6036的用户界面中的输入。在图11F中,在与切换控件6018对应的位置处检测到通过接触11008进行的输入。响应于该输入,包括相机的视场6036的用户界面的显示替换登台用户界面6010的显示,如图11G所示。FIGS. 11F-11G illustrate an input that causes a three-dimensional representation of virtual object 11002 to be displayed in a user interface that includes field of view 6036 of one or more cameras of device 100. In FIG. 11F, an input by contact 11008 is detected at a location corresponding to toggle control 6018. In response to the input, display of the user interface that includes the camera's field of view 6036 replaces display of staging user interface 6010, as shown in FIG. 11G.

如图11G至图11H所示,当最初显示相机的视场6036时,可显示虚拟对象的半透明表示(例如,当未在相机的视场6036中检测到与虚拟对象对应的平面时)。As shown in FIGS. 11G-11H, when the camera's field of view 6036 is initially displayed, a translucent representation of the virtual object may be displayed (e.g., while no plane corresponding to the virtual object has been detected in the camera's field of view 6036).

图11G至图11H示出了显示在包括相机的视场6036的用户界面中的虚拟对象11002的半透明表示。在相对于显示器112的固定位置处显示虚拟对象11002的半透明表示。例如,从图11G到图11H,当设备100相对于物理环境5002移动(例如,如相机的视场6036中的桌子5004的改变的位置所指示)时,虚拟对象11002保持在相对于显示器112的固定位置处。FIGS. 11G-11H show a translucent representation of virtual object 11002 displayed in the user interface that includes the camera's field of view 6036. The translucent representation of virtual object 11002 is displayed at a fixed position relative to display 112. For example, from FIG. 11G to FIG. 11H, as device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in the camera's field of view 6036), virtual object 11002 remains at a fixed position relative to display 112.

在一些实施方案中,根据确定已经在相机的视场6036中检测到与虚拟对象对应的平面,将虚拟对象放置在检测到的平面上。In some embodiments, in accordance with a determination that a plane corresponding to the virtual object has been detected in the camera's field of view 6036, the virtual object is placed on the detected plane.

在图11I中,已经在相机的视场6036中检测到与虚拟对象11002对应的平面,并且将虚拟对象11002放置在检测到的平面上。设备已生成如11010处所示的触觉输出(例如,以指示已经在相机的视场6036中检测到至少一个平面(例如,地板表面5038))。当虚拟对象11002被放置在相对于在相机的视场6036中检测到的平面的位置处时,虚拟对象11002保持在相对于一个或多个相机所捕获的物理环境5002的固定位置处。从图11I到图11J,当设备100相对于物理环境5002移动(例如,如所显示的相机的视场6036中的桌子5004的改变的位置所指示)时,虚拟对象11002保持在相对于物理环境5002的固定位置处。In FIG. 11I, a plane corresponding to virtual object 11002 has been detected in the camera's field of view 6036, and virtual object 11002 has been placed on the detected plane. The device has generated a tactile output, as shown at 11010 (e.g., to indicate that at least one plane (e.g., floor surface 5038) has been detected in the camera's field of view 6036). When virtual object 11002 is placed at a position relative to the plane detected in the camera's field of view 6036, virtual object 11002 remains at a fixed position relative to physical environment 5002 as captured by the one or more cameras. From FIG. 11I to FIG. 11J, as device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in the displayed camera's field of view 6036), virtual object 11002 remains at a fixed position relative to physical environment 5002.
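The anchoring behavior described above (screen-fixed before a plane is detected, world-fixed after placement) can be sketched as follows. This is a hedged, hypothetical model with a simplified 2D "projection" and invented names; it is not ARKit or the patented implementation:

```python
def screen_position(obj, camera_world_pos):
    """Where the object appears on the display for a given camera position."""
    if obj["world_anchor"] is None:
        # Not yet placed: the translucent representation stays at a fixed
        # position relative to the display, regardless of device movement.
        return obj["screen_pos"]
    # Placed: the object is fixed in the world, so its on-screen position is
    # recomputed from the camera's position (simplified as a plain offset).
    ax, ay = obj["world_anchor"]
    cx, cy = camera_world_pos
    return (ax - cx, ay - cy)

def on_plane_detected(obj, plane_point):
    """Anchor the object to the detected plane and signal a tactile output."""
    obj["world_anchor"] = plane_point
    return "tactile_output"

chair = {"world_anchor": None, "screen_pos": (160, 240)}
print(screen_position(chair, (0, 0)))   # (160, 240): fixed to the display
print(screen_position(chair, (5, 0)))   # (160, 240): unchanged while unplaced
on_plane_detected(chair, (100, 50))
print(screen_position(chair, (5, 0)))   # (95, 50)
print(screen_position(chair, (10, 0)))  # (90, 50): moves on screen, fixed in world
```

The design point is the switch of reference frame: before placement the display is the frame of reference, and after placement the physical environment is.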

在一些实施方案中,在显示相机的视场6036时,停止显示控件(例如,后退控件6016、切换控件6018和/或共享控件6020)(例如,根据确定已经经过一段未接收到输入的时间)。在图11J至图11L中,控件6016、6018和6020逐渐淡出(例如,如图11K所示),这增大了显示器112中显示相机的视场6036的部分(例如,如图11L所示)。In some embodiments, while the camera's field of view 6036 is displayed, the controls (e.g., back control 6016, toggle control 6018, and/or share control 6020) cease to be displayed (e.g., in accordance with a determination that a period of time has elapsed during which no input was received). In FIGS. 11J-11L, controls 6016, 6018, and 6020 gradually fade out (e.g., as shown in FIG. 11K), which increases the portion of display 112 in which the camera's field of view 6036 is displayed (e.g., as shown in FIG. 11L).
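A minimal sketch of this control-hiding behavior, assuming a hypothetical timeout value and state dictionary (neither is specified by the document): controls fade out once a period elapses with no input, and are redisplayed when an input is detected.

```python
FADE_TIMEOUT = 3.0  # illustrative: seconds without input before controls fade

def update_controls(state, now):
    """Hide the controls if the inactivity timeout has elapsed."""
    if now - state["last_input_time"] >= FADE_TIMEOUT:
        state["controls_visible"] = False

def on_input(state, now):
    """Any input resets the timer and redisplays the controls."""
    state["last_input_time"] = now
    state["controls_visible"] = True

ui = {"last_input_time": 0.0, "controls_visible": True}
update_controls(ui, 2.0)
print(ui["controls_visible"])  # True: timeout not yet reached
update_controls(ui, 3.5)
print(ui["controls_visible"])  # False: controls faded out
on_input(ui, 4.0)
print(ui["controls_visible"])  # True: redisplayed in response to input
```

This also matches the later behavior in FIGS. 11M-11N, where detecting a new contact causes controls 6016, 6018, and 6020 to be redisplayed.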

图11M至图11S示出了用于当虚拟对象11002显示在包括相机的视场6036的用户界面中时对其进行操纵的输入。FIGS. 11M-11S illustrate inputs for manipulating virtual object 11002 while it is displayed in the user interface that includes the camera's field of view 6036.

在图11M至图11N中,检测到用于改变虚拟对象11002的模拟物理尺寸的通过接触11012和11014进行的输入(例如,展开手势)。响应于检测到输入,重新显示控件6016、6018和6020。当接触11012沿箭头11016所指示的路径移动且接触11014沿箭头11018所指示的路径移动时,虚拟对象11002的尺寸增大。In FIGS. 11M-11N, an input (e.g., an expand gesture) by contacts 11012 and 11014 for changing the simulated physical size of virtual object 11002 is detected. In response to detecting the input, controls 6016, 6018, and 6020 are redisplayed. As contact 11012 moves along the path indicated by arrow 11016 and contact 11014 moves along the path indicated by arrow 11018, the size of virtual object 11002 increases.

在图11N至图11P中,检测到用于改变虚拟对象11002的模拟物理尺寸的通过接触11012和11014进行的输入(例如,捏合手势)。当接触11012沿箭头11020所指示的路径移动且接触11014沿箭头11022所指示的路径移动时,虚拟对象11002的尺寸减小(如图11N至图11O以及图11O至图11P所示)。如图11O所示,在将虚拟对象11002的尺寸调整为其相对于物理环境5002的原始尺寸(例如,最初被放置在物理环境5002中所检测到的平面上时,虚拟对象11002的尺寸,如图11I所示)时,发生触觉输出(如11024处所示)(例如,以提供指示虚拟对象11002已返回到其原始尺寸的反馈)。在图11Q中,接触11012和11014已抬离触摸屏显示器112。In FIGS. 11N-11P, an input (e.g., a pinch gesture) by contacts 11012 and 11014 for changing the simulated physical size of virtual object 11002 is detected. As contact 11012 moves along the path indicated by arrow 11020 and contact 11014 moves along the path indicated by arrow 11022, the size of virtual object 11002 decreases (as shown in FIGS. 11N-11O and 11O-11P). As shown in FIG. 11O, when virtual object 11002 is resized to its original size relative to physical environment 5002 (e.g., the size of virtual object 11002 when it was initially placed on the detected plane in physical environment 5002, as shown in FIG. 11I), a tactile output occurs (as shown at 11024) (e.g., to provide feedback indicating that virtual object 11002 has returned to its original size). In FIG. 11Q, contacts 11012 and 11014 have lifted off of touch screen display 112.

在图11R中,检测到用于使虚拟对象11002返回到其相对于物理环境5002的原始尺寸的输入(例如,双击输入)。在与虚拟对象11002对应的位置处检测到输入,如接触11026所指示。响应于该输入,将虚拟对象11002从图11R所示的减小的尺寸调整为如图11S所指示的虚拟对象11002的原始尺寸。如图11S所示,在将虚拟对象11002的尺寸调整为其相对于物理环境5002的原始尺寸时,发生触觉输出(如11028处所示)(例如,以提供指示虚拟对象11002已返回到其原始尺寸的反馈)。In FIG. 11R, an input (e.g., a double tap input) for returning virtual object 11002 to its original size relative to physical environment 5002 is detected. The input is detected at a location corresponding to virtual object 11002, as indicated by contact 11026. In response to the input, virtual object 11002 is resized from the reduced size shown in FIG. 11R to the original size of virtual object 11002, as indicated in FIG. 11S. As shown in FIG. 11S, when virtual object 11002 is resized to its original size relative to physical environment 5002, a tactile output occurs (as shown at 11028) (e.g., to provide feedback indicating that virtual object 11002 has returned to its original size).
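The resize behavior of FIGS. 11M-11S can be sketched as below: pinch/expand gestures multiply the object's scale, a tactile output fires when a gesture brings the scale back through its original value, and a double tap resets the scale outright. This is an illustrative model under assumed names (a scale of 1.0 stands for "original size relative to the physical environment"), not the patented implementation:

```python
def apply_pinch(obj, scale_factor, tolerance=0.01):
    """Scale the object by a gesture factor; report a tactile output when the
    gesture lands the object back at its original (1.0) size."""
    old = obj["scale"]
    obj["scale"] = old * scale_factor
    returned_to_original = abs(obj["scale"] - 1.0) <= tolerance
    was_away_from_original = abs(old - 1.0) > tolerance
    if returned_to_original and was_away_from_original:
        return "tactile_output"
    return None

def double_tap_reset(obj):
    """Double tap: snap back to the original size and report a tactile output."""
    obj["scale"] = 1.0
    return "tactile_output"

chair = {"scale": 1.0}
apply_pinch(chair, 1.6)            # expand gesture: enlarge
print(chair["scale"])              # 1.6
print(apply_pinch(chair, 0.625))   # pinch back to 1.0 -> tactile_output
print(double_tap_reset(chair))     # tactile_output
```

The tolerance band is an assumption; some snap window is needed because a continuous gesture rarely lands on exactly 1.0.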

在图11T中,在与切换控件6018对应的位置处检测到通过接触11030进行的输入。响应于该输入,登台用户界面6010替换包括相机的视场6036的用户界面的显示,如图11U所示。In FIG. 11T, an input by contact 11030 is detected at a location corresponding to toggle control 6018. In response to the input, staging user interface 6010 replaces display of the user interface that includes the camera's field of view 6036, as shown in FIG. 11U.

在图11U中,在与后退控件6016对应的位置处检测到通过接触11032进行的输入。响应于该输入,互联网浏览器用户界面5060替换登台用户界面6010的显示,如图11V所示。In FIG. 11U, an input by contact 11032 is detected at a location corresponding to back control 6016. In response to the input, internet browser user interface 5060 replaces display of staging user interface 6010, as shown in FIG. 11V.

图12A至图12L示出了用于显示根据设备的一个或多个相机的移动而动态地动画的校准用户界面对象的示例用户界面。这些附图中的用户界面被用于示出下文所述的过程,包括图8A至图8E、图9A至图9D、图10A至图10D、图16A至图16G、图17A至图17D、图18A至图18I、图19A至图19H以及图20A至图20F中的过程。为了便于解释,将参考在具有触敏显示器系统112的设备上执行的操作来讨论实施方案中的一些实施方案。在此类实施方案中,焦点选择器为任选地:相应手指或触笔接触、对应于手指或触笔接触的代表点(例如,相应接触的重心或与相应接触相关联的点)、或在触敏显示系统112上所检测到的两个或更多个接触的重心。然而,响应于在显示附图中示出的在显示器450上的用户界面以及焦点选择器时检测触敏表面451上的接触,任选地在具有显示器450和独立的触敏表面451的设备上执行类似的操作。FIGS. 12A-12L illustrate example user interfaces for displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451 in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.

根据一些实施方案,当接收到在包括一个或多个相机的视场的用户界面中显示虚拟对象的请求但还需要用于设备的校准的另外数据时,显示校准用户界面对象。According to some embodiments, a calibration user interface object is displayed when a request to display a virtual object in a user interface that includes the field of view of one or more cameras is received, but additional data is required for calibration of the device.

图12A示出了要求在包括一个或多个相机的视场6036的用户界面中显示虚拟对象11002的输入。在与切换控件6018对应的位置处检测到通过接触12002进行的输入。响应于该输入,包括相机的视场6036的用户界面的显示替换登台用户界面6010的显示,如图12B所示。在包括相机的视场6036的用户界面中显示虚拟对象11002的半透明表示。在需要校准时(例如,由于在相机的视场6036中未检测到与虚拟对象11002对应的平面),相机的视场6036被模糊(例如,以强调提示和/或校准对象的行为,如下文所述)。FIG. 12A illustrates an input that requests display of virtual object 11002 in a user interface that includes field of view 6036 of one or more cameras. An input by contact 12002 is detected at a location corresponding to toggle control 6018. In response to the input, display of the user interface that includes the camera's field of view 6036 replaces display of staging user interface 6010, as shown in FIG. 12B. A translucent representation of virtual object 11002 is displayed in the user interface that includes the camera's field of view 6036. While calibration is needed (e.g., because no plane corresponding to virtual object 11002 has been detected in the camera's field of view 6036), the camera's field of view 6036 is blurred (e.g., to emphasize the prompt and/or the behavior of the calibration object, as described below).

图12B至图12D示出了提示用户移动设备的动画图像和文本(例如,根据需要校准的确定来显示)。动画图像包括设备100的表示12004、指示需要设备100左右移动的箭头12006和12008、平面的表示12010(例如,用于指示设备100必须相对于平面移动,以便检测与虚拟对象11002对应的平面)。文本提示12012提供关于校准所需的设备100的移动的信息。在图12B至图12C以及图12C至图12D中,相对于平面的表示12010调整设备100的表示12004和箭头12006,以提供校准所需的设备100的移动的指示。从图12C到图12D,设备100相对于物理环境5002移动(例如,如相机的视场6036中的桌子5004的改变的位置所指示)。作为设备100的移动的检测结果,显示校准用户界面对象12014(立方体的轮廓),如图12E-1所指示。FIGS. 12B-12D illustrate an animated image and text that prompt the user to move the device (e.g., displayed in accordance with a determination that calibration is needed). The animated image includes a representation 12004 of device 100, arrows 12006 and 12008 indicating that device 100 needs to be moved from side to side, and a representation 12010 of a plane (e.g., to indicate that device 100 must be moved relative to the plane in order to detect a plane corresponding to virtual object 11002). Text prompt 12012 provides information about the movement of device 100 needed for calibration. In FIGS. 12B-12C and 12C-12D, representation 12004 of device 100 and arrow 12006 are adjusted relative to representation 12010 of the plane to provide an indication of the movement of device 100 needed for calibration. From FIG. 12C to FIG. 12D, device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in the camera's field of view 6036). As a result of detecting the movement of device 100, calibration user interface object 12014 (an outline of a cube) is displayed, as indicated in FIG. 12E-1.

Figures 12E-1 through 12I-1 illustrate the behavior of calibration user interface object 12014, which corresponds to movement of the device 100 relative to the physical environment 5002 as shown in Figures 12E-2 through 12I-2, respectively. In response to movement of the device 100 (e.g., lateral movement), calibration user interface object 12014 is animated (e.g., the cube outline rotates) (e.g., to provide the user with feedback about movement that contributes to calibration). In Figure 12E-1, calibration user interface object 12014, with a first angle of rotation, is shown in a user interface that includes the field of view 6036 of the cameras of the device 100. In Figure 12E-2, the device 100, held by the user's hand 5006, is shown at a first position relative to the physical environment 5002. From Figure 12E-2 to Figure 12F-2, the device 100 moves laterally (to the right) relative to the physical environment 5002. As a result of the movement, the field of view 6036 of the cameras as displayed by the device 100 is updated, and calibration user interface object 12014 has rotated (relative to its position in Figure 12E-1), as shown in Figure 12F-1. From Figure 12F-2 to Figure 12G-2, the device 100 continues to move to the right relative to the physical environment 5002. As a result of the movement, the field of view 6036 of the cameras as displayed by the device 100 is updated again, and calibration user interface object 12014 has rotated further, as shown in Figure 12G-1. From Figure 12G-2 to Figure 12H-2, the device 100 moves upward relative to the physical environment 5002. As a result of the movement, the field of view 6036 of the cameras as displayed by the device 100 is updated. As shown in Figures 12G-1 through 12H-1, calibration user interface object 12014 does not rotate in response to the upward movement of the device shown in Figures 12G-2 through 12H-2 (e.g., to provide the user with an indication that vertical movement of the device does not affect the calibration). From Figure 12H-2 to Figure 12I-2, the device 100 moves further to the right relative to the physical environment 5002. As a result of the movement, the field of view 6036 of the cameras as displayed by the device 100 is updated again, and calibration user interface object 12014 has rotated, as shown in Figure 12I-1.
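The calibration feedback described above (the cube outline rotating in response to lateral device movement while ignoring vertical movement) can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the function name and the fixed degrees-per-unit gain are assumptions.

```python
def calibration_rotation(delta_x, delta_y, current_angle, gain=0.5):
    """Update the rotation angle (in degrees) of the calibration cube outline.

    Only the horizontal component of device movement (delta_x, in arbitrary
    movement units) contributes to the rotation; vertical movement (delta_y)
    is ignored, signaling to the user that it does not help calibration.
    """
    return current_angle + gain * delta_x

# Lateral movement rotates the cube outline...
angle = calibration_rotation(delta_x=40, delta_y=0, current_angle=0.0)
# ...but vertical movement leaves it unchanged.
angle_after_vertical = calibration_rotation(delta_x=0, delta_y=40, current_angle=angle)
```

Here a purely vertical movement produces the same angle as before, which is the on-screen cue that such movement does not advance calibration.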

In Figure 12J, the movement of the device 100 (e.g., as illustrated in Figures 12E through 12I) has satisfied the calibration requirements (e.g., and a plane that corresponds to virtual object 11002 has been detected in the field of view 6036 of the cameras). Virtual object 11002 is placed on the detected plane, and the field of view 6036 of the cameras ceases to be blurred. A tactile output generator outputs a tactile output (as illustrated at 12016) indicating that a plane (e.g., floor surface 5038) has been detected in the field of view 6036 of the cameras. Floor surface 5038 is highlighted to provide an indication of the detected plane.

When virtual object 11002 has been placed at a position relative to the plane detected in the field of view 6036 of the cameras, virtual object 11002 remains at a fixed position relative to the physical environment 5002 captured by the one or more cameras. As the device 100 moves relative to the physical environment 5002 (as shown in Figures 12K-2 through 12L-2), virtual object 11002 remains at the fixed position relative to the physical environment 5002 (as shown in Figures 12K-1 through 12L-1).
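Keeping a placed virtual object at a fixed position in the physical environment amounts to holding its world coordinates constant while recomputing its on-screen position from the camera pose each frame. A minimal sketch with a simplified pinhole camera (no rotation, camera looking down the +z axis); all names and the projection model are illustrative assumptions:

```python
def project_to_screen(world_pos, camera_pos, focal=1.0):
    """Project a world-space point into normalized screen coordinates for a
    camera at camera_pos looking down the +z axis (simplified pinhole model
    with no camera rotation)."""
    x = world_pos[0] - camera_pos[0]
    y = world_pos[1] - camera_pos[1]
    z = world_pos[2] - camera_pos[2]
    return (focal * x / z, focal * y / z)

object_world = (0.0, 0.0, 4.0)  # fixed once placed on the detected plane
before = project_to_screen(object_world, camera_pos=(0.0, 0.0, 0.0))
after = project_to_screen(object_world, camera_pos=(1.0, 0.0, 0.0))  # device moved right
```

When the device moves to the right, the object's screen position shifts to the left while its world position is unchanged, producing the fixed-in-space appearance described above.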

Figures 13A-13M illustrate example user interfaces for constraining rotation of a virtual object about an axis. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.

In Figure 13A, virtual object 11002 is shown in staging user interface 6010. An x-axis, a y-axis, and a z-axis are shown relative to virtual object 11002.

Figures 13B-13C illustrate an input that rotates virtual object 11002 about the y-axis indicated in Figure 13A. In Figure 13B, an input by contact 13002 is detected at a location that corresponds to virtual object 11002. The input moves a distance d1 along a path indicated by arrow 13004. As the input moves along the path, virtual object 11002 rotates about the y-axis (e.g., by 35 degrees) to the position indicated in Figure 13C. In staging user interface 6010, a shadow 13006 that corresponds to virtual object 11002 is displayed. From Figure 13B to Figure 13C, shadow 13006 changes in accordance with the changed position of virtual object 11002.

After contact 13002 lifts off of touch screen 112, virtual object 11002 continues to rotate, as shown in Figures 13C-13D (e.g., in accordance with "momentum" imparted by the movement of contact 13002, to provide the impression that virtual object 11002 behaves like a physical object).
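Such "momentum" behavior, in which rotation continues after lift-off, is commonly modeled by decaying the release velocity each frame until it falls below a stopping threshold. A hedged sketch; the friction factor, thresholds, and function name are illustrative assumptions rather than values from the embodiment:

```python
def momentum_rotation(release_velocity, friction=0.8, min_velocity=1.0):
    """Yield per-frame rotation increments (degrees) after lift-off, decaying
    the velocity imparted by the contact until it falls below min_velocity."""
    v = release_velocity
    while abs(v) >= min_velocity:
        yield v
        v *= friction

# A release velocity of 10 degrees/frame coasts for a few frames, then stops.
coast_steps = list(momentum_rotation(10.0))
extra_rotation = sum(coast_steps)
```

The object thus rotates a bounded additional amount after the finger lifts, with each frame's increment smaller than the last, which reads visually as friction acting on a physical object.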

Figures 13E-13F illustrate an input that rotates virtual object 11002 about the x-axis indicated in Figure 13A. In Figure 13E, an input by contact 13008 is detected at a location that corresponds to virtual object 11002. The input moves a distance d1 along a path indicated by arrow 13010. As the input moves along the path, virtual object 11002 rotates about the x-axis (e.g., by 5 degrees) to the position indicated in Figure 13F. Although the distance d1 that contact 13008 moves in Figures 13E-13F is the same as the distance that contact 13002 moves in Figures 13B-13C, the angle by which virtual object 11002 rotates about the x-axis in Figures 13E-13F is smaller than the angle by which virtual object 11002 rotates about the y-axis in Figures 13B-13C.

Figures 13F-13G illustrate a further input that rotates virtual object 11002 about the x-axis indicated in Figure 13A. In Figure 13F, contact 13008 continues its movement and moves a distance d2 (greater than distance d1) along a path indicated by arrow 13012. As the input moves along the path, virtual object 11002 rotates about the x-axis (by 25 degrees) to the position indicated in Figure 13G. As shown in Figures 13E-13G, movement of contact 13008 by the distance d1 + d2 causes virtual object 11002 to rotate about the x-axis by 30 degrees, whereas in Figures 13B-13C, movement of contact 13002 by the distance d1 causes virtual object 11002 to rotate about the y-axis by 35 degrees.
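One simple way to read Figures 13B-13G is that drag distance maps to rotation angle with a different gain per axis, so equal drags produce unequal rotations about different axes. A sketch under that assumption; the gain values merely echo the first-drag angles in the figures, and the linear model deliberately ignores any additional per-axis nonlinearity:

```python
# Per-axis rotation gains (degrees per unit of the reference drag distance d1).
# Chosen so the same drag rotates the object more about the y-axis than the
# x-axis, as in the figures; purely illustrative values.
GAINS = {"y": 35.0, "x": 5.0}

def rotation_for_drag(axis, distance, d1=1.0):
    """Map a drag of `distance` (in units of the reference distance d1) to a
    rotation angle about the given axis."""
    return GAINS[axis] * distance / d1

y_angle = rotation_for_drag("y", 1.0)  # a drag of d1 mapped to the y-axis
x_angle = rotation_for_drag("x", 1.0)  # the same drag mapped to the x-axis
```

Under this model the same gesture is deliberately less sensitive about the x-axis, which matches the comparison drawn between Figures 13B-13C and 13E-13F.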

After contact 13008 lifts off of touch screen 112, virtual object 11002 rotates in a direction opposite to the direction of the rotation caused by the movement of contact 13008, as shown in Figures 13G-13H (e.g., to indicate that the movement of contact 13008 attempted to rotate virtual object 11002 by an amount beyond a rotation limit).

In Figures 13G-13I, shadow 13006 is not shown (e.g., because virtual object 11002 does not cast a shadow when the object is viewed from below).

In Figure 13I, an input (e.g., a double-tap input) is detected for returning virtual object 11002 to the perspective at which it was originally displayed (e.g., as indicated in Figure 13A). The input occurs at a location that corresponds to virtual object 11002, as indicated by contact 13014. In response to the input, virtual object 11002 rotates about the x-axis (to reverse the rotation that occurred in Figures 13E-13H) and about the y-axis (to reverse the rotation that occurred in Figures 13B-13D). In Figure 13J, the input by contact 13014 has caused virtual object 11002 to return to the perspective at which it was originally displayed.

In some embodiments, an input for adjusting the size of virtual object 11002 is received while staging user interface 6010 is displayed. For example, the input for adjusting the size of virtual object 11002 is a depinch gesture that increases the size of virtual object 11002 (e.g., as described with reference to Figures 6N-6O) or a pinch gesture that decreases the size of virtual object 11002.

In Figure 13J, an input is received for replacing display of staging user interface 6010 with display of a user interface that includes the field of view 6036 of the cameras. The input by contact 13016 is detected at a location that corresponds to toggle control 6018. In response to the input, the user interface that includes the field of view 6036 of the cameras replaces display of staging user interface 6010, as shown in Figure 13K.

In Figure 13K, virtual object 11002 is displayed in the user interface that includes the field of view 6036 of the cameras. A tactile output (as illustrated at 13018) occurs, indicating that a plane that corresponds to virtual object 11002 has been detected in the field of view 6036 of the cameras. The angle of rotation of virtual object 11002 in the user interface that includes the field of view 6036 of the cameras corresponds to the angle of rotation of virtual object 11002 in staging user interface 6010.

While the user interface that includes the field of view 6036 of the cameras is displayed, an input that includes lateral movement causes lateral movement of virtual object 11002 in the user interface that includes the field of view 6036 of the cameras, as shown in Figures 13L-13M. In Figure 13L, a contact 13020 is detected at a location that corresponds to virtual object 11002, and the contact moves along a path indicated by arrow 13022. As the contact moves, virtual object 11002 moves from a first position (as shown in Figure 13L) to a second position (as shown in Figure 13M) along a path that corresponds to the movement of contact 13020.

In some embodiments, an input provided while the user interface that includes the field of view 6036 of the cameras is displayed causes virtual object 11002 to move from a first plane (e.g., floor plane 5038) to a second plane (e.g., table surface plane 5046), as described with reference to Figures 5AJ-5AM.

Figures 14A-14Z illustrate example user interfaces for increasing a second threshold amount of movement required for a second object-manipulation behavior in accordance with a determination that a first object-manipulation behavior satisfies a first threshold amount of movement. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 14AA-14AD, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For convenience of explanation, some of the embodiments will be discussed with reference to operations performed on a device with a touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on the touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with a display 450 and a separate touch-sensitive surface 451, in response to detecting the contacts on the touch-sensitive surface 451 while displaying the user interfaces shown in the figures on the display 450, along with a focus selector.

In Figure 14A, virtual object 11002 is displayed in a user interface that includes the field of view 6036 of the cameras. As described further with reference to Figures 14B-14Z, translation movement meter 14002, scaling movement meter 14004, and rotation movement meter 14006 are used to indicate respective magnitudes of movement that correspond to object-manipulation behaviors (e.g., translation operations, scaling operations, and/or rotation operations). Translation movement meter 14002 indicates a magnitude of lateral (e.g., leftward or rightward) movement of a set of contacts on touch-screen display 112. Scaling movement meter 14004 indicates a magnitude of increasing or decreasing distance between contacts in a set of contacts on touch-screen display 112 (e.g., the magnitude of a pinch or depinch gesture). Rotation movement meter 14006 indicates a magnitude of rotational movement of a set of contacts on touch-screen display 112.

Figures 14B-14E illustrate an input for rotating virtual object 11002 in the user interface that includes the field of view 6036 of the one or more cameras. The input for rotating virtual object 11002 includes a gesture in which a first contact 14008 moves rotationally in a clockwise direction along a path indicated by arrow 14010 and a second contact 14012 moves rotationally in a clockwise direction along a path indicated by arrow 14014. In Figure 14B, contacts 14008 and 14012 with touch screen 112 are detected. In Figure 14C, contact 14008 moves along the path indicated by arrow 14010, and contact 14012 moves along the path indicated by arrow 14014. Because the magnitude of the rotational movement of contacts 14008 and 14012 in Figure 14C has not yet met the threshold RT, virtual object 11002 does not rotate in response to the input. In Figure 14D, the magnitude of the rotational movement of contacts 14008 and 14012 has increased above the threshold RT, and virtual object 11002 has rotated in response to the input (relative to the position of virtual object 11002 shown in Figure 14B). When the magnitude of the rotational movement increases above the threshold RT, the magnitude of movement required to scale virtual object 11002 increases (e.g., the scaling threshold has increased from ST to ST′, as indicated by scaling movement meter 14004), and the magnitude of movement required to translate virtual object 11002 increases (e.g., the translation threshold has increased from TT to TT′, as indicated by translation movement meter 14002). In Figure 14E, contacts 14008 and 14012 continue to move along the rotational paths indicated by arrows 14010 and 14014, respectively, and virtual object 11002 continues to rotate in response to the input. In Figure 14F, contacts 14008 and 14012 have lifted off of touch screen 112.

Figures 14G-14I illustrate an input for scaling virtual object 11002 (e.g., increasing its size) in the user interface that includes the field of view 6036 of the one or more cameras. The input for increasing the size of virtual object 11002 includes a gesture in which a first contact 14016 moves along a path indicated by arrow 14018 and a second contact 14020 moves along a path indicated by arrow 14022 (e.g., such that the distance between contact 14016 and contact 14020 increases). In Figure 14G, contacts 14016 and 14020 with touch screen 112 are detected. In Figure 14H, contact 14016 moves along the path indicated by arrow 14018, and contact 14020 moves along the path indicated by arrow 14022. Because the magnitude by which contact 14016 moves away from contact 14020 in Figure 14H has not yet met the threshold ST, the size of virtual object 11002 is not adjusted in response to the input. In Figure 14I, the magnitude of the scaling movement of contacts 14016 and 14020 has increased above the threshold ST, and the size of virtual object 11002 has increased in response to the input (relative to the size of virtual object 11002 shown in Figure 14H). When the magnitude of the scaling movement increases above the threshold ST, the magnitude of movement required to rotate virtual object 11002 increases (e.g., the rotation threshold has increased from RT to RT′, as indicated by rotation movement meter 14006), and the magnitude of movement required to translate virtual object 11002 increases (e.g., the translation threshold has increased from TT to TT′, as indicated by translation movement meter 14002). In Figure 14J, contacts 14016 and 14020 have lifted off of touch screen 112.

Figures 14K-14M illustrate an input for translating virtual object 11002 (e.g., moving virtual object 11002 to the left) in the user interface that includes the field of view 6036 of the one or more cameras. The input for moving virtual object 11002 includes a gesture in which a first contact 14024 moves along a path indicated by arrow 14026 and a second contact 14028 moves along a path indicated by arrow 14030 (e.g., such that contacts 14024 and 14028 both move to the left). In Figure 14K, contacts 14024 and 14028 with touch screen 112 are detected. In Figure 14L, contact 14024 moves along the path indicated by arrow 14026, and contact 14028 moves along the path indicated by arrow 14030. Because the magnitude of the leftward movement of contacts 14024 and 14028 in Figure 14L has not yet met the threshold TT, virtual object 11002 does not move in response to the input. In Figure 14M, the magnitude of the leftward movement of contacts 14024 and 14028 has increased above the threshold TT, and virtual object 11002 has moved in the direction of the movement of contacts 14024 and 14028. When the magnitude of the translation movement increases above the threshold TT, the magnitude of movement required to scale virtual object 11002 increases (e.g., the scaling threshold has increased from ST to ST′, as indicated by scaling movement meter 14004), and the magnitude of movement required to rotate virtual object 11002 increases (e.g., the rotation threshold has increased from RT to RT′, as indicated by rotation movement meter 14006). In Figure 14N, contacts 14024 and 14028 have lifted off of touch screen 112.

Figures 14O-14Z illustrate an input that includes gestures for translating virtual object 11002 (e.g., moving virtual object 11002 to the right), scaling virtual object 11002 (e.g., increasing the size of virtual object 11002), and rotating virtual object 11002. In Figure 14O, contacts 14032 and 14036 with touch screen 112 are detected. In Figures 14O-14P, contact 14032 moves along a path indicated by arrow 14034, and contact 14036 moves along a path indicated by arrow 14038. The magnitude of the rightward movement of contacts 14032 and 14036 has increased above the threshold TT, and virtual object 11002 has moved in the direction of the movement of contacts 14032 and 14036. Because the movement of contacts 14032 and 14036 has met the threshold TT, the magnitude of movement required to scale virtual object 11002 increases to ST′, and the magnitude of movement required to rotate virtual object 11002 increases to RT′. After the threshold TT has been met (as indicated by high-water mark 14043 shown by translation movement meter 14002 in Figure 14Q), any lateral movement of contacts 14032 and 14036 will cause lateral movement of virtual object 11002.

In Figures 14Q-14R, contact 14032 moves along a path indicated by arrow 14040, and contact 14036 moves along a path indicated by arrow 14042. In Figure 14R, the magnitude by which contact 14032 has moved away from contact 14036 has exceeded the original scaling threshold ST, but has not yet met the increased scaling threshold ST′. While the increased scaling movement threshold ST′ is in effect, scaling does not occur until the magnitude by which contact 14032 moves away from contact 14036 increases above the increased scaling movement threshold ST′; accordingly, from Figure 14Q to Figure 14R, the size of virtual object 11002 does not change. In Figures 14R-14S, as contact 14032 moves along a path indicated by arrow 14044 and contact 14036 moves along a path indicated by arrow 14046, the distance between contact 14032 and contact 14036 continues to increase. In Figure 14S, the magnitude by which contact 14032 has moved away from contact 14036 has exceeded the increased scaling threshold ST′, and the size of virtual object 11002 has increased. After the threshold ST′ has been met (as indicated by high-water mark 14047 shown by scaling movement meter 14004 in Figure 14T), any scaling movement of contacts 14032 and 14036 will cause scaling of virtual object 11002.
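The "high-water mark" behavior shown by meters 14002 and 14004 — once a gesture's cumulative movement exceeds a threshold, that threshold stays satisfied for the remainder of the gesture, so later movement of the same type is applied directly — can be sketched as a latching check. The class and names here are illustrative assumptions:

```python
class LatchingThreshold:
    """A movement threshold with 'high-water mark' behavior: once the
    cumulative movement for a gesture exceeds the threshold, every
    subsequent movement of that type passes without being re-tested."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.high_water = 0.0  # largest cumulative magnitude seen so far

    def passes(self, cumulative_magnitude):
        self.high_water = max(self.high_water, cumulative_magnitude)
        return self.high_water >= self.threshold

t = LatchingThreshold(10.0)
first = t.passes(12.0)  # crosses the threshold
later = t.passes(3.0)   # a smaller later reading still passes: latched
```

Resetting the latch on lift-off (not shown) would restore the initial gating for the next gesture.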

In Figures 14T-14U, contact 14032 moves along a path indicated by arrow 14048, and contact 14036 moves along a path indicated by arrow 14050. Because the threshold TT has been met (as indicated by high-water mark 14043 shown by translation movement meter 14002), virtual object 11002 moves freely in the direction of the lateral movement of contacts 14032 and 14036.

In Figures 14V-14W, contact 14032 moves along a path indicated by arrow 14052, and contact 14036 moves along a path indicated by arrow 14054. The movement of contacts 14032 and 14036 includes translation movement (leftward movement of contacts 14032 and 14036) and scaling movement (movement that decreases the distance between contact 14032 and contact 14036 (e.g., a pinch gesture)). Because the translation threshold TT has been met (as indicated by high-water mark 14043 shown by translation movement meter 14002), virtual object 11002 moves freely in the direction of the lateral movement of contacts 14032 and 14036, and because the increased scaling threshold ST′ has been met (as indicated by high-water mark 14047 shown by scaling movement meter 14004), virtual object 11002 scales freely in response to the movement of contact 14032 toward contact 14036. From Figure 14V to Figure 14W, the size of virtual object 11002 has decreased, and virtual object 11002 has moved to the left, in response to the movement of contact 14032 along the path indicated by arrow 14052 and the movement of contact 14036 along the path indicated by arrow 14054.

In Figures 14X-14Z, contact 14032 moves rotationally in a counterclockwise direction along a path indicated by arrow 14056, and contact 14036 moves rotationally in a counterclockwise direction along a path indicated by arrow 14058. In Figure 14Y, the magnitude of the rotational movement of contacts 14032 and 14036 has exceeded the original rotation threshold RT, but has not yet met the increased rotation threshold RT′. While the increased rotational movement threshold RT′ is in effect, rotation of virtual object 11002 does not occur until the magnitude of the rotational movement of contacts 14032 and 14036 increases above the increased rotational movement threshold RT′; accordingly, from Figure 14X to Figure 14Y, virtual object 11002 does not rotate. In Figures 14Y-14Z, as contact 14032 moves along a path indicated by arrow 14060 and contact 14036 moves along a path indicated by arrow 14062, contacts 14032 and 14036 continue to move rotationally in the counterclockwise direction. In Figure 14Z, the magnitude of the rotational movement of contacts 14032 and 14036 has exceeded the increased rotation threshold RT′, and virtual object 11002 has rotated in response to the input.

Figures 14AA-14AD are flow diagrams illustrating operations for increasing a second threshold amount of movement required for a second object-manipulation behavior in accordance with a determination that a first object-manipulation behavior satisfies a first threshold amount of movement. The operations described with reference to Figures 14AA-14AD are performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) with a display-generation component (e.g., a display, a projector, a heads-up display, or the like) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that serves as both the display-generation component and the touch-sensitive surface). Some operations described with reference to Figures 14AA-14AD are, optionally, combined, and/or the order of some operations is, optionally, changed.

At operation 14066, a first portion of a user input that includes movement of one or more contacts is detected. At operation 14068, a determination is made as to whether the movement of the one or more contacts (e.g., at a location that corresponds to virtual object 11002) increases above an object-rotation threshold (e.g., the rotation threshold RT indicated by rotation movement meter 14006). In accordance with a determination that the movement of the one or more contacts increases above the object-rotation threshold (e.g., as described with reference to Figures 14B-14D), the flow proceeds to operation 14070. In accordance with a determination that the movement of the one or more contacts does not increase above the object-rotation threshold, the flow proceeds to operation 14074.

At operation 14070, the object (e.g., virtual object 11002) is rotated based on the first portion of the user input (e.g., as described with reference to Figures 14B-14D). At operation 14072, the object-translation threshold is increased (e.g., from TT to TT′, as described with reference to Figure 14D), and the object-scaling threshold is increased (e.g., from ST to ST′, as described with reference to Figure 14D). The flow proceeds from operation 14072 to operation 14086 of Figure 14AB, as indicated at A.

At operation 14074, a determination is made as to whether the movement of the one or more contacts (e.g., at a location that corresponds to virtual object 11002) increases above an object-translation threshold (e.g., the translation threshold TT indicated by translation movement meter 14002). In accordance with a determination that the movement of the one or more contacts increases above the object-translation threshold (e.g., as described with reference to Figures 14K-14M), the flow proceeds to operation 14076. In accordance with a determination that the movement of the one or more contacts does not increase above the object-translation threshold, the flow proceeds to operation 14080.

At operation 14076, the object (e.g., virtual object 11002) is translated based on the first portion of the user input (e.g., as described with reference to Figures 14K-14M). At operation 14078, the object-rotation threshold is increased (e.g., from RT to RT′, as described with reference to Figure 14M), and the object-scaling threshold is increased (e.g., from ST to ST′, as described with reference to Figure 14M). The flow proceeds from operation 14078 to operation 14100 of Figure 14AC, as indicated at B.

在操作14080处,确定一个或多个接触(例如,在与虚拟对象11002对应的位置处)的移动是否增大到高于对象缩放阈值(例如,缩放移动计14004所指示的缩放阈值ST)。根据确定一个或多个接触的移动增大到高于对象缩放阈值(例如,如参照图14G至图14I描述的),流程前进至操作14082。根据确定一个或多个接触的移动未增大到高于对象缩放阈值,流程前进至操作14085。At operation 14080, it is determined whether the movement of one or more contacts (eg, at a location corresponding to virtual object 11002) has increased above an object zoom threshold (eg, zoom threshold ST indicated by zoom movement meter 14004). Upon determining that the movement of the one or more contacts has increased above the object zoom threshold (eg, as described with reference to FIGS. 14G-14I ), flow proceeds to operation 14082 . Upon determining that the movement of the one or more contacts has not increased above the object zoom threshold, flow proceeds to operation 14085.

在操作14082处,基于用户输入的第一部分来缩放对象(例如,虚拟对象11002)(例如,如参照图14G至图14I描述的)。在操作14084处,增大对象旋转阈值(例如,从RT增大到RT′,如参照图14I描述的),并且增大对象平移阈值(例如,从TT增大到TT′,如参照图14I描述的)。流程从操作14084前进至图14AD的操作14114,如C处所指示。At operation 14082, the object (eg, virtual object 11002) is scaled based on the first portion of the user input (eg, as described with reference to Figures 14G-14I). At operation 14084, the object rotation threshold is increased (eg, from RT to RT', as described with reference to Figure 14I ), and the object translation threshold is increased (eg, from TT to TT', as described with reference to Figure 14I ) describe). Flow proceeds from operation 14084 tooperation 14114 of FIG. 14AD, as indicated at C.

在操作14085处,检测包括一个或多个接触的移动的用户输入的另外部分。流程从操作14086前进至操作14066。At operation 14085, an additional portion of the user input including movement of one or more contacts is detected. Flow proceeds fromoperation 14086 to operation 14066.

In Figure 14AB, at operation 14086, an additional portion of the user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14086 to operation 14088.

At operation 14088, it is determined whether the movement of the one or more contacts is a rotational movement. In accordance with a determination that the movement of the one or more contacts is a rotational movement, flow proceeds to operation 14090. In accordance with a determination that the movement of the one or more contacts is not a rotational movement, flow proceeds to operation 14092.

At operation 14090, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input (e.g., as described with reference to Figures 14D-14E). Because the rotation threshold was previously met, the object rotates freely in accordance with the additional rotation input.

At operation 14092, it is determined whether the movement of the one or more contacts has increased above the increased object translation threshold (e.g., the translation threshold TT′ indicated by translation movement meter 14002 in Figure 14D). In accordance with a determination that the movement of the one or more contacts has increased above the increased object translation threshold, flow proceeds to operation 14094. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object translation threshold, flow proceeds to operation 14096.

At operation 14094, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input.

At operation 14096, it is determined whether the movement of the one or more contacts has increased above the increased object scaling threshold (e.g., the scaling threshold ST′ indicated by scaling movement meter 14004 in Figure 14D). In accordance with a determination that the movement of the one or more contacts has increased above the increased object scaling threshold, flow proceeds to operation 14098. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object scaling threshold, flow returns to operation 14086.

At operation 14098, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.

In Figure 14AC, at operation 14100, an additional portion of the user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14100 to operation 14102.

At operation 14102, it is determined whether the movement of the one or more contacts is a translational movement. In accordance with a determination that the movement of the one or more contacts is a translational movement, flow proceeds to operation 14104. In accordance with a determination that the movement of the one or more contacts is not a translational movement, flow proceeds to operation 14106.

At operation 14104, the object (e.g., virtual object 11002) is translated based on the additional portion of the user input. Because the translation threshold was previously met, the object translates freely in accordance with the additional translation input.

At operation 14106, it is determined whether the movement of the one or more contacts has increased above the increased object rotation threshold (e.g., the rotation threshold RT′ indicated by rotational movement meter 14006 in Figure 14M). In accordance with a determination that the movement of the one or more contacts has increased above the increased object rotation threshold, flow proceeds to operation 14108. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object rotation threshold, flow proceeds to operation 14110.

At operation 14108, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.

At operation 14110, it is determined whether the movement of the one or more contacts has increased above the increased object scaling threshold (e.g., the scaling threshold ST′ indicated by scaling movement meter 14004 in Figure 14M). In accordance with a determination that the movement of the one or more contacts has increased above the increased object scaling threshold, flow proceeds to operation 14112. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object scaling threshold, flow returns to operation 14100.

At operation 14112, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input.

In Figure 14AD, at operation 14114, an additional portion of the user input that includes movement of the one or more contacts is detected. Flow proceeds from operation 14114 to operation 14116.

At operation 14116, it is determined whether the movement of the one or more contacts is a scaling movement. In accordance with a determination that the movement of the one or more contacts is a scaling movement, flow proceeds to operation 14118. In accordance with a determination that the movement of the one or more contacts is not a scaling movement, flow proceeds to operation 14120.

At operation 14118, the object (e.g., virtual object 11002) is scaled based on the additional portion of the user input. Because the scaling threshold was previously met, the object scales freely in accordance with the additional scaling input.

At operation 14120, it is determined whether the movement of the one or more contacts has increased above the increased object rotation threshold (e.g., the rotation threshold RT′ indicated by rotational movement meter 14006 in Figure 14I). In accordance with a determination that the movement of the one or more contacts has increased above the increased object rotation threshold, flow proceeds to operation 14122. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object rotation threshold, flow proceeds to operation 14124.

At operation 14122, the object (e.g., virtual object 11002) is rotated based on the additional portion of the user input.

At operation 14124, it is determined whether the movement of the one or more contacts has increased above the increased object translation threshold (e.g., the translation threshold TT′ indicated by translation movement meter 14002 in Figure 14I). In accordance with a determination that the movement of the one or more contacts has increased above the increased object translation threshold, flow proceeds to operation 14126. In accordance with a determination that the movement of the one or more contacts has not increased above the increased object translation threshold, flow proceeds to operation 14114.
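The threshold scheme walked through in operations 14066-14126 can be summarized as: the first gesture type (rotation, translation, or scaling) whose contact movement exceeds its base threshold is recognized and thereafter acts freely, while the thresholds for the remaining gesture types are increased. The following is an illustrative sketch of that behavior, not the patent's actual implementation; the names (`GestureRecognizer`, `BASE`, `RAISED`) and the numeric threshold values are hypothetical.

```python
# Illustrative sketch of the threshold scheme in operations 14066-14126.
# BASE corresponds to RT/TT/ST; RAISED corresponds to RT'/TT'/ST'.
BASE = {"rotate": 12.0, "translate": 12.0, "scale": 12.0}
RAISED = {"rotate": 20.0, "translate": 20.0, "scale": 20.0}

class GestureRecognizer:
    def __init__(self):
        self.thresholds = dict(BASE)
        self.free = set()  # gesture types already recognized

    def feed(self, kind, magnitude):
        """Process one portion of the input; return the action to perform, or None."""
        if kind in self.free:
            return kind  # previously met threshold: gesture acts freely (e.g., operation 14090)
        if magnitude > self.thresholds[kind]:
            self.free.add(kind)
            # Raise the thresholds of the gesture types not yet recognized
            for other in self.thresholds:
                if other not in self.free:
                    self.thresholds[other] = RAISED[other]
            return kind
        return None

r = GestureRecognizer()
assert r.feed("rotate", 15.0) == "rotate"        # exceeds base RT: rotation begins
assert r.feed("translate", 15.0) is None         # below raised TT': ignored
assert r.feed("translate", 25.0) == "translate"  # exceeds TT': translation begins
assert r.feed("rotate", 1.0) == "rotate"         # rotation is now free
```

This captures why, for example, a small translation is ignored after a rotation has been recognized but a sufficiently large one still exceeds the increased threshold.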

Figures 15A-15AI illustrate example user interfaces for generating an audio alert in accordance with a determination that movement of the device has moved a displayed virtual object outside of the field of view of one or more cameras of the device. The user interfaces in these figures are used to illustrate the processes described below, including the processes in Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F. For ease of explanation, some of the embodiments will be discussed with reference to operations performed on a device having touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective finger or stylus contact, a representative point corresponding to a finger or stylus contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112. However, analogous operations are, optionally, performed on a device with display 450 and separate touch-sensitive surface 451, in response to detecting contacts on touch-sensitive surface 451 while displaying the user interfaces shown in the figures on display 450, along with a focus selector.

Figures 15A-15AI illustrate user interface and device operations that occur while an accessibility feature is active. In some embodiments, the accessibility features include a mode in which device features are accessible using a reduced number of inputs or alternative inputs (e.g., so that users with a limited ability to provide the input gestures described above can more easily access device features). For example, the accessibility mode is a switch control mode in which a first input gesture (e.g., a swipe input) is used to advance or reverse through available device operations, and a selection input (e.g., a double-tap input) is used to perform the currently indicated operation. As the user interacts with the device, audio alerts are generated (e.g., to provide the user with feedback indicating that an operation has been performed, to indicate the current display state of virtual object 11002 relative to the staging user interface or the field of view of the one or more cameras of the device, and so on).
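The switch control interaction model described above can be sketched as a small state machine: swipes move a selection cursor through the available operations, and a double tap performs the currently indicated one. This is a minimal illustrative model only; the class name `SwitchControl` and the operation labels are hypothetical.

```python
# Minimal illustrative model of the switch control mode described above:
# a rightward swipe advances the currently indicated operation, a leftward
# swipe reverses, and a double tap performs the indicated operation.
class SwitchControl:
    def __init__(self, operations):
        self.operations = operations
        self.index = 0  # currently indicated operation

    def swipe(self, direction):
        step = 1 if direction == "right" else -1
        self.index = (self.index + step) % len(self.operations)
        return f"Selected: {self.operations[self.index]}"

    def double_tap(self):
        return f"Performed: {self.operations[self.index]}"

sc = SwitchControl(["share", "tilt up", "tilt down", "rotate clockwise"])
assert sc.swipe("right") == "Selected: tilt up"
assert sc.swipe("right") == "Selected: tilt down"
assert sc.double_tap() == "Performed: tilt down"
```

In the figures that follow, each swipe input similarly advances the selection (with an audio alert naming the selected control), and each double tap performs the selected operation.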

In Figure 15A, instant messaging user interface 5008 includes a two-dimensional representation of three-dimensional virtual object 11002. Selection cursor 15001 is shown surrounding three-dimensional virtual object 11002 (e.g., to indicate that the currently selected operation is an operation to be performed on virtual object 11002). An input by contact 15002 (e.g., a double-tap input) for performing the currently indicated operation (e.g., displaying a three-dimensional representation of virtual object 11002 in staging user interface 6010) is detected. In response to the input, display of staging user interface 6010 replaces display of instant messaging user interface 5060, as shown in Figure 15B.

In Figure 15B, virtual object 11002 is displayed in staging user interface 6010. An audio alert, as indicated at 15008, is generated (e.g., via device speaker 111) to indicate the state of the device. For example, audio alert 15008 includes a notification, as indicated at 15010: "Chair is now shown in staging view."

In Figure 15B, selection cursor 15001 is shown surrounding sharing control 6020 (e.g., to indicate that the currently selected operation is a sharing operation). An input by contact 15004 (e.g., a rightward swipe along the path indicated by arrow 15006) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15C, tilt-up control 15012 is displayed (e.g., to indicate that the currently selected operation is an operation for tilting displayed virtual object 11002 upward). An audio alert, as indicated at 15014, is generated to indicate the state of the device. For example, the audio alert includes a notification, as indicated at 15016: "Selected: tilt up button." An input by contact 15018 (e.g., a rightward swipe along the path indicated by arrow 15020) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15D, tilt-down control 15022 is displayed (e.g., to indicate that the currently selected operation is an operation for tilting displayed virtual object 11002 downward). An audio alert, as indicated at 15024, is generated to indicate the state of the device. For example, the audio alert includes a notification, as indicated at 15026: "Selected: tilt down button." An input by contact 15028 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., virtual object 11002 is tilted downward in the staging view).

In Figure 15E, virtual object 11002 is tilted downward in the staging view. An audio alert, as indicated at 15030, is generated to indicate the state of the device. For example, the audio alert includes a notification, as indicated at 15032: "Chair tilted down 5 degrees; chair is now tilted 10 degrees toward the screen."

In Figure 15F, an input by contact 15034 (e.g., a rightward swipe along the path indicated by arrow 15036) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15G, clockwise rotation control 15038 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating displayed virtual object 11002 clockwise). Audio alert 15040 includes a notification, as indicated at 15042: "Selected: rotate clockwise button." An input by contact 15044 (e.g., a rightward swipe along the path indicated by arrow 15046) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15H, counterclockwise rotation control 15048 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating displayed virtual object 11002 counterclockwise). Audio alert 15050 includes a notification, as indicated at 15052: "Selected: rotate counterclockwise button." An input by contact 15054 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., virtual object 11002 is rotated counterclockwise in the staging view, as indicated in Figure 15I).

In Figure 15I, audio alert 15056 includes a notification, as indicated at 15058: "Chair rotated counterclockwise 5 degrees. Chair is now rotated 5 degrees away from the screen."

In Figure 15J, an input by contact 15060 (e.g., a rightward swipe along the path indicated by arrow 15062) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15K, scale control 15064 is displayed (e.g., to indicate that the currently selected operation is an operation for scaling displayed virtual object 11002). Audio alert 15066 includes a notification, as indicated at 15068: "Scale: adjustable." The keyword "adjustable," together with the control name in the notification, indicates that a swipe input (e.g., a vertical swipe input) can be used to operate the control. For example, as contact 5070 moves upward along the path indicated by arrow 5072, the contact provides an upward swipe input. In response to the input, a scaling operation is performed (e.g., the size of virtual object 11002 increases, as indicated in Figures 15K-15L).

In Figure 15L, audio alert 15074 includes a notification, as indicated at 15076: "Chair now adjusted to 150% of original size." An input for decreasing the size of virtual object 11002 (e.g., a downward swipe input) is provided by contact 5078 moving downward along the path indicated by arrow 5078. In response to the input, a scaling operation is performed (e.g., the size of virtual object 11002 decreases, as indicated in Figures 15L-15M).

In Figure 15M, audio alert 15082 includes a notification, as indicated at 15084: "Chair now adjusted to 100% of original size." Because virtual object 11002 has been resized to the size at which it was initially displayed in staging view 6010, a tactile output occurs (as shown at 15086) (e.g., to provide feedback indicating that virtual object 11002 has returned to its original size).

In Figure 15N, an input by contact 15088 (e.g., a rightward swipe along the path indicated by arrow 15090) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15O, selection cursor 15001 is shown surrounding back control 6016 (e.g., to indicate that the currently selected operation is an operation for returning to a previous user interface). Audio alert 15092 includes a notification, as indicated at 15094: "Selected: back button." An input by contact 15096 (e.g., a rightward swipe along the path indicated by arrow 15098) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15P, selection cursor 15001 is shown surrounding toggle control 6018 (e.g., to indicate that the currently selected operation is an operation for switching between displaying staging user interface 6010 and displaying a user interface that includes field of view 6036 of the cameras). Audio alert 15098 includes a notification, as indicated at 50100: "Selected: world view/staging view toggle." An input by contact 15102 (e.g., a double-tap input) is detected. In response to the input, display of the user interface that includes field of view 6036 of the cameras replaces display of staging user interface 6010 (as indicated in Figure 15Q).

Figures 15Q-15T illustrate a calibration sequence that occurs while field of view 6036 of the cameras is displayed (e.g., because a plane corresponding to virtual object 11002 has not yet been detected in field of view 6036 of the cameras). During the calibration sequence, a semi-transparent representation of virtual object 11002 is displayed, field of view 6036 of the cameras is blurred, and a prompt that includes an animated image (including representation 12004 of device 100 and representation 12010 of a plane) is displayed to prompt the user to move the device. In Figure 15Q, audio alert 15102 includes a notification, as indicated at 50104: "Move device to detect a plane." From Figure 15Q to Figure 15R, device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in field of view 6036 of the cameras). As a result of detecting the movement of device 100, calibration user interface object 12014 is displayed, as indicated in Figure 15S.

In Figure 15S, audio alert 15106 includes a notification, as indicated at 50108: "Move device to detect a plane." In Figures 15S-15T, calibration user interface object 12014 rotates as device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in field of view 6036 of the cameras). In Figure 15T, movement sufficient for a plane corresponding to virtual object 11002 to be detected in field of view 6036 of the cameras has occurred, and audio alert 15110 includes a notification, as indicated at 50112: "Plane detected." In Figures 15U-15V, the semi-transparency of virtual object 11002 is reduced, and virtual object 11002 is placed on the detected plane.

In Figure 15V, audio alert 15114 includes a notification, as indicated at 50116: "Chair is now projected in the world, 100% visible, occupying 10% of the screen." A tactile output generator outputs a tactile output (as indicated at 15118) indicating that virtual object 11002 has been placed on the plane. Virtual object 11002 is displayed at a fixed position relative to physical environment 5002.

In Figures 15V-15W, device 100 moves relative to physical environment 5002 (e.g., as indicated by the changed position of table 5004 in field of view 6036 of the cameras) such that virtual object 11002 is no longer visible in field of view 6036 of the cameras. Because virtual object 11002 has moved out of field of view 6036 of the cameras, audio alert 15122 includes a notification, as indicated at 50124: "Chair is not on the screen."

In Figures 15W-15X, device 100 has moved relative to physical environment 5002 such that, in Figure 15X, virtual object 11002 is again visible in field of view 6036 of the cameras. Because virtual object 11002 has moved into field of view 6036 of the cameras, audio alert 15118 is generated, which includes a notification, as indicated at 50120: "Chair is now projected in the world, 100% visible, occupying 10% of the screen."

In Figures 15X-15Y, device 100 has moved relative to physical environment 5002 (e.g., such that, in Figure 15Y, device 100 is "closer to" virtual object 11002 as projected in field of view 6036 of the cameras, and virtual object 11002 is only partially visible in field of view 6036 of the cameras). Because virtual object 11002 has moved partially out of field of view 6036 of the cameras, audio alert 15126 includes a notification, as indicated at 50128: "Chair 90% visible, occupying 20% of the screen."
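The figures spoken in these notifications ("90% visible, occupying 20% of the screen") could plausibly be derived from the intersection of the object's screen-space bounding box with the display bounds. The following sketch illustrates one such calculation under that assumption; the function name and box representation are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: derive "percent visible" and "percent of screen occupied"
# from an object's screen-space bounding box and the screen bounds.
def visibility_stats(obj_box, screen_box):
    """obj_box / screen_box: (left, top, right, bottom).
    Returns (fraction of the object inside the screen,
             fraction of the screen covered by the object)."""
    left = max(obj_box[0], screen_box[0])
    top = max(obj_box[1], screen_box[1])
    right = min(obj_box[2], screen_box[2])
    bottom = min(obj_box[3], screen_box[3])
    inter = max(0, right - left) * max(0, bottom - top)
    obj_area = (obj_box[2] - obj_box[0]) * (obj_box[3] - obj_box[1])
    screen_area = (screen_box[2] - screen_box[0]) * (screen_box[3] - screen_box[1])
    return inter / obj_area, inter / screen_area

# A 100x100 object hanging 10 units off the left edge of a 200x250 screen:
visible, coverage = visibility_stats((-10, 0, 90, 100), (0, 0, 200, 250))
assert round(visible, 2) == 0.90   # 90% visible
assert round(coverage, 2) == 0.18  # occupies 18% of the screen
```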

In some embodiments, an input provided at a position corresponding to virtual object 11002 causes an audio message that includes verbal information about virtual object 11002 to be provided. In contrast, when an input is provided at a position away from virtual object 11002 and the controls, no audio message that includes verbal information about virtual object 11002 is provided. In Figure 15Z, audio output 15130 (e.g., a "click" or "buzz") occurs, indicating that contact 15132 has been detected at a position that does not correspond to the position of a control or of virtual object 11002 in the user interface. In Figure 15AA, an input by contact 15134 is detected at a position corresponding to the position of virtual object 11002. In response to the input, audio alert 15136, corresponding to virtual object 11002 (e.g., indicating the state of virtual object 11002), is generated, which includes a notification, as indicated at 50138: "Chair 90% visible, occupying 20% of the screen."

Figures 15AB-15AI illustrate inputs for selecting and performing operations in the switch control mode while a user interface that includes field of view 6036 of the cameras is displayed.

In Figure 15AB, an input by contact 15140 (e.g., a rightward swipe along the path indicated by arrow 15142) is detected. In response to the input, an operation is selected, as indicated in Figure 15AC.

In Figure 15AC, rightward lateral movement control 15144 is displayed (e.g., to indicate that the currently selected operation is an operation for moving virtual object 11002 to the right). Audio alert 15146 includes a notification, as indicated at 15148: "Selected: move right button." An input by contact 15150 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., virtual object 11002 is moved to the right in field of view 6036 of the cameras, as indicated in Figure 15AD).

In Figure 15AD, the movement of virtual object 11002 is reported as audio alert 15152, which includes a notification, as indicated at 15154: "Chair 100% visible, occupying 30% of the screen."

In Figure 15AE, an input by contact 15156 (e.g., a rightward swipe along the path indicated by arrow 15158) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15AF, leftward lateral movement control 15160 is displayed (e.g., to indicate that the currently selected operation is an operation for moving virtual object 11002 to the left). Audio alert 15162 includes a notification, as indicated at 15164: "Selected: move left." An input by contact 15166 (e.g., a rightward swipe along the path indicated by arrow 15168) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15AG, clockwise rotation control 15170 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating virtual object 11002 clockwise). Audio alert 15172 includes a notification, as indicated at 15174: "Selected: rotate clockwise." An input by contact 15176 (e.g., a rightward swipe along the path indicated by arrow 15178) is detected. In response to the input, the selected operation advances to the next operation.

In Figure 15AH, counterclockwise rotation control 15180 is displayed (e.g., to indicate that the currently selected operation is an operation for rotating virtual object 11002 counterclockwise). Audio alert 15182 includes a notification, as indicated at 15184: "Selected: rotate counterclockwise." An input by contact 15186 (e.g., a double-tap input) is detected. In response to the input, the selected operation is performed (e.g., virtual object 11002 is rotated counterclockwise, as indicated in Figure 15AI).

In Figure 15AI, audio alert 15190 includes a notification, as indicated at 15164: "Chair rotated counterclockwise 5 degrees. Chair is now rotated zero degrees relative to the screen."

在一些实施方案中,在对象(例如,虚拟对象11002)的至少一个表面(例如,下侧表面)上生成反射。使用设备100的一个或多个相机所捕获的图像数据生成反射。例如,反射基于与在一个或多个相机的视场6036中检测到的水平平面(例如,地板平面5038)对应的捕获图像数据(例如,图像、一组图像和/或视频)的至少一部分。在一些实施方案中,生成反射包括生成包括捕获的图像数据的球形模型(例如,通过将捕获的图像数据映射在虚拟球体的模型上)。In some embodiments, reflections are generated on at least one surface (eg, an underside surface) of an object (eg, virtual object 11002). Reflections are generated using image data captured by one or more cameras ofdevice 100 . For example, the reflection is based on at least a portion of captured image data (eg, an image, set of images, and/or video) corresponding to a horizontal plane (eg, floor plane 5038 ) detected in the field ofview 6036 of the one or more cameras . In some embodiments, generating the reflection includes generating a spherical model including the captured image data (eg, by mapping the captured image data on the model of a virtual sphere).

In some embodiments, the reflection generated on a surface of the object includes a reflection gradient (e.g., such that portions of the surface closer to the plane have a higher reflectivity magnitude than portions of the surface farther from the plane). In some embodiments, the reflectivity magnitude of the reflection generated on a surface of the object is based on reflectivity values of a texture that corresponds to the surface. For example, no reflection is generated at non-reflective portions of the surface.
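The gradient rule above can be sketched as a per-point magnitude: full strength at the plane, fading with distance, scaled by the texture's reflectance. The linear falloff and the extent value are illustrative assumptions, not the device's actual implementation:

```python
def reflection_magnitude(distance_from_plane: float,
                         texture_reflectance: float,
                         gradient_extent: float = 0.5) -> float:
    """Reflectivity magnitude at a surface point: strongest at the detected
    plane, zero beyond gradient_extent, scaled by texture reflectance."""
    falloff = max(0.0, 1.0 - distance_from_plane / gradient_extent)
    # A non-reflective texture region (reflectance 0) yields no reflection.
    return texture_reflectance * falloff
```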

In some embodiments, the reflection is adjusted over time. For example, the reflection is adjusted as input for moving and/or scaling the object is received (e.g., as the object moves, the reflection of the object is adjusted to the portion of the reflection plane at a location that corresponds to the object). In some embodiments, the reflection is not adjusted while the object is rotated (e.g., about the z-axis).

In some embodiments, no reflection is generated on a surface of the object before the object is displayed at a determined location (e.g., on a plane, detected in the field of view 6036 of the cameras, that corresponds to the object). For example, no reflection is generated on a surface of the object while a translucent representation of the object is displayed (e.g., as described with reference to Figures 11G-11H) and/or while calibration is being performed (e.g., as described with reference to Figures 12B-12I).

In some embodiments, a reflection of the object is generated on one or more planes detected in the field of view 6036 of the cameras. In some embodiments, no reflection of the object is generated in the field of view 6036 of the cameras.

Figures 16A-16G are flow diagrams illustrating method 16000 of displaying a virtual object with different visual properties, depending on whether object placement criteria are met, in a user interface that includes a field of view of one or more cameras. Method 16000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) with a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves as both display generation component and touch-sensitive surface), and one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface). In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on or integrated with the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 16000 are, optionally, combined and/or the order of some operations is, optionally, changed.

The device receives (16002) (e.g., while displaying a staging user interface that includes a movable representation of the virtual object, and before displaying the field of view of the cameras) a request to display a virtual object (e.g., a representation of a three-dimensional model) in a first user interface region (e.g., an augmented reality viewer interface) that includes at least a portion of a field of view of the one or more cameras (e.g., the request is an input by a contact detected on a representation of the virtual object on a touch-screen display, or by a contact detected on an affordance (e.g., a tap on an "AR View" or "World View" button) that is displayed concurrently with the representation of the virtual object, where the affordance is configured to trigger display of the AR view when invoked by a first contact). For example, the request is an input to display virtual object 11002 in the field of view 6036 of the one or more cameras, as described with reference to Figure 11F.

In response to the request to display the virtual object in the first user interface region (e.g., a request to display the virtual object in a view of the physical environment surrounding the device), the device displays (16004), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras that is included in the first user interface region (e.g., the field of view of the cameras is displayed in response to the request to display the virtual object in the first user interface region), where the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located. For example, as described with reference to Figure 11G, virtual object 11002 is displayed in the field of view 6036 of the cameras, which is a view of the physical environment 5002 in which the cameras are located. Displaying the representation of the virtual object includes: in accordance with a determination that object placement criteria are not met, where the object placement criteria require that a placement location (e.g., a plane) for the virtual object be identifiable in the field of view of the cameras in order for the object placement criteria to be met (e.g., the object placement criteria are not met when the device has not yet identified a location or plane for placing the virtual object relative to the field of view of the cameras in the first user interface region (e.g., plane identification is still in progress, or there is not enough image data to identify a plane)), displaying the representation of the virtual object with a first set of visual properties (e.g., a first translucency level, or a first brightness level, or a first saturation level, etc.) and with a first orientation that is independent of which portion of the physical environment is displayed in the field of view of the cameras (e.g., the virtual object floats over the field of view of the cameras with an orientation relative to a predefined plane that is independent of the physical environment (e.g., the orientation set in the staging view) and independent of changes occurring in the field of view of the cameras (e.g., changes caused by movement of the device relative to the physical environment)). For example, in Figures 11G-11H, a translucent version of virtual object 11002 is displayed because a placement location for virtual object 11002 has not yet been identified in the field of view 6036 of the cameras. As the device moves (as shown from Figure 11G to Figure 11H), the orientation of virtual object 11002 does not change. In some embodiments, the object placement criteria include a requirement that the field of view be stable and provide a stationary view of the physical environment (e.g., the cameras move by less than a threshold amount during at least a threshold amount of time, and/or at least a predetermined amount of time has elapsed since the request was received, and/or the cameras have been calibrated for plane detection after sufficient prior movement of the device). In accordance with a determination that the object placement criteria are met (e.g., the object placement criteria are met when the device has identified a location or plane for placing the virtual object relative to the field of view of the cameras in the first user interface), the device displays the representation of the virtual object with a second set of visual properties (e.g., a second translucency level, or a second brightness level, or a second saturation level, etc.), different from the first set of visual properties, and with a second orientation that corresponds to a plane in the physical environment detected in the field of view of the cameras. For example, in Figure 11I, a non-translucent version of virtual object 11002 is displayed because a placement location for virtual object 11002 (e.g., a plane corresponding to floor surface 5038 in physical environment 5002) has been identified in the field of view 6036 of the cameras. The orientation (e.g., the position on touch-screen display 112) of virtual object 11002 has changed from the first orientation shown in Figure 11H to the second orientation shown in Figure 11I. As the device moves (as shown from Figure 11I to Figure 11J), the orientation of virtual object 11002 changes (because virtual object 11002 is now displayed with a fixed orientation relative to physical environment 5002). Displaying the virtual object with either the first set of visual properties or the second set of visual properties, depending on whether the object placement criteria are met, provides visual feedback to the user (e.g., indicating that the request to display the virtual object has been received, but that additional time and/or calibration information is needed before the virtual object can be placed in the field of view of the cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and avoid attempting to provide inputs for manipulating the virtual object before the object is placed with the second orientation corresponding to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
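The two display states described above can be sketched as a simple state selection. The field names and the 0.5 opacity value are illustrative assumptions (the patent specifies only that the first state is more translucent and not anchored to a plane):

```python
from dataclasses import dataclass

@dataclass
class VisualState:
    opacity: float        # 1.0 = fully opaque
    world_anchored: bool  # True once orientation tracks the detected plane

def display_state(placement_criteria_met: bool) -> VisualState:
    """Select the set of visual properties for the virtual object."""
    if placement_criteria_met:
        # Second set: opaque, fixed orientation relative to the physical plane.
        return VisualState(opacity=1.0, world_anchored=True)
    # First set: translucent, floating with a screen-relative orientation.
    return VisualState(opacity=0.5, world_anchored=False)
```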

In some embodiments, while displaying the representation of the virtual object with the first set of visual properties and the first orientation (e.g., while the virtual object hovers, in a translucent state, over the view of the physical environment surrounding the device), the device detects (16006) that the object placement criteria are met (e.g., a plane for placing the virtual object is identified). Detecting that the object placement criteria are met while the virtual object is displayed with the first set of visual properties (e.g., in a translucent state), without requiring further user input to initiate the detection, reduces the number of inputs needed for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to detecting that the object placement criteria are met, the device displays (16008), via the display generation component, an animated transition that shows the representation of the virtual object moving (e.g., rotating, scaling, translating, and/or a combination of the above) from the first orientation to the second orientation, and changing from having the first set of visual properties to having the second set of visual properties. For example, once a plane for placing the virtual object is identified in the field of view of the cameras, the virtual object is placed onto the plane while its orientation, size, and translucency (and so on) are visually adjusted. Displaying an animated transition from the first orientation to the second orientation (e.g., without requiring further user input to reorient the virtual object in the first user interface) reduces the number of inputs needed for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
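The animated transition above amounts to interpolating each visual property between the hovering state and the placed state as animation progress runs from 0 to 1. A minimal sketch, with illustrative property names and linear interpolation as an assumed easing:

```python
from dataclasses import dataclass

@dataclass
class TransitionState:
    opacity: float
    rotation_degrees: float

def _lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def transition_state(t: float, start: TransitionState,
                     end: TransitionState) -> TransitionState:
    """State of the drop-to-plane animation at progress t in [0, 1]."""
    return TransitionState(
        opacity=_lerp(start.opacity, end.opacity, t),
        rotation_degrees=_lerp(start.rotation_degrees, end.rotation_degrees, t),
    )
```

At t = 0 the object is still translucent and in the staging orientation; at t = 1 it is opaque and aligned with the detected plane.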

In some embodiments, detecting that the object placement criteria are met includes one or more of the following (16010): detecting that a plane has been identified in the field of view of the one or more cameras; detecting movement between the device and the physical environment that is below a threshold amount of movement during at least a threshold amount of time (e.g., resulting in a substantially stationary view of the physical environment in the field of view of the cameras); and detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface region was received. Detecting that the object placement criteria are met (e.g., by detecting a plane in the field of view of the one or more cameras, without requiring user input to detect the plane) reduces the number of inputs needed for object placement. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
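One possible policy combining the three signals named above is to require all of them; the patent allows any one or more, and the threshold values here are assumptions for illustration only:

```python
def placement_criteria_met(plane_identified: bool,
                           recent_movement: float,
                           seconds_since_request: float,
                           movement_threshold: float = 0.01,
                           min_seconds: float = 0.5) -> bool:
    """Example policy: a plane is found, the device has been nearly
    stationary over the stability window, and enough time has elapsed."""
    return (plane_identified
            and recent_movement < movement_threshold
            and seconds_since_request >= min_seconds)
```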

In some embodiments, while the representation of the virtual object is displayed with the first set of visual properties and the first orientation over a first portion of the physical environment captured in the field of view of the one or more cameras (e.g., the first portion of the physical environment is visible to the user through the translucent virtual object) (e.g., while the virtual object hovers, in a translucent state, over the view of the physical environment surrounding the device), the device detects (16012) first movement of the one or more cameras (e.g., rotation and/or translation of the device relative to the physical environment surrounding the device). For example, in Figures 11G-11H, while a translucent representation of virtual object 11002 is displayed, the one or more cameras move (as indicated, for example, by the changed position of table 5004 in the field of view 6036 of the cameras). The walls and table of the physical environment that are captured in the field of view 6036 of the cameras and displayed in the user interface are visible through translucent virtual object 11002. In response to detecting the first movement of the one or more cameras, the device displays (16014) the virtual object with the first set of visual properties and the first orientation over a second portion of the physical environment captured in the field of view of the one or more cameras, where the second portion of the physical environment is different from the first portion of the physical environment. For example, while the translucent version of the virtual object is displayed hovering over the physical environment shown in the field of view of the cameras, the view of the physical environment within the field of view of the cameras (e.g., behind the translucent virtual object) shifts and scales as the device moves relative to the physical environment. Thus, during the movement of the device, the translucent version of the virtual object becomes overlaid on a different portion of the physical environment represented in the field of view, as a result of the translation and scaling of the view of the physical environment within the field of view of the cameras. For example, in Figure 11H, the field of view 6036 of the cameras shows a second portion of physical environment 5002 that is different from the first portion of physical environment 5002 shown in Figure 11G. The orientation of the translucent representation of virtual object 11002 does not change with the movement of the one or more cameras that occurs from Figure 11G to Figure 11H. Displaying the virtual object with the first orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., indicating that the virtual object has not yet been placed at a fixed location relative to the physical environment, and therefore does not move as the portion of the physical environment captured in the field of view of the cameras changes with the movement of the cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide inputs for manipulating the virtual object before the object is placed with the second orientation corresponding to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while the representation of the virtual object is displayed with the second set of visual properties and the second orientation over a third portion of the physical environment captured in the field of view of the one or more cameras (e.g., a direct view of the third portion of the physical environment (e.g., a portion of the detected plane that supports the virtual object) is blocked by the virtual object) (e.g., after the object placement criteria have been met and the virtual object has been placed on a plane detected in the physical environment in the field of view of the cameras), the device detects (16016) second movement of the one or more cameras (e.g., rotation and/or translation of the device relative to the physical environment surrounding the device). For example, in Figures 11I-11J, while a non-translucent representation of virtual object 11002 is displayed, the one or more cameras move (as indicated, for example, by the changed position of table 5004 in the field of view 6036 of the cameras). In response to detecting the second movement of the device, while the physical environment captured in the field of view of the one or more cameras moves (e.g., shifts and scales) in accordance with the second movement of the device, and while the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the cameras, the device maintains (16018) display of the representation of the virtual object with the second set of visual properties and the second orientation over the third portion of the physical environment captured in the field of view of the one or more cameras. For example, after the non-translucent version of the virtual object settles into a stationary position on a plane detected in the physical environment shown in the field of view of the cameras, the position and orientation of the virtual object are fixed relative to the physical environment within the field of view of the cameras, and as the device moves relative to the physical environment, the virtual object shifts and scales along with the physical environment in the field of view of the cameras (e.g., when the movement of the cameras occurs from Figure 11I to Figure 11J, the non-translucent representation of virtual object 11002 remains fixed in its orientation relative to the floor plane in physical environment 5002). Maintaining display of the virtual object with the second orientation in response to detecting movement of the one or more cameras provides visual feedback to the user (e.g., indicating that the virtual object has been placed at a fixed location relative to the physical environment, and therefore moves as the portion of the physical environment captured in the field of view of the cameras changes with the movement of the cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs for a virtual object that has been placed with the second orientation corresponding to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
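The contrast between the hovering and placed states reduces to two anchoring rules: before placement, the object keeps a constant offset in the view; after placement, its world position is fixed, so its offset in the view changes by the inverse of the camera's movement. A one-dimensional sketch under those stated assumptions (not the device's actual tracking math):

```python
def screen_locked_offset(fixed_offset: float, camera_position: float) -> float:
    """Before placement: the object keeps a constant offset in the view,
    regardless of how the camera moves."""
    return fixed_offset  # camera_position intentionally unused

def world_locked_offset(world_position: float, camera_position: float) -> float:
    """After placement: the object's world position is fixed, so its offset
    in the view shifts opposite to the camera's movement."""
    return world_position - camera_position
```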

In some embodiments, in accordance with a determination that the object placement criteria are met (e.g., the object placement criteria are met when the device has identified a location or plane for placing the virtual object relative to the field of view of the cameras in the first user interface), the device generates (16020) a tactile output (e.g., using one or more tactile output generators of the device) in conjunction with displaying the representation of the virtual object with the second set of visual properties (e.g., a reduced translucency level, or a higher brightness level, or a higher saturation level, etc.) and with the second orientation that corresponds to the plane in the physical environment detected in the field of view of the cameras (e.g., generation of the tactile output is synchronized with completion of the transition to the non-translucent appearance of the virtual object and completion of the rotation and translation of the virtual object to a landing position resting on the plane detected in the physical environment). For example, as shown in Figure 11I, a tactile output, as indicated at 11010, is generated in conjunction with displaying a non-translucent representation of virtual object 11002 attached to the plane (e.g., floor surface 5038) that corresponds to virtual object 11002. Generating a tactile output in accordance with a determination that the object placement criteria are met provides improved haptic feedback to the user (e.g., indicating that the operation for placing the virtual object was performed successfully). Providing improved feedback to the user enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the object placement criteria have been met, without cluttering the user interface with displayed information) and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while displaying the representation of the virtual object with the second set of visual properties and with the second orientation that corresponds to the plane in the physical environment detected in the field of view of the one or more cameras, the device receives (16022) an update regarding at least the position or orientation of the plane in the physical environment detected in the field of view of the cameras (e.g., the updated plane position and orientation are the result of a more accurate computation, or of a more time-consuming computation method (e.g., with fewer approximations, etc.), based on additional data accumulated after the initial plane detection result was used to place the virtual object). In response to receiving the update regarding at least the position or orientation of the plane in the physical environment detected in the field of view of the cameras, the device adjusts (16024) at least the position and/or orientation of the representation of the virtual object in accordance with the update (e.g., gradually moving (e.g., translating and rotating) the virtual object toward the updated plane). Adjusting the position and/or orientation of the virtual object in response to receiving an update regarding a plane in the physical environment (e.g., without requiring user input for placing the virtual object relative to the plane) reduces the number of inputs needed to adjust the virtual object. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
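The "gradually moving the virtual object toward the updated plane" behavior can be sketched as a per-frame correction that closes a fixed fraction of the remaining gap, so the fix-up is smooth rather than an abrupt jump. The fraction value and one-dimensional height model are illustrative assumptions:

```python
def step_toward_updated_plane(current_height: float,
                              updated_plane_height: float,
                              fraction: float = 0.25) -> float:
    """One frame of gradual reconciliation with an updated plane estimate:
    move a fixed fraction of the remaining distance each step."""
    return current_height + (updated_plane_height - current_height) * fraction
```

Applied repeatedly, the object converges to the updated plane height without a visible snap.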

In some embodiments, the first set of visual properties includes (16026) a first size and a first translucency level (e.g., before dropping into the AR view, the object has a fixed size relative to the display and a fixed, high translucency level), and the second set of visual properties includes (16028) a second size that is different from the first size (e.g., once dropped into the AR view, the object is displayed with a simulated physical size that is based on its dimensions, and at a landing position in the physical environment) and a second translucency level that is lower than the first translucency level (e.g., more opaque than the first translucency level; the object is no longer translucent in the AR view). For example, in Figure 11H, the translucent representation of virtual object 11002 is shown with a first size, and in Figure 11I, the non-translucent representation of virtual object 11004 is shown with a second (smaller) size. Displaying the virtual object with either the first size and first translucency level or the second size and second translucency level, depending on whether the object placement criteria are met, provides visual feedback to the user (e.g., indicating that the request to display the virtual object has been received, but that additional time and/or calibration information is needed before the virtual object can be placed in the field of view of the cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and avoid attempting to provide inputs for manipulating the virtual object before the object is placed with the second orientation corresponding to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the request to display the virtual object in the first user interface region (e.g., the AR view), which includes at least a portion of the field of view of the one or more cameras, is received (16030) while the virtual object is displayed in a respective user interface (e.g., a staging user interface) that does not include at least a portion of the field of view of the one or more cameras (e.g., while the virtual object is oriented relative to a virtual stage whose orientation is independent of the physical environment of the device). The first orientation corresponds to the orientation with which the virtual object was displayed in the respective user interface when the request was received. For example, as described with reference to Figure 11F, while staging user interface 6010 (which does not include the field of view of the cameras) is displayed, a request to display virtual object 11002 in a user interface that includes the field of view 6036 of the cameras is received. The orientation of virtual object 11002 in Figure 11G, in which virtual object 11002 is displayed in the user interface that includes the field of view 6036 of the cameras, corresponds to the orientation of virtual object 11002 in Figure 11F, in which virtual object 11002 is displayed in staging user interface 6010. Displaying the virtual object in the first user interface (e.g., the displayed augmented reality view) with an orientation that corresponds to the orientation of the virtual object when it was displayed in the (previously displayed) interface (e.g., the staging user interface) provides visual feedback to the user (e.g., indicating that object manipulation inputs provided while the staging user interface was displayed can be used to establish the orientation of the object in the AR view). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and avoid attempting to provide inputs for manipulating the virtual object before the object is placed with the second orientation corresponding to the plane), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the first orientation corresponds (16032) to a predefined orientation (e.g., a default orientation, such as the orientation in which the virtual object is initially displayed in the respective user interface that does not include at least a portion of the field of view of the one or more cameras). Displaying the virtual object with the first set of visual properties and in the predefined orientation in the first user interface (e.g., the displayed augmented reality view) reduces power usage and extends the battery life of the device (e.g., by allowing a pre-generated translucent representation of the virtual object to be displayed rather than rendering the translucent representation according to an orientation established in the staging user interface).

In some embodiments, while the virtual object is displayed in the first user interface region (e.g., the AR view) with the second set of visual properties and in the second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras, the device detects (16034) a request (e.g., resulting from a zoom input, such as a pinch or expand gesture directed at the virtual object) to change the simulated physical size of the virtual object from a first simulated physical size to a second simulated physical size relative to the physical environment captured in the field of view of the one or more cameras (e.g., from 80% of a default size to 120% of the default size, or vice versa). For example, the input for reducing the simulated physical size of virtual object 11002 is a pinch gesture, as described with reference to FIGS. 11N-11P. In response to detecting the request to change the simulated physical size of the virtual object, the device gradually changes (16036) the displayed size of the representation of the virtual object in the first user interface region in accordance with the gradual change of the simulated physical size of the virtual object from the first simulated physical size to the second simulated physical size (e.g., the displayed size of the virtual object grows or shrinks while the displayed size of the physical environment captured in the field of view of the one or more cameras remains unchanged). While the displayed size of the representation of the virtual object in the first user interface region is gradually changing, in accordance with a determination that the simulated physical size of the virtual object has reached a predefined simulated physical size (e.g., 100% of the default size), the device generates a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size. For example, as described with reference to FIGS. 11N-11P, in response to the pinch gesture input, the displayed size of the representation of virtual object 11002 is gradually reduced. In FIG. 11O, when the displayed size of the representation of virtual object 11002 reaches 100% of the size of virtual object 11002 (e.g., the size of virtual object 11002 when initially displayed in the user interface that includes the field of view 6036 of the one or more cameras, as indicated in FIG. 11I), a tactile output is generated, as indicated at 11024. Generating a tactile output in accordance with a determination that the simulated physical size of the virtual object has reached the predefined simulated physical size provides the user with feedback (e.g., indicating that no further input is needed to return the simulated size of the virtual object to the predefined size). Providing improved tactile feedback enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the virtual object has reached its predefined simulated physical size, without cluttering the user interface with displayed information), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
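The behavior of steps 16034-16036 above — a continuously updated display size, with a single haptic pulse when the gradually changing simulated size crosses the predefined 100% mark — can be sketched as follows. This is a minimal illustration, not the actual implementation; the function name, the scale representation (1.0 = 100% of the default size), and the crossing test are assumptions.

```python
def resize_step(previous_scale, new_scale, predefined_scale=1.0):
    """One update of a pinch/expand gesture.

    Returns (display_scale, fire_haptic): the new display scale for the
    virtual object, and whether a tactile output should be generated
    because the simulated size reached or crossed the predefined size.
    """
    low, high = sorted((previous_scale, new_scale))
    # Fire only when this step reaches/crosses the predefined size,
    # not when the gesture merely continues away from it.
    fire_haptic = previous_scale != predefined_scale and low <= predefined_scale <= high
    return new_scale, fire_haptic
```

For instance, a pinch that shrinks the object from 120% to 90% of its default size passes through 100% and would trigger the tactile output, whereas a step from 90% to 80% would not.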

In some embodiments, while the virtual object is displayed in the first user interface region (e.g., the AR view) at a second simulated physical size that is different from the predefined simulated physical size (e.g., 120% of the default size, or 80% of the default size, as a result of a zoom input, such as a pinch or expand gesture directed at the virtual object), the device detects (16038) a request to return the virtual object to the predefined simulated physical size (e.g., detecting a tap or double tap on the touch screen (e.g., on the virtual object, or alternatively, outside the virtual object)). For example, after a pinch input has reduced the size of virtual object 11002 (as described with reference to FIGS. 11N-11P), a double-tap input is detected at a location corresponding to virtual object 11002 (as described with reference to FIG. 11R).
In response to detecting the request to return the virtual object to the predefined simulated physical size, the device changes (16040) the displayed size of the representation of the virtual object in the first user interface region in accordance with the change of the simulated physical size of the virtual object to the predefined simulated physical size (e.g., the displayed size of the virtual object grows or shrinks while the displayed size of the physical environment captured in the field of view of the one or more cameras remains unchanged). For example, in response to the double-tap input described with reference to FIG. 11R, the size of virtual object 11002 returns to its size as displayed in FIG. 11I (the size of virtual object 11002 when initially displayed in the user interface that includes the field of view 6036 of the one or more cameras). In some embodiments, in accordance with a determination that the simulated physical size of the virtual object has reached the predefined simulated physical size (e.g., 100% of the default size), the device generates a tactile output to indicate that the simulated physical size of the virtual object has reached the predefined simulated physical size. Changing the displayed size of the virtual object to the predefined size in response to detecting the request to return the virtual object to the predefined simulated physical size (e.g., by providing an option for adjusting the displayed size precisely to the predefined simulated physical size, rather than requiring the user to estimate when the input provided to adjust the displayed size is sufficient to cause the virtual object to be displayed at the predefined simulated physical size) reduces the number of inputs needed to display the object at the predefined size. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
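The return-to-predefined-size change of step 16040 is typically animated rather than instantaneous. A sketch of such an animation is shown below; the linear easing, the step count, and the function name are illustrative assumptions, not details from the specification.

```python
def snap_back_frames(current_scale, predefined_scale=1.0, steps=5):
    """Intermediate display scales for animating the virtual object
    back to its predefined simulated size after a double tap.

    Returns a list of `steps` scales ending exactly at the predefined
    scale (rounded to avoid floating-point drift).
    """
    delta = (predefined_scale - current_scale) / steps
    return [round(current_scale + delta * (i + 1), 6) for i in range(steps)]
```

For example, from 80% of the default size the sketch yields a monotonically increasing sequence of scales that ends at 1.0 (100%), at which point the tactile output described above could be generated.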

In some embodiments, the device selects the plane used for setting the second orientation of the representation of the virtual object that has the second set of visual properties in accordance with a respective position and orientation of the one or more cameras relative to the physical environment (e.g., the current position and orientation when the object-placement criteria are met), wherein selecting the plane includes (16042): in accordance with a determination that the object-placement criteria are met while the representation of the virtual object is displayed over a first portion of the physical environment captured in the field of view of the one or more cameras (e.g., the base of the translucent object overlaps a plane in the first portion of the physical environment) (e.g., as a result of the device pointing in a first direction in the physical environment), selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with a greater proximity between the base of the object on the display and the first plane, and a greater proximity in the physical world between the first plane and the first portion of the physical environment) as the plane for setting the second orientation of the representation of the virtual object that has the second set of visual properties; and in accordance with a determination that the object-placement criteria are met while the representation of the virtual object is displayed over a second portion of the physical environment captured in the field of view of the one or more cameras (e.g., the base of the translucent object overlaps a plane in the second portion of the physical environment) (e.g., as a result of the device pointing in a second direction in the physical environment), selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras (e.g., in accordance with a greater proximity between the base of the object on the display and the second plane, and a greater proximity in the physical world between the second plane and the second portion of the physical environment) as the plane for setting the second orientation of the representation of the virtual object that has the second set of visual properties, wherein the first portion of the physical environment is different from the second portion of the physical environment, and the first plane is different from the second plane.
Selecting the first plane or the second plane as the plane relative to which the virtual object will be set (e.g., without requiring user input to specify which of the many detected planes will be the plane relative to which the virtual object is set) reduces the number of inputs needed to select a plane. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
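The proximity-based selection of step 16042 can be sketched as follows. For simplicity of illustration, the object's base and each detected plane are reduced to 2D screen-space points; an actual implementation would compare 3D hit-test results against the detected plane geometry, and the plane data structure here is an assumption.

```python
def select_placement_plane(object_base, detected_planes):
    """Pick, from the planes detected in the cameras' field of view,
    the plane closest to the displayed base of the virtual object.

    object_base: (x, y) screen-space point of the object's base.
    detected_planes: list of dicts with an "id" and a "center" (x, y).
    """
    def distance_to(plane):
        (x1, y1), (x2, y2) = object_base, plane["center"]
        # Euclidean distance between the object's base and the plane center.
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    return min(detected_planes, key=distance_to)
```

Pointing the device at a different portion of the physical environment moves the object's base relative to the detected planes, so the same selection rule yields a different plane without any explicit user choice.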

In some embodiments, while displaying the virtual object with the second set of visual properties and the second orientation in the first user interface region (e.g., the AR view), the device displays (16044) a snapshot affordance (e.g., a camera shutter button). In response to activation of the snapshot affordance, the device captures (16046) a snapshot image of the current view that includes the representation of the virtual object at its placement location in the physical environment in the field of view of the one or more cameras, with the second set of visual properties and the second orientation that corresponds to a plane in the physical environment detected in the field of view of the one or more cameras. Displaying a snapshot affordance for capturing a snapshot image of the current view of the object reduces the number of inputs needed to capture a snapshot image of the object. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the device displays (16048) one or more control affordances in the first user interface region along with the representation of the virtual object that has the second set of visual properties (e.g., an affordance for switching back to the staging user interface, an affordance for exiting the AR viewer, an affordance for capturing a snapshot, etc.). For example, in FIG. 11J, a set of controls including back control 6016, toggle control 6018, and share control 6020 is displayed. While displaying the one or more control affordances with the representation of the virtual object that has the second set of visual properties, the device detects (16050) that control-fade criteria are met (e.g., no user input has been detected on the touch-sensitive surface for a threshold amount of time (e.g., with or without movement of the device and updates to the camera's field of view)). In response to detecting that the control-fade criteria are met, the device ceases (16052) to display the one or more control affordances while continuing to display the representation of the virtual object with the second set of visual properties in the first user interface region that includes the field of view of the one or more cameras.
For example, as described with reference to FIGS. 11K-11L, when no user input is detected for the threshold amount of time, controls 6016, 6018, and 6020 gradually fade out and cease to be displayed. In some embodiments, after the control affordances have faded out, a tap input on the touch-sensitive surface or an interaction with the virtual object causes the device to redisplay the control affordances in the first user interface region concurrently with the representation of the virtual object. Automatically ceasing to display the controls in response to determining that the control-fade criteria are met reduces the number of inputs needed to cease displaying the controls. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
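The control-fade behavior of steps 16050-16052 amounts to an inactivity timeout plus redisplay on interaction, which can be sketched as below. The class name and the 3-second threshold are illustrative assumptions; the specification only requires some threshold amount of time without input.

```python
CONTROL_FADE_TIMEOUT = 3.0  # seconds without input; assumed threshold

class ControlFader:
    """Tracks whether the control affordances should be visible."""

    def __init__(self):
        self.last_input_time = 0.0
        self.controls_visible = True

    def register_input(self, now):
        """A touch input or object interaction resets the timer and
        redisplays the controls if they had faded out."""
        self.last_input_time = now
        self.controls_visible = True

    def tick(self, now):
        """Called periodically; hides the controls once the control-fade
        criteria are met. Returns current visibility."""
        if now - self.last_input_time >= CONTROL_FADE_TIMEOUT:
            self.controls_visible = False
        return self.controls_visible
```

The virtual object's representation is unaffected by this state; only the affordances fade, matching the behavior described for controls 6016, 6018, and 6020.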

In some embodiments, in response to the request to display the virtual object in the first user interface region: before displaying the representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface region, in accordance with a determination that calibration criteria are not met (e.g., because a sufficient amount of images from different viewing angles does not exist to generate dimension and spatial-relationship data for the physical environment captured in the field of view of the one or more cameras), the device displays (16054) a prompt to the user to move the device relative to the physical environment (e.g., displaying a visual prompt to move the device, and optionally displaying a calibration user interface object (e.g., an elastic wireframe ball or cube that moves in accordance with movement of the device) in the first user interface region (e.g., with the calibration user interface object overlaid on a blurred image of the field of view of the one or more cameras), as described in greater detail below with reference to method 17000). Displaying a prompt to the user to move the device relative to the physical environment provides the user with visual feedback (e.g., indicating that movement of the device is needed to obtain information for placing the virtual object in the camera's field of view).
Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
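One way the "sufficient amount of images from different viewing angles" condition behind step 16054 could be approximated is by bucketing the device's yaw samples and requiring several distinct buckets before declaring calibration complete. This is purely a sketch under stated assumptions: the bucket granularity, the required count, and the function name are invented for illustration and are not taken from the specification.

```python
REQUIRED_VIEWPOINTS = 3    # assumed number of distinct viewing angles
BUCKET_DEGREES = 10.0      # assumed angular granularity per viewpoint

def calibration_satisfied(yaw_samples):
    """Rough check of whether the device has been moved enough:
    yaw_samples is a sequence of device yaw angles (degrees) captured
    while the prompt is displayed. Calibration is considered satisfied
    once samples fall into enough distinct angular buckets.
    """
    buckets = {int(yaw // BUCKET_DEGREES) for yaw in yaw_samples}
    return len(buckets) >= REQUIRED_VIEWPOINTS
```

Under this sketch, holding the device still (all samples in one bucket) keeps the prompt on screen, while sweeping the device across the scene satisfies the criteria.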

It should be understood that the particular order in which the operations in FIGS. 16A-16G have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000) are likewise applicable in an analogous manner to method 16000 described above with respect to FIGS. 16A-16G. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 16000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.

FIGS. 17A-17D are flow diagrams illustrating a method 17000 of displaying a calibration user interface object that is dynamically animated in accordance with movement of one or more cameras of a device. Method 17000 is performed at an electronic device (e.g., device 300 of FIG. 3, or portable multifunction device 100 of FIG. 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves as both the display generation component and the touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on a side of the device opposite the display and the touch-sensitive surface), and one or more attitude sensors (e.g., accelerometers, gyroscopes, and/or magnetometers) for detecting changes in the attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angle) and position relative to the surrounding physical environment) of the device that includes the one or more cameras. Some operations in method 17000 are, optionally, combined and/or the order of some operations is, optionally, changed.

The device receives (17002) a request to display an augmented reality view of a physical environment (e.g., the physical environment surrounding the device that includes the one or more cameras) in a first user interface region that includes a representation of the field of view of the one or more cameras (e.g., the field of view captures at least a portion of the physical environment). In some embodiments, the request is a tap input, detected on a button, for switching from a staging view of a virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to a representation of the virtual object in a two-dimensional user interface. In some embodiments, the request is an activation of an augmented reality measurement application (e.g., a measurement application that facilitates measurement of the physical environment). For example, the request is a tap input, detected at toggle 6018, for displaying virtual object 11002 in the field of view 6036 of the one or more cameras, as described with reference to FIG. 12A.

In response to receiving the request to display the augmented reality view of the physical environment, the device displays (17004) a representation of the field of view of the one or more cameras (e.g., when the calibration criteria are not met, the device displays a blurred version of the physical environment in the field of view of the one or more cameras). For example, the device displays a blurred representation of the field of view 6036 of the one or more cameras, as shown in FIG. 12E-1.
In accordance with a determination that calibration criteria for the augmented reality view of the physical environment are not met (e.g., because a sufficient amount of image data (e.g., from different viewing angles) does not exist to generate dimension and spatial-relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane corresponding to the virtual object has not been detected in the field of view of the one or more cameras, and/or because sufficient information does not exist to start or continue plane detection based on the image data available from the cameras), the device displays (e.g., via the display generation component, and in the first user interface region that includes the representation of the field of view of the one or more cameras (e.g., a blurred version of the field of view)) a calibration user interface object (e.g., a scanning prompt object, such as an elastic cube or wireframe object) that is dynamically animated in accordance with movement of the one or more cameras in the physical environment. For example, in FIGS. 12E-1 through 12I-1, calibration user interface object 12014 is displayed. Animation of the calibration user interface object in accordance with movement of the one or more cameras is described with reference to, e.g., FIGS. 12E-1 through 12F-1. In some embodiments, when an initial portion of the input corresponding to the request to display the representation of the augmented reality view is received, analysis of the field of view of the one or more cameras occurs to detect one or more planes (e.g., floor, wall, table, etc.) in the field of view of the one or more cameras. In some embodiments, the analysis occurs before the request is received (e.g., while the virtual object is displayed in the staging view).
Displaying the calibration user interface object includes: while displaying the calibration user interface object, detecting, via the one or more attitude sensors, a change in attitude (e.g., position and/or orientation (e.g., rotation, tilt, or yaw angle)) of the one or more cameras in the physical environment; and, in response to detecting the change in attitude of the one or more cameras in the physical environment, adjusting at least one display parameter (e.g., orientation, size, rotation, or position on the display) of the calibration user interface object (e.g., a scanning prompt object, such as an elastic cube or wireframe object) in accordance with the detected change in attitude of the one or more cameras in the physical environment. For example, FIGS. 12E-1 through 12F-1, which correspond to FIGS. 12E-2 through 12F-2, respectively, illustrate lateral movement of device 100 relative to physical environment 5002 and the corresponding change in the displayed field of view 6036 of the one or more cameras of the device. In FIGS. 12E-2 through 12F-2, calibration user interface object 12014 rotates in response to the movement of the one or more cameras.
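The display-parameter adjustment of step 17004 — e.g., rotating the calibration object in proportion to lateral device movement, as illustrated by object 12014 in FIGS. 12E-1 through 12F-1 — can be sketched as a simple mapping. The function name, the movement units, and the degrees-per-unit constant are illustrative assumptions.

```python
def calibration_object_rotation(lateral_movement, degrees_per_unit=180.0):
    """Map accumulated lateral device movement (arbitrary units,
    derived from the attitude sensors) to the rotation display
    parameter (degrees) of the calibration user interface object.

    The rotation wraps at 360 degrees so the object keeps spinning
    as the user continues to move the device.
    """
    return (lateral_movement * degrees_per_unit) % 360.0
```

Feeding the mapping a continuously updated movement value animates the object in lockstep with the device, which is what gives the user feedback that their movement is contributing to calibration.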

While displaying the calibration user interface object (e.g., a scanning prompt object, such as an elastic cube or wireframe object) that moves on the display in accordance with the detected changes in attitude of the one or more cameras in the physical environment, the device detects (17006) that the calibration criteria are met. For example, as described with reference to FIGS. 12E-12J, in response to the movement of the device that occurs from FIG. 12E-1 to FIG. 12I-1, the device determines that the calibration criteria are met.

In response to detecting that the calibration criteria are met, the device ceases (17008) to display the calibration user interface object (e.g., a scanning prompt object, such as an elastic cube or wireframe object). In some embodiments, after the device ceases to display the calibration user interface object, the device displays a representation of the camera's field of view without blurring. In some embodiments, a representation of the virtual object is displayed over the non-blurred representation of the camera's field of view. For example, in FIG. 12J, in response to the movement of the device described with reference to FIGS. 12E-1 through 12I-1, calibration user interface object 12014 is no longer displayed, and virtual object 11002 is displayed over the non-blurred representation 6036 of the camera's field of view. Adjusting a display parameter of the calibration user interface object in accordance with movement of the one or more cameras (e.g., device cameras that capture the device's physical environment) provides the user with visual feedback (e.g., indicating that movement of the device is needed for calibration). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user move the device in a manner that provides the information needed to meet the calibration criteria), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the request to display the augmented reality view of the physical environment (e.g., the physical environment surrounding the device that includes the one or more cameras) in the first user interface region that includes the representation of the field of view of the one or more cameras includes (17010) a request to display a representation of a virtual three-dimensional object (e.g., a virtual object having a three-dimensional model) in the augmented reality view of the physical environment. In some embodiments, the request is a tap input, detected on a button, for switching from a staging view of the virtual object to an augmented reality view of the virtual object. In some embodiments, the request is a selection of an augmented reality affordance displayed next to the representation of the virtual object in a two-dimensional user interface. For example, in FIG. 12A, the input by contact 12002 at a location corresponding to toggle control 6018 is a request to display virtual object 11002 in the user interface that includes the camera's field of view 6036, as shown in FIG. 12B. Displaying the augmented reality view of the physical environment in response to the request to display the virtual object in the augmented reality view reduces the number of inputs needed (e.g., to display both the view of the physical environment and the virtual object).
Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration input), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, after ceasing to display the calibration user interface object (e.g., after the calibration criteria are satisfied), the device displays (17012) a representation of the virtual three-dimensional object in the first user interface region that includes the representation of the field of view of the one or more cameras. In some embodiments, in response to the request, after calibration is complete and the field of view of the cameras is displayed fully in focus, the virtual object drops to a predefined position and/or orientation relative to a predefined plane identified in the field of view of the one or more cameras (e.g., a physical surface, such as a vertical wall or a horizontal floor, that can serve as a support plane for the three-dimensional representation of the virtual object). For example, in Figure 12J, the device has ceased to display calibration user interface object 12014 shown in Figures 12E-12I, and virtual object 11002 is displayed in the user interface that includes field of view 6036 of the cameras. Displaying the virtual object in the displayed augmented reality view after ceasing to display the calibration user interface object provides visual feedback (e.g., to indicate that the calibration criteria have been satisfied). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and refrain from attempting to provide inputs for manipulating the virtual object before the calibration criteria are satisfied), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while displaying the calibration user interface object (e.g., before the calibration criteria are satisfied), the device displays (17014) a representation of the virtual three-dimensional object in the first user interface region (e.g., behind the calibration user interface object), where the representation of the virtual three-dimensional object remains at a fixed location in the first user interface region during the movement of the one or more cameras in the physical environment (e.g., while the calibration user interface object moves in the first user interface region in accordance with the movement of the one or more cameras) (e.g., the virtual three-dimensional object is not placed at a location in the physical environment). For example, in Figures 12E-1 to 12I-1, a representation of virtual object 11002 is displayed while calibration user interface object 12014 is displayed. As device 100, which includes the one or more cameras, moves (e.g., as shown in Figures 12E-1 to 12F-1 and the corresponding Figures 12E-2 to 12F-2), virtual object 11002 remains at a fixed location in the user interface that includes field of view 6036 of the one or more cameras. Displaying the virtual object while displaying the calibration user interface object provides visual feedback (e.g., to indicate the object for which calibration is being performed). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide calibration inputs that correspond to the plane relative to which the virtual object will be placed), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the request to display the augmented reality view of the physical environment (e.g., the physical environment surrounding the device that includes the one or more cameras) in the first user interface region that includes the representation of the field of view of the one or more cameras includes (17016) a request to display the representation of the field of view of the one or more cameras (e.g., along with one or more user interface objects and/or controls (e.g., outlines of planes, objects, pointers, icons, markers, etc.)) without requiring display of a representation of any virtual three-dimensional object (e.g., a virtual object for which a three-dimensional model exists) in the physical environment captured in the field of view of the one or more cameras. In some embodiments, the request is a selection of an augmented reality affordance displayed next to the representation of the virtual object in a two-dimensional user interface. In some embodiments, the request is an activation of an augmented reality measurement application (e.g., a measurement application that facilitates measurement of the physical environment). Requesting display of the representation of the field of view of the one or more cameras without requesting display of a representation of any virtual three-dimensional object provides feedback (e.g., by using the same calibration user interface object to indicate that calibration is needed, regardless of whether a virtual object is displayed). Providing improved feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to receiving the request to display the augmented reality view of the physical environment, the device displays (17018) the representation of the field of view of the one or more cameras (e.g., displaying a blurred version of the physical environment in the field of view of the one or more cameras while the calibration criteria are not satisfied); and, in accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are satisfied (e.g., because a sufficient amount of image data (e.g., from different viewing angles) exists to generate dimension and spatial relationship data for the physical environment captured in the field of view of the one or more cameras, because a plane corresponding to the virtual object has been detected in the field of view of the one or more cameras, and/or because sufficient information exists to start or continue plane detection based on the image data available from the cameras), the device forgoes display of the calibration user interface object (e.g., a scan prompt object, such as an elastic cube or wireframe object). In some embodiments, scanning of the physical environment to detect planes begins while the virtual three-dimensional object is displayed in the staging user interface, which enables the device, in some circumstances (e.g., when the field of view of the cameras has moved sufficiently to provide enough data to detect one or more planes in the physical space), to detect one or more planes in the physical space before displaying the augmented reality view, so that the calibration user interface need not be displayed. Forgoing display of the calibration user interface object in accordance with a determination that the calibration criteria for the augmented reality view of the physical environment are satisfied provides visual feedback to the user (e.g., the absence of the calibration user interface object indicates that the calibration criteria have been satisfied and that no movement of the device is needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid unnecessary movement of the device for calibration purposes), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
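The decision of whether to show or forgo the calibration object can be sketched as a simple predicate over the scan data gathered so far. The `ScanState` structure, the angle threshold, and the rounding heuristic below are illustrative assumptions, not the actual criteria used by any particular implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ScanState:
    """Hypothetical summary of the scan data gathered so far."""
    viewing_angles: List[float] = field(default_factory=list)  # camera angles observed, in degrees
    plane_detected: bool = False

MIN_DISTINCT_ANGLES = 3  # illustrative threshold; real criteria are implementation-defined

def calibration_criteria_met(state: ScanState) -> bool:
    """Criteria are satisfied when a plane corresponding to the virtual object
    has been detected, or when enough image data from distinct viewing angles
    exists to generate dimension and spatial-relationship data."""
    if state.plane_detected:
        return True
    return len({round(a) for a in state.viewing_angles}) >= MIN_DISTINCT_ANGLES

def should_show_calibration_object(state: ScanState) -> bool:
    """The calibration object is displayed only while the criteria are unmet."""
    return not calibration_criteria_met(state)
```

Because the scan may already be complete by the time the augmented reality view is requested (e.g., when scanning began in the staging user interface), the predicate can return `False` immediately and the calibration object is never shown.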

In some embodiments, while displaying the calibration user interface object (e.g., before the calibration criteria are satisfied), the device displays (17020) a text object in the first user interface region (e.g., a textual description of a currently detected error condition and/or a textual prompt requesting a user action (e.g., to correct the detected error condition)), where the text object (e.g., displayed next to the calibration user interface object) provides information about actions the user can take to improve the calibration of the augmented reality view. In some embodiments, the text object provides the user with prompts for movement of the device (e.g., for a currently detected error condition), such as "Too much movement," "Not enough detail," "Move closer," and so on. In some embodiments, the device updates the text object in accordance with the user's actions during the calibration process and new error conditions detected based on those actions. Displaying text while displaying the calibration user interface object provides visual feedback to the user (e.g., a verbal indication of the type of movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
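The error-condition-to-prompt behavior amounts to a lookup that is re-evaluated whenever a new error condition is detected. The condition identifiers below are hypothetical names invented for illustration; the prompt strings echo the examples in the text:

```python
# Hypothetical condition identifiers mapped to the example prompt strings;
# neither set of names is taken from any shipping implementation.
CALIBRATION_PROMPTS = {
    "excessive_motion": "Too much movement",
    "low_detail": "Not enough detail",
    "too_far": "Move closer",
}

def prompt_for_error(error_condition: str) -> str:
    """Text shown next to the calibration object for the currently detected
    error condition; an empty string hides the prompt when no error applies."""
    return CALIBRATION_PROMPTS.get(error_condition, "")
```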

In some embodiments, in response to detecting that the calibration criteria are satisfied (e.g., the criteria are satisfied before the calibration user interface object is displayed, or the criteria are satisfied after the calibration user interface object has been displayed and animated for some period of time), the device displays (17022) (e.g., after ceasing to display the calibration user interface object, if the calibration user interface object was initially displayed) a visual indication of a plane detected in the physical environment captured in the field of view of the one or more cameras (e.g., displaying an outline around the detected plane, or highlighting the detected plane). For example, in Figure 12J, a plane (floor surface 5038) is highlighted to indicate that the plane has been detected in physical environment 5002 captured in displayed field of view 6036 of the one or more cameras. Displaying a visual indication of a detected plane provides visual feedback (e.g., to indicate that a plane has been detected in the physical environment captured by the device cameras). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, in response to receiving the request to display the augmented reality view of the physical environment: in accordance with a determination that the calibration criteria are not satisfied, and before displaying the calibration user interface object, the device displays (17024) (e.g., via the display generation component, in the first user interface region that includes the representation of the field of view of the one or more cameras (e.g., a blurred version of the field of view)) an animated prompt object (e.g., a scan prompt object, such as an elastic cube or wireframe object) that includes a representation of the device moving relative to a representation of a plane (e.g., the movement of the representation of the device relative to the representation of the plane indicates the device movement the user is to perform). For example, the animated prompt object includes representation 12004 of device 100 moving relative to representation 12010 of a plane, as described with reference to Figures 12B-12D. In some embodiments, when the device detects movement of the device, the device ceases to display the animated prompt object (e.g., indicating that the user has begun to move the device in a manner that will allow calibration to proceed). In some embodiments, when the device detects movement of the device, and before calibration is complete, the device replaces the display of the animated prompt object with the calibration user interface object to further guide the user with respect to calibration of the device. For example, as described with reference to Figures 12C-12E, when movement of the device is detected (as shown in Figures 12C-12D), the animated prompt that includes representation 12004 of device 100 ceases to be displayed, and calibration user interface object 12014 is displayed in Figure 12E. Displaying an animated prompt object that includes a representation of the device moving relative to a representation of a plane provides visual feedback to the user (e.g., to show the type of device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user move the device in a manner that provides the information needed to satisfy the calibration criteria), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17026): moving the calibration user interface object by a first amount in accordance with a first magnitude of movement of the one or more cameras in the physical environment; and moving the calibration user interface object by a second amount in accordance with a second magnitude of movement of the one or more cameras in the physical environment, where the first amount is different from (e.g., greater than) the second amount, and the first magnitude of movement is different from (e.g., greater than) the second magnitude of movement (e.g., the first and second magnitudes of movement are measured based on movement in the same direction in the physical environment). Moving the calibration user interface object by an amount that corresponds to the magnitude of movement of the one or more (device) cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17028): in accordance with a determination that the detected change in pose of the one or more cameras corresponds to a first type of movement (e.g., lateral movement, such as leftward, rightward, or side-to-side movement) (and does not correspond to a second type of movement (e.g., vertical movement, such as upward, downward, or up-and-down movement)), moving the calibration user interface object based on the first type of movement (e.g., moving the calibration user interface object in a first manner (e.g., rotating the calibration user interface object about a vertical axis that passes through the calibration user interface object)); and, in accordance with a determination that the detected change in pose of the one or more cameras corresponds to the second type of movement (and does not correspond to the first type of movement), forgoing moving the calibration user interface object based on the second type of movement (e.g., forgoing moving the calibration user interface object in the first manner, or keeping the calibration user interface object stationary). For example, lateral movement of device 100, which includes the one or more cameras (e.g., as described with reference to Figures 12F-1 to 12G-1 and 12F-2 to 12G-2), causes calibration user interface object 12014 to rotate, whereas vertical movement of device 100 (e.g., as described with reference to Figures 12G-1 to 12H-1 and 12G-2 to 12H-2) does not cause calibration user interface object 12014 to rotate. Forgoing movement of the calibration user interface object in accordance with a determination that the detected change in pose of the device cameras corresponds to the second type of movement provides visual feedback (e.g., indicating to the user that the second type of movement of the one or more cameras is not needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing unnecessary inputs), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
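The behavior of operations 17026 and 17028 together can be sketched as a per-frame update in which lateral camera movement rotates the calibration object proportionally to its magnitude while vertical movement is ignored. The scale factor is an illustrative assumption:

```python
def calibration_object_rotation(delta_x: float, delta_y: float,
                                degrees_per_unit: float = 0.5) -> float:
    """Rotation (in degrees, about the object's vertical axis) applied for one
    camera-pose change. Lateral movement (delta_x) drives the rotation, with a
    larger movement magnitude producing proportionally more rotation; vertical
    movement (delta_y) is ignored, so the object stays still during purely
    vertical motion."""
    return delta_x * degrees_per_unit

# Two lateral moves rotate the object; the purely vertical move contributes nothing.
angle = 0.0
for dx, dy in [(10.0, 0.0), (0.0, 25.0), (-4.0, 0.0)]:
    angle += calibration_object_rotation(dx, dy)
```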

In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17030): moving the calibration user interface object (e.g., rotating and/or tilting it) in accordance with the detected change in pose of the one or more cameras in the physical environment without changing a characteristic display location of the calibration user interface object in the first user interface region (e.g., the location of its geometric center, or of an axis of the calibration user interface object, on the display) (e.g., the calibration user interface object is anchored to a fixed location on the display while the physical environment moves underneath the calibration user interface object within the field of view of the one or more cameras). For example, in Figures 12E-1 to 12I-1, calibration user interface object 12014 rotates while remaining at a fixed location relative to display 112. Moving the calibration user interface object without changing its characteristic display location provides visual feedback (e.g., indicating that the calibration user interface object is distinct from a virtual object placed at a location relative to the displayed augmented reality environment). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user input mistakes), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17032): rotating the calibration user interface object about an axis that is perpendicular to a direction of movement of the one or more cameras in the physical environment (e.g., when the device (e.g., including the cameras) moves back and forth in the x-y plane, the calibration user interface object rotates about the z axis; or when the device moves side to side along the x axis (e.g., with the x axis defined as the horizontal direction relative to the physical environment and lying, for example, in the plane of the touch-screen display), the calibration user interface object rotates about the y axis). For example, in Figures 12E-1 to 12G-1, calibration user interface object 12014 rotates about a vertical axis that is perpendicular to the lateral movement of the device shown in Figures 12E-2 to 12G-2. Rotating the calibration user interface object about an axis perpendicular to the movement of the device cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17034): moving the calibration user interface object at a speed determined in accordance with a rate of change detected in the field of view of the one or more cameras (e.g., the speed at which the physical environment moves across the field of view). Moving the calibration user interface object at a speed determined in accordance with the change in pose of the device cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, adjusting the at least one display parameter of the calibration user interface object in accordance with the detected change in pose of the one or more cameras in the physical environment includes (17036): moving the calibration user interface object in a direction determined in accordance with a direction of change detected in the field of view of the one or more cameras (e.g., the direction in which the physical environment moves across the field of view) (e.g., the device rotates the calibration user interface object clockwise for right-to-left movement of the device and counterclockwise for left-to-right movement of the device, or the device rotates the calibration user interface object counterclockwise for right-to-left movement of the device and clockwise for left-to-right movement of the device). Moving the calibration user interface object in a direction determined in accordance with the change in pose of the device cameras provides visual feedback (e.g., indicating to the user that the movement of the calibration user interface object is a guide for the device movement needed for calibration). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and reducing user mistakes when operating/interacting with the device), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
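Operations 17032-17036 can be sketched together: the rotation axis is perpendicular to the camera's movement direction, the angular rate is proportional to the detected rate of change, and reversing the movement direction reverses the spin. The gain factor and the 2D vector representation are illustrative assumptions:

```python
import math

def rotation_axis_and_rate(move_dir, move_speed, gain=2.0):
    """Given the camera's movement direction (x, y) in the display plane and
    its speed, return a unit rotation axis perpendicular to the movement and
    an angular rate proportional to the speed. Reversing the movement
    direction flips the axis, and therefore the spin direction."""
    mx, my = move_dir
    norm = math.hypot(mx, my)
    if norm == 0.0:
        return (0.0, 0.0), 0.0  # no movement, no rotation
    axis = (-my / norm, mx / norm)  # movement direction rotated 90 degrees
    return axis, gain * move_speed
```

For a side-to-side move along the x axis, this yields rotation about the vertical y axis, matching the behavior described above for operation 17032.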

It should be understood that the particular order in which the operations in Figures 17A-17D are described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000) are also applicable in an analogous manner to method 17000 described above with respect to Figures 17A-17D. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 17000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 18000, 19000, and 20000). For brevity, these details are not repeated here.

Figures 18A-18I are flow diagrams illustrating method 18000 of constraining rotation of a virtual object about an axis. Method 18000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves as both the display generation component and the touch-sensitive surface), one or more cameras (e.g., one or more rear-facing cameras on the side of the device opposite the display and the touch-sensitive surface), and one or more attitude sensors (e.g., accelerometers, gyroscopes, and/or magnetometers) for detecting changes in attitude (e.g., orientation (e.g., rotation, yaw, and/or tilt angle) and position relative to the surrounding physical environment) of the device that includes the one or more cameras. Some operations in method 18000 are, optionally, combined and/or the order of some operations is, optionally, changed.

The device displays (18002), by the display generation component, a representation of a first perspective of a virtual three-dimensional object in a first user interface region (e.g., a staging user interface or an augmented reality user interface). For example, virtual object 11002 is shown in staging user interface 6010, as illustrated in Figure 13B.

While displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region on the display, the device detects (18004) a first input (e.g., a swipe input on the touch-sensitive surface (e.g., by a one-finger or two-finger contact), or a pivot input (e.g., a two-finger rotation, or one finger contact pivoting around another finger contact)) that corresponds to a request to rotate the virtual three-dimensional object relative to the display (e.g., a display plane corresponding to the display generation component, such as the plane of a touch-screen display) in order to display a portion of the virtual three-dimensional object that is not visible from the first perspective of the virtual three-dimensional object. For example, the request is an input as described with reference to Figures 13B-13C, or an input as described with reference to Figures 13E-13F.

In response to detecting the first input (18006): in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a first axis (e.g., a first axis that is horizontally parallel to the plane of the display (e.g., the x-y plane), such as the x-axis), the device rotates the virtual three-dimensional object relative to the first axis by an amount that is determined based on a magnitude of the first input (e.g., a speed and/or distance of a swipe input along a vertical axis (e.g., the y-axis) of the touch-sensitive surface (e.g., an x-y plane that corresponds to the x-y plane of the display)) and that is constrained by a limit on movement that restricts rotation of the virtual three-dimensional object relative to the first axis beyond a threshold amount of rotation (e.g., rotation about the first axis is limited to a range of +/-30 degrees about the first axis, and rotation beyond that range is prohibited regardless of the magnitude of the first input). For example, as described with reference to Figures 13E-13G, rotation of virtual object 11002 is constrained by a limit. In accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a second axis that is different from the first axis (e.g., a second axis that is vertically parallel to the plane of the display (e.g., the x-y plane), such as the y-axis), the device rotates the virtual three-dimensional object relative to the second axis by an amount that is determined based on the magnitude of the first input (e.g., a speed and/or distance of a swipe input along a horizontal axis (e.g., the x-axis) of the touch-sensitive surface (e.g., an x-y plane that corresponds to the x-y plane of the display)), where, for an input with a magnitude above a respective threshold, the device rotates the virtual three-dimensional object relative to the second axis by more than the threshold amount of rotation. In some embodiments, for rotation relative to the second axis, the device imposes a constraint on the rotation that is larger than the constraint on rotation relative to the first axis (e.g., the three-dimensional object is allowed to rotate by 60 degrees rather than 30 degrees). In some embodiments, for rotation relative to the second axis, the device imposes no constraint on the rotation, such that the three-dimensional object can rotate freely about the second axis (e.g., for an input with a sufficiently high magnitude, such as a fast or long swipe input that includes movement of one or more contacts, the three-dimensional object can be rotated by more than 360 degrees relative to the second axis). For example, the amount of rotation of virtual object 11002 about the y-axis in response to the input described with reference to Figures 13B-13C is larger than the amount of rotation of virtual object 11002 about the x-axis in response to the input described with reference to Figures 13E-13G. Determining whether to rotate the object by an amount that is constrained by a threshold amount, or by more than the threshold amount, depending on whether the input is a request to rotate the object about the first axis or about the second axis, improves the ability to control different types of rotation operations. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
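The asymmetric constraint described above can be sketched as follows. This is a minimal, hypothetical illustration, not Apple's implementation: the function names, the +/-30-degree clamp for the first (x) axis, and the unconstrained second (y) axis are assumptions taken from the examples in the paragraph.

```python
# Illustrative sketch: per-axis rotation constraint. Rotation about the
# first axis (tilt about x) is clamped to +/-30 degrees; rotation about
# the second axis (spin about y) is unconstrained and may exceed 360.

X_AXIS_LIMIT_DEGREES = 30.0  # assumed threshold from the example above

def apply_rotation(rotation_x, rotation_y, delta_x, delta_y):
    """Return updated (rotation_x, rotation_y) after a rotation request.

    delta_x / delta_y are rotation amounts derived from the magnitude
    (speed and/or distance) of the swipe input.
    """
    # First axis: clamp the result to the allowed range, regardless of
    # how large the input's magnitude is.
    new_x = max(-X_AXIS_LIMIT_DEGREES,
                min(X_AXIS_LIMIT_DEGREES, rotation_x + delta_x))
    # Second axis: no constraint; the object rotates freely.
    new_y = rotation_y + delta_y
    return new_x, new_y
```

A large swipe mapped to 45 degrees of tilt would thus be clamped to 30 degrees, while the same magnitude applied as a spin would rotate the object the full amount.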

In some embodiments, in response to detecting the first input (18008): in accordance with a determination that the first input includes first movement of a contact across the touch-sensitive surface in a first direction (e.g., the y-direction, a vertical direction on the touch-sensitive surface), and that the first movement of the contact in the first direction meets first criteria for rotating the representation of the virtual object relative to the first axis, where the first criteria include a requirement that the first input include more than a first threshold amount of movement in the first direction in order for the first criteria to be met (e.g., the device does not initiate rotation of the three-dimensional object about the first axis until the device detects more than the first threshold amount of movement in the first direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the first axis (e.g., the x-axis, a horizontal axis parallel to the display, or a horizontal axis through the virtual object); and in accordance with a determination that the first input includes second movement of the contact across the touch-sensitive surface in a second direction (e.g., the x-direction, a horizontal direction on the touch-sensitive surface), and that the second movement of the contact in the second direction meets second criteria for rotating the representation of the virtual object relative to the second axis, where the second criteria include a requirement that the first input include more than a second threshold amount of movement in the second direction in order for the second criteria to be met (e.g., the device does not initiate rotation of the three-dimensional object about the second axis until the device detects more than the second threshold amount of movement in the second direction), the device determines that the first input corresponds to a request to rotate the three-dimensional object about the second axis (e.g., a vertical axis parallel to the display, or a vertical axis through the virtual object), where the first threshold is greater than the second threshold (e.g., the user needs to swipe by a greater amount in the vertical direction to trigger rotation about the horizontal axis (e.g., tilting the object forward or backward relative to the user) than in the horizontal direction to trigger rotation about the vertical axis (e.g., spinning the object)). Determining whether to rotate the object by an amount that is constrained by a threshold amount, or by more than the threshold amount, depending on whether the input is a request to rotate the object about the first axis or about the second axis, improves the ability to control different types of rotation operations in response to inputs that correspond to requests to rotate the object. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
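The asymmetric activation thresholds above can be sketched as a simple classifier. This is a hypothetical illustration: the function name and the specific threshold values (20 points vertical vs. 5 points horizontal) are assumptions chosen only to show that the first threshold is greater than the second.

```python
# Illustrative sketch: a larger movement is required in the vertical
# direction to begin tilting (rotation about the x-axis) than in the
# horizontal direction to begin spinning (rotation about the y-axis).

VERTICAL_THRESHOLD = 20.0    # assumed movement needed to start x-axis tilt
HORIZONTAL_THRESHOLD = 5.0   # assumed movement needed to start y-axis spin

def classify_rotation_request(dx, dy):
    """Map a contact's movement (dx, dy) to a rotation axis, or None."""
    if abs(dy) > VERTICAL_THRESHOLD:
        return "x-axis"   # tilt forward/backward
    if abs(dx) > HORIZONTAL_THRESHOLD:
        return "y-axis"   # spin left/right
    return None           # movement too small to initiate a rotation
```

With these values, 10 points of vertical movement starts no rotation at all, while the same 10 points of horizontal movement already starts a spin.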

In some embodiments (18010), rotation of the virtual three-dimensional object relative to the first axis occurs with a first degree of correspondence between a characteristic value of a first input parameter of the first input (e.g., swipe distance or swipe speed) and an amount of rotation applied to the virtual three-dimensional object about the first axis; rotation of the virtual three-dimensional object relative to the second axis occurs with a second degree of correspondence between the characteristic value of the first input parameter (e.g., swipe distance or swipe speed) of a second input gesture and an amount of rotation applied to the virtual three-dimensional object about the second axis; and the first degree of correspondence involves less rotation of the virtual three-dimensional object relative to the first input parameter than the second degree of correspondence (e.g., rotation about the first axis has more friction or traction than rotation about the second axis). For example, a first amount of rotation of virtual object 11002 about the y-axis occurs in response to a swipe input with a swipe distance d1 (as described with reference to Figures 13B-13C), and a second amount of rotation of virtual object 11002 about the x-axis, smaller than the first amount, occurs in response to a swipe input with the same swipe distance d1 (as described with reference to Figures 13E-13G). Rotating the virtual object with a greater or lesser degree of rotation in response to an input, depending on whether the input is a request to rotate the object about the first axis or about the second axis, improves the ability to control different types of rotation operations in response to inputs that correspond to requests to rotate the object. Providing additional control options without cluttering the user interface with additional displayed controls enhances the operability of the device and makes the user-device interface more efficient.
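The per-axis sensitivity described above can be sketched by giving each axis its own input-to-rotation gain. This is a hypothetical illustration: the function name and the gain values are assumptions; only the relationship (the x-axis responds less to the same swipe distance than the y-axis) comes from the paragraph.

```python
# Illustrative sketch: the same swipe distance d1 produces less rotation
# about the x-axis (more "friction") than about the y-axis.

X_AXIS_DEGREES_PER_POINT = 0.2  # assumed gain: tilt responds less
Y_AXIS_DEGREES_PER_POINT = 0.5  # assumed gain: spin responds more

def rotation_for_swipe(axis, swipe_distance):
    """Convert a swipe distance into a rotation amount for the given axis."""
    if axis == "x-axis":
        return swipe_distance * X_AXIS_DEGREES_PER_POINT
    return swipe_distance * Y_AXIS_DEGREES_PER_POINT
```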

In some embodiments, the device detects (18012) an end of the first input (e.g., the input includes movement of one or more contacts on the touch-sensitive surface, and detecting the end of the first input includes detecting liftoff of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input, the device continues (18014) to rotate the three-dimensional object based on the magnitude of the first input before the end of the input was detected (e.g., based on the speed of movement of the contact before the contact lifted off), including: in accordance with a determination that the three-dimensional object is rotating relative to the first axis, slowing the rotation of the object relative to the first axis by a first amount that is proportional to a magnitude of the rotation of the three-dimensional object relative to the first axis (e.g., slowing the rotation of the three-dimensional object about the first axis based on a first simulated physical parameter, such as simulated friction with a first coefficient of friction); and in accordance with a determination that the three-dimensional object is rotating relative to the second axis, slowing the rotation of the object relative to the second axis by a second amount that is proportional to a magnitude of the rotation of the three-dimensional object relative to the second axis (e.g., slowing the rotation of the three-dimensional object about the second axis based on a second simulated physical parameter, such as simulated friction with a second coefficient of friction that is smaller than the first coefficient of friction), where the second amount is different from the first amount. For example, in Figures 13C-13D, virtual object 11002 continues to rotate after liftoff of contact 13002, which caused the rotation of virtual object 11002 as described with reference to Figures 13B-13C. In some embodiments, the second amount is greater than the first amount. In some embodiments, the second amount is less than the first amount. Slowing the rotation of the virtual object by a first amount or a second amount after the end of the input is detected, depending on whether the input was a request to rotate the object about the first axis or about the second axis, provides visual feedback indicating that rotation operations are applied differently to the virtual object for rotation about the first axis and the second axis. Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide proper inputs and avoid attempting to provide inputs for manipulating the virtual object before the object has been placed in the second orientation that corresponds to the plane), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
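The post-liftoff momentum and per-axis friction above can be sketched as a simple per-frame decay. This is a hypothetical illustration: the function names, frame-step model, and friction coefficients are assumptions; the paragraph only specifies that deceleration is proportional to the rotation magnitude and that the two axes use different coefficients.

```python
# Illustrative sketch: after liftoff the object coasts, and each frame
# the angular velocity is reduced in proportion to its magnitude, with a
# larger simulated friction coefficient for the first (x) axis.

FRICTION = {"x-axis": 0.30, "y-axis": 0.10}  # assumed coefficients

def decay_velocity(axis, angular_velocity):
    """One simulation step: slow the rotation proportionally to its magnitude."""
    return angular_velocity * (1.0 - FRICTION[axis])

def coast(axis, angular_velocity, steps):
    """Angular velocity remaining after `steps` frames of simulated friction."""
    for _ in range(steps):
        angular_velocity = decay_velocity(axis, angular_velocity)
    return angular_velocity
```

Starting from the same liftoff velocity, the tilt (x-axis) rotation dies out faster than the spin (y-axis) rotation, matching the different-coefficients behavior described above.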

In some embodiments, the device detects (18016) an end of the first input (e.g., the input includes movement of one or more contacts on the touch-sensitive surface, and detecting the end of the first input includes detecting liftoff of the one or more contacts from the touch-sensitive surface). After (e.g., in response to) detecting the end of the first input (18018): in accordance with a determination that the three-dimensional object has been rotated relative to the first axis beyond a respective rotation threshold, the device reverses at least a portion of the rotation of the three-dimensional object relative to the first axis; and, in accordance with a determination that the three-dimensional object has not been rotated relative to the first axis beyond the respective rotation threshold, the device forgoes reversing the rotation of the three-dimensional object relative to the first axis (e.g., the device stops the rotation of the three-dimensional object relative to the first axis, and/or continues the rotation of the three-dimensional object relative to the first axis in the direction of movement of the input, with a magnitude of rotation determined by the magnitude of the input before the end of the input was detected). For example, after virtual object 11002 is rotated beyond the rotation threshold, as described with reference to Figures 13E-13G, the rotation of virtual object 11002 is reversed, as shown in Figures 13G-13H. In some embodiments, the amount by which the rotation of the three-dimensional object is reversed is determined based on how far the three-dimensional object was rotated beyond the respective rotation threshold (e.g., if the three-dimensional object was rotated beyond the respective rotation threshold by a larger amount of rotation, the rotation relative to the first axis is reversed by a larger amount, whereas if the three-dimensional object was rotated beyond the respective rotation threshold by a smaller amount of rotation, the rotation relative to the first axis is reversed by a smaller amount). In some embodiments, the reversal of the rotation is driven by a simulated physical parameter, such as a rubber-band effect that pulls with greater force the farther the three-dimensional object has been rotated relative to the first axis beyond the respective rotation threshold. In some embodiments, the reversal of the rotation is in a rotation direction that is determined based on the direction of the rotation relative to the first axis that exceeded the respective rotation threshold (e.g., if the three-dimensional object was rotated such that the top of the object moved backward into the display, the reversal of the rotation rotates the top of the object forward out of the display; if the three-dimensional object was rotated such that the top of the object rotated forward out of the display, the reversal of the rotation rotates the top of the object backward into the display; if the three-dimensional object was rotated such that the right side of the object moved backward into the display, the reversal of the rotation rotates the right side of the object forward out of the display; and/or if the three-dimensional object was rotated such that the left side of the object rotated forward out of the display, the reversal of the rotation rotates the left side of the object backward into the display). In some embodiments, similar rubberbanding (e.g., conditional reversal of rotation) is performed for rotation about the second axis, for example, where the rotation relative to the second axis is constrained to a respective angular range. In some embodiments, rubberbanding is not performed for rotation about the second axis, for example, where the rotation relative to the second axis is not constrained, such that the device allows the three-dimensional object to rotate by 360 degrees (e.g., because the device does not apply a rotation threshold to rotation relative to the second axis). Reversing at least a portion of the rotation of the three-dimensional object relative to the first axis after the end of the input is detected, or forgoing reversing a portion of the rotation of the three-dimensional object relative to the first axis, depending on whether the object was rotated beyond the rotation threshold, provides visual feedback indicating the rotation threshold that applies to rotation of the virtual object. Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide inputs for rotating the virtual object beyond the rotation threshold), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
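The rubberbanding behavior above can be sketched as a snap-back applied on liftoff. This is a hypothetical illustration: the function name, the 30-degree threshold, and the fraction of overshoot that is pulled back are assumptions; the paragraph specifies only that a larger overshoot is reversed by a larger amount and in the direction opposite the overshoot.

```python
# Illustrative sketch: on liftoff, if the rotation overshot the
# threshold, a portion of the overshoot (proportional to its size) is
# reversed, pulling the object back toward the allowed range.

ROTATION_THRESHOLD = 30.0   # assumed threshold beyond which rubberbanding applies
REVERSAL_FRACTION = 0.8     # assumed fraction of the overshoot pulled back

def settle_rotation(rotation):
    """Rotation angle after the rubber-band snap-back on liftoff."""
    if abs(rotation) <= ROTATION_THRESHOLD:
        return rotation  # within range: no reversal
    overshoot = abs(rotation) - ROTATION_THRESHOLD
    sign = 1.0 if rotation > 0 else -1.0
    # A larger overshoot produces a proportionally larger reversal,
    # in the direction opposite the rotation that exceeded the threshold.
    return sign * (abs(rotation) - REVERSAL_FRACTION * overshoot)
```

A 40-degree tilt (10 degrees of overshoot) settles back to 32 degrees, whereas a 20-degree tilt, being within the threshold, is left unchanged.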

In some embodiments (18020), in accordance with a determination that the first input corresponds to a request to rotate the three-dimensional object about a third axis that is different from the first axis and the second axis (e.g., a third axis perpendicular to the plane of the display (e.g., the x-y plane), such as the z-axis), the device forgoes rotating the virtual three-dimensional object relative to the third axis (e.g., rotation about the z-axis is prohibited, and requests to rotate the object about the z-axis are ignored by the device). In some embodiments, the device provides an alert (e.g., a tactile output indicating failure of the input). Forgoing rotation of the virtual object in accordance with a determination that the rotation input corresponds to a request to rotate the virtual object about the third axis provides visual feedback indicating that rotation about the third axis is restricted. Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid attempting to provide inputs for rotating the virtual object about the third axis), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the device displays (18022) a representation of a shadow cast by the virtual three-dimensional object while displaying the representation of the first perspective of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface). The device changes a shape of the representation of the shadow in accordance with the rotation of the virtual three-dimensional object relative to the first axis and/or the second axis. For example, as virtual object 11002 rotates, the shape of shadow 13006 of virtual object 11002 differs across Figures 13B-13F. In some embodiments, the shadow shifts and changes shape to indicate the current orientation of the virtual object relative to an invisible ground plane in the staging user interface that supports a predefined bottom side of the virtual object. In some embodiments, the surface of the virtual three-dimensional object appears to reflect light from a simulated light source that is located in a predefined direction in the virtual space represented in the staging user interface. Changing the shape of the shadow in accordance with the rotation of the virtual object provides visual feedback (e.g., indicating the virtual plane relative to which the virtual object is oriented (e.g., the stage in the staging view)). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine the appropriate direction for a swipe input to cause rotation about the first axis or the second axis), which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while rotating the virtual three-dimensional object in the first user interface region (18024): in accordance with a determination that the virtual three-dimensional object is displayed from a second perspective that reveals a predefined bottom of the virtual three-dimensional object, the device forgoes displaying the representation of the shadow with the representation of the second perspective of the virtual three-dimensional object. For example, the device does not display the shadow of the virtual object when the virtual object is viewed from below (e.g., as described with reference to Figures 13G-13I). Forgoing display of the shadow of the virtual object in accordance with a determination that the bottom of the virtual object is displayed provides visual feedback (e.g., indicating that the object has been rotated to a position that no longer corresponds to the virtual plane (e.g., the stage of the staging view)). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, after rotating the virtual three-dimensional object in the first user interface region (e.g., the staging view), the device detects (18026) a second input that corresponds to a request to reset the virtual three-dimensional object in the first user interface region (e.g., the second input is a double tap on the first user interface region). In response to detecting the second input, the device displays (18028), in the first user interface region (e.g., by rotating and resizing the virtual object), a representation of a predefined original perspective of the virtual three-dimensional object (e.g., the first perspective, or a default starting perspective that is different from the first perspective (e.g., when the first perspective is a display perspective resulting from user manipulation in the staging user interface)) (e.g., in response to the double tap, the device resets the orientation of the virtual object to a predefined original orientation (e.g., upright, with the front side facing the user and the bottom side resting on a predefined ground plane)). For example, Figures 13I-13J illustrate an input that changes the perspective of virtual object 11002 from a changed perspective (resulting from the rotation inputs described with reference to Figures 13B-13G) to the original perspective in Figure 13J (which is the same as the perspective of virtual object 11002 shown in Figure 13A). In some embodiments, in response to detecting the second input corresponding to the instruction to reset the virtual three-dimensional object, the device also resizes the virtual three-dimensional object to reflect a default display size of the virtual three-dimensional object. In some embodiments, a double-tap input resets both the orientation and the size of the virtual object in the staging user interface, while a double-tap input resets only the size, and not the orientation, of the virtual object in the augmented reality user interface. In some embodiments, the device requires that the double tap be directed at the virtual object in order to reset the size of the virtual object in the augmented reality user interface, while the device resets the orientation and size of the virtual object in response to double taps detected on the virtual object as well as double taps detected around the virtual object. In the augmented reality view, a single-finger swipe drags the virtual object rather than rotating it (e.g., unlike in the staging view). Displaying the predefined original perspective of the virtual object in response to detecting a request to reset the virtual object enhances the operability of the device and makes the user-device interface more efficient (e.g., by providing an option to reset the object rather than requiring the user to estimate when inputs provided for adjusting properties of the object will return the object to the predefined original perspective). Reducing the number of inputs needed to perform an operation enhances the operability of the device, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while the virtual three-dimensional object is displayed in the first user interface region (e.g., the staging user interface), the device detects (18030) a third input corresponding to a request to resize the virtual three-dimensional object (e.g., the third input is a pinch or depinch gesture directed to the virtual object represented in the first user interface region, and the third input has a magnitude that satisfies criteria (e.g., original or augmented criteria for initiating a resize operation, as described in greater detail below with reference to method 19000)). In response to detecting the third input, the device adjusts (18032) the size of the representation of the virtual three-dimensional object in the first user interface region in accordance with the magnitude of the input. For example, in response to an input that includes a depinch gesture (e.g., as described with reference to Figures 6N-6O), the size of virtual object 11002 is decreased. In some embodiments, while the size of the representation of the virtual three-dimensional object is being adjusted, the device displays an indicator to indicate the current zoom level of the virtual object. In some embodiments, the device ceases to display the zoom level indicator when the third input terminates. Adjusting the size of the virtual object in accordance with the magnitude of the input for resizing the object enhances the operability of the device (e.g., by providing an option to resize the object by a desired amount). Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
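The resize behavior described above can be sketched as a minimal model. This is an illustrative sketch only; the names `StagingView`, `pinch_changed`, and `pinch_ended` are hypothetical and not taken from any actual implementation:

```python
class StagingView:
    """Hypothetical sketch of resizing a staged virtual object by gesture magnitude."""

    def __init__(self, object_size=1.0):
        self.object_size = object_size       # current displayed scale of the object
        self.zoom_indicator_visible = False  # indicator shown only while the gesture is active

    def pinch_changed(self, gesture_scale):
        # Adjust the object's size in accordance with the magnitude of the input,
        # and surface the current zoom level while the gesture is in progress.
        self.object_size *= gesture_scale
        self.zoom_indicator_visible = True
        return self.object_size

    def pinch_ended(self):
        # Per the description, the zoom-level indicator is dismissed
        # when the third input terminates.
        self.zoom_indicator_visible = False
```

For example, a pinch that reports a cumulative scale of 0.5 halves the object's displayed size, and the indicator disappears once the contact lifts off.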

In some embodiments, while adjusting the size of the representation of the virtual three-dimensional object in the first user interface region (e.g., the staging user interface), the device detects (18034) that the size of the virtual three-dimensional object has reached a predefined default display size of the virtual three-dimensional object. In response to detecting that the size of the virtual three-dimensional object has reached the predefined default display size, the device generates (18036) a haptic output (e.g., a discrete haptic output) to indicate that the virtual three-dimensional object is displayed at the predefined default display size. Figure 11O provides an example of haptic output 11024, which is provided in response to detecting that the size of virtual object 11002 has reached a previously defined size of virtual object 11002 (e.g., as described with reference to Figures 11M-11O). In some embodiments, the device generates the same haptic output when the size of the virtual object is reset to the default display size in response to a double-tap input. Generating a haptic output in accordance with a determination that the size of the virtual object has reached the predefined default display size provides feedback to the user (e.g., an indication that no further input is needed to return the simulated size of the virtual object to the predefined size). Providing improved haptic feedback enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that the predefined simulated physical size of the virtual object has been reached, without cluttering the user interface with displayed information), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
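The moment at which the discrete haptic output fires can be sketched as detecting when a sequence of resized values reaches or crosses the predefined default size. This is a hypothetical model (the function name and tolerance are assumptions, not from any actual implementation):

```python
def haptic_events(sizes, default_size, tol=1e-6):
    """Hypothetical sketch: return True at each resize step where the object
    reaches (or crosses) the predefined default display size, i.e., the
    moments at which a discrete haptic output would be generated."""
    events = []
    previous = None
    for size in sizes:
        crossed = (
            abs(size - default_size) <= tol or
            (previous is not None and
             (previous - default_size) * (size - default_size) < 0)
        )
        events.append(crossed)
        previous = size
    return events
```

In this sketch a pinch that passes through the default size (e.g., 0.9 then 1.1 with a default of 1.0) still produces one haptic event, matching the idea that the feedback marks the default size rather than any particular gesture position.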

In some embodiments, a visual indication of the zoom level (e.g., a slider indicating a value corresponding to the current zoom level) is displayed in the first user interface region (e.g., the staging user interface). When the size of the representation of the virtual three-dimensional object is adjusted, the visual indication of the zoom level is adjusted in accordance with the adjusted size of the representation of the virtual three-dimensional object.

In some embodiments, while the representation of the third perspective of the virtual three-dimensional object is displayed in the first user interface region (e.g., the staging user interface), the device detects (18042) a fourth input corresponding to a request to display the virtual three-dimensional object in a second user interface region (e.g., an augmented reality user interface) that includes a field of view of one or more cameras (e.g., cameras embedded in the device). In response to detecting the fourth input, the device displays (18044), via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the second user interface region (e.g., the field of view of the one or more cameras is displayed in response to the request to display the virtual object in the second user interface region), where the field of view of the one or more cameras is a view of the physical environment in which the one or more cameras are located. Displaying the representation of the virtual object includes rotating the virtual three-dimensional object about a first axis (e.g., an axis that is horizontally parallel to a plane of the display (e.g., the x-y plane), such as the x-axis) to a predefined angle (e.g., to a default yaw angle, such as 0 degrees; or to an angle that is aligned with (e.g., parallel to) a plane detected in the physical environment captured in the field of view of the one or more cameras). In some embodiments, the device displays an animation of the three-dimensional object gradually rotating to the predefined angle relative to the first axis. A current angle of the virtual three-dimensional object relative to a second axis (e.g., an axis that is vertically parallel to a plane of the display (e.g., the x-y plane), such as the y-axis) is maintained. Rotating the virtual object about the first axis to a predefined angle in response to a request to display the virtual object in the field of view of the one or more cameras (e.g., repositioning the virtual object to a predefined orientation relative to the plane without further input) enhances the operability of the device. Reducing the number of inputs required to perform an operation improves the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
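The transition into the camera view can be sketched as an animation that gradually rotates the object about the first axis to the predefined angle while leaving its angle about the second axis untouched. A minimal, hypothetical sketch (the function name and linear interpolation are assumptions):

```python
def ar_entry_frames(current_x, target_x, angle_y, steps=5):
    """Hypothetical sketch of entering the camera view: the rotation about the
    first (x) axis is animated gradually toward a predefined target angle
    (e.g., one aligned with a plane detected in the cameras' field of view),
    while the current rotation about the second (y) axis is maintained."""
    frames = []
    for i in range(1, steps + 1):
        t = i / steps
        x = current_x + (target_x - current_x) * t  # gradual rotation about x
        frames.append((x, angle_y))                 # y angle is left unchanged
    return frames
```

For example, an object tilted 30 degrees about the x-axis in the staging view animates down to 0 degrees over the transition, while a user-applied 45-degree rotation about the y-axis is preserved in every frame.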

In some embodiments, while the representation of the fourth perspective of the virtual three-dimensional object is displayed in the first user interface region (e.g., the staging user interface), the device detects (18046) a fifth input corresponding to a request to return to a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. In response to detecting the fifth input, the device (18048): rotates the virtual three-dimensional object (e.g., before displaying the two-dimensional user interface and the two-dimensional representation of the virtual three-dimensional object) to display a perspective of the virtual three-dimensional object that corresponds to the two-dimensional representation of the virtual three-dimensional object; and, after rotating the virtual three-dimensional object to display the respective perspective that corresponds to the two-dimensional representation, displays the two-dimensional representation of the virtual three-dimensional object. In some embodiments, the device displays an animation of the three-dimensional object gradually rotating to display the perspective of the virtual three-dimensional object that corresponds to its two-dimensional representation. In some embodiments, the device also resizes the virtual three-dimensional object, during or after the rotation, to match the size of the two-dimensional representation of the virtual three-dimensional object displayed in the two-dimensional user interface. In some embodiments, an animated transition is displayed that shows the rotated virtual three-dimensional object moving toward the position of the two-dimensional representation (e.g., a thumbnail of the virtual object) in the two-dimensional user interface and settling into that position. Rotating the virtual three-dimensional object to the perspective that corresponds to its two-dimensional representation, in response to the input for returning to display of the two-dimensional representation, provides visual feedback (e.g., indicating that the displayed object is two-dimensional). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user provide appropriate inputs and avoid attempting to provide an input for rotating the two-dimensional object about an axis for which rotation of the two-dimensional object is not available), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, before displaying the representation of the first perspective of the virtual three-dimensional object, the device displays (18050) a user interface that includes a representation of the virtual three-dimensional object (e.g., a thumbnail or icon), where the representation includes a representation of a view of the virtual three-dimensional object from a respective perspective (e.g., a static representation, such as a two-dimensional image that corresponds to the virtual three-dimensional object). While displaying the representation of the virtual three-dimensional object, the device detects (18052) a request to display the virtual three-dimensional object (e.g., a tap input or other selection input directed to the representation of the virtual three-dimensional object). In response to detecting the request to display the virtual three-dimensional object, the device replaces (18054) display of the representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the respective perspective of the representation. Figures 11A-11E provide an example of user interface 5060 displaying a representation of virtual object 11002. In response to the request to display virtual object 11002, as described with reference to Figure 11A, display of virtual object 11002 in staging user interface 6010 replaces the display of user interface 5060, as shown in Figure 11E. The perspective of virtual object 11002 in Figure 11E is the same as the perspective of the representation of virtual object 11002 in Figure 11A. In some embodiments, the representation of the virtual three-dimensional object is enlarged (e.g., to a size that matches the size of the virtual three-dimensional object) before being replaced by the virtual three-dimensional object. In some embodiments, the virtual three-dimensional object is initially displayed at the size of the representation of the virtual three-dimensional object and is subsequently enlarged. In some embodiments, during the transition from the representation of the virtual three-dimensional object to the virtual three-dimensional object, the device gradually enlarges the representation, cross-fades the representation with the virtual three-dimensional object, and then gradually enlarges the virtual three-dimensional object, so as to form a smooth transition between the representation and the virtual three-dimensional object. In some embodiments, an initial position of the virtual three-dimensional object is selected to correspond to the position of the representation of the virtual three-dimensional object. In some embodiments, the representation of the virtual three-dimensional object is shifted to a position selected to correspond to the position at which the virtual three-dimensional object will be displayed. Replacing display of the (two-dimensional) representation of the virtual three-dimensional object with the virtual three-dimensional object rotated to match the perspective of the (two-dimensional) representation provides visual feedback (e.g., indicating that the three-dimensional object is the same object as its two-dimensional representation). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, before displaying the first user interface, the device displays (18056) a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. While displaying the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object, the device detects (18058), at a location on the touch-sensitive surface that corresponds to the two-dimensional representation of the virtual three-dimensional object, a first portion of a touch input (e.g., an increase in contact intensity) that satisfies preview criteria (e.g., the preview criteria require that the intensity of a press input exceed a first intensity threshold (e.g., a light press intensity threshold) and/or that the duration of the press input exceed a first duration threshold). In response to detecting the first portion of the touch input that satisfies the preview criteria, the device displays (18060) a preview of the virtual three-dimensional object that is larger than the two-dimensional representation of the virtual three-dimensional object (e.g., the preview is animated to show different perspectives of the virtual three-dimensional object). In some embodiments, the device displays an animation of the three-dimensional object gradually enlarging (e.g., based on the duration or pressure of the input, or based on a predetermined rate of the animation). Displaying a preview of the virtual three-dimensional object (e.g., without replacing display of the currently displayed user interface with a different user interface) enhances the operability of the device (e.g., by enabling the user to display the virtual three-dimensional object and return to viewing its two-dimensional representation without having to provide inputs for navigating between user interfaces). Reducing the number of inputs required to perform an operation improves the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, while displaying the preview of the virtual three-dimensional object, the device detects (18062) a second portion of the touch input (e.g., by the same continuously maintained contact). In response to detecting the second portion of the touch input (18064): in accordance with a determination that the second portion of the touch input satisfies menu display criteria (e.g., the menu display criteria require that the contact move in a predefined direction (e.g., upward) by more than a threshold amount), the device displays a plurality of selectable options (e.g., a sharing menu) that correspond to a plurality of operations associated with the virtual object (e.g., sharing options, such as various means of sharing the virtual object with another device or user); and, in accordance with a determination that the second portion of the touch input satisfies staging criteria (e.g., the staging criteria require that the intensity of the contact exceed a second intensity threshold (e.g., a deep press intensity threshold) that is greater than the first intensity threshold), the device replaces display of the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object with the first user interface that includes the virtual three-dimensional object. Displaying a menu associated with the virtual object, or replacing display of the two-dimensional user interface that includes the two-dimensional representation of the virtual three-dimensional object with the first user interface that includes the virtual three-dimensional object, depending on whether the staging criteria are met, enables multiple different types of operations to be performed in response to an input. Enabling multiple different types of operations to be performed with a first type of input increases the efficiency with which the user can perform these operations, thereby enhancing the operability of the device, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
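The routing of the second portion of the touch input can be sketched as a small classifier. The threshold values and names below are hypothetical placeholders, not values from any actual implementation:

```python
DEEP_PRESS = 0.6     # hypothetical second intensity threshold (greater than the first)
MOVE_THRESHOLD = 20  # hypothetical upward-movement threshold, in points

def classify_second_portion(upward_movement, intensity):
    """Hypothetical sketch of routing the second portion of the touch input:
    sufficient upward movement shows the options menu, while contact intensity
    above the deep-press threshold replaces the 2D user interface with the
    staging user interface; otherwise the preview simply continues."""
    if upward_movement > MOVE_THRESHOLD:
        return "show-options-menu"
    if intensity > DEEP_PRESS:
        return "replace-with-staging-ui"
    return "continue-preview"
```

The point of the sketch is that one continuously maintained contact can fan out into several operations depending on which criteria its later movement or intensity satisfies.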

In some embodiments, the first user interface includes (18066) a plurality of controls (e.g., buttons for switching to a world view, for returning, etc.). Before displaying the first user interface, the device displays (18068) a two-dimensional user interface that includes a two-dimensional representation of the virtual three-dimensional object. In response to detecting a request to display the virtual three-dimensional object in the first user interface, the device (18070) displays the virtual three-dimensional object in the first user interface without displaying a set of one or more controls associated with the virtual three-dimensional object; and, after displaying the virtual three-dimensional object in the first user interface, the device displays the set of one or more controls. For example, as described with reference to Figures 11A-11E, user interface 5060, which includes a two-dimensional representation of virtual object 11002, is displayed before staging user interface 6010. In response to a request to display virtual object 11002 in staging user interface 6010 (as described with reference to Figure 11A), virtual object 11002 is displayed (as shown in Figures 11B-11C) without controls 6016, 6018, and 6020 of staging user interface 6010. In Figures 11D-11E, controls 6016, 6018, and 6020 of staging user interface 6010 fade into view in the user interface. In some embodiments, the set of one or more controls includes a control for displaying the virtual three-dimensional object in an augmented reality environment in which the virtual three-dimensional object is placed at a fixed position relative to a plane detected in the field of view of one or more cameras of the device. In some embodiments, in response to detecting the request to display the virtual three-dimensional object in the first user interface: in accordance with a determination that the virtual three-dimensional object is not ready to be displayed in the first user interface (e.g., the three-dimensional model of the virtual object is not fully loaded by the time the first user interface is ready for display) (e.g., the load time of the virtual object exceeds a threshold amount of time (e.g., is noticeable and perceptible to the user)), the device displays a portion of the first user interface (e.g., a background window of the first user interface) without displaying the plurality of controls on the first user interface; in accordance with a determination that the virtual three-dimensional object is ready to be displayed in the first user interface (e.g., after displaying the portion of the first user interface without the controls), the device displays (e.g., fades in) the virtual three-dimensional object in the first user interface; and, after displaying the virtual three-dimensional object in the first user interface, the device displays (e.g., fades in) the controls. In response to detecting the request to display the virtual three-dimensional object in the first user interface, and in accordance with a determination that the virtual three-dimensional object is ready to be displayed (e.g., the three-dimensional model of the virtual object has been loaded by the time the first user interface is ready for display (e.g., the load time of the virtual object is less than a threshold amount of time (e.g., is negligible and imperceptible to the user))): the device displays the first user interface with the plurality of controls on the first user interface; and the device displays (e.g., without fading in) the virtual three-dimensional object in the first user interface with the plurality of controls. In some embodiments, when the staging user interface is exited to return to the two-dimensional user interface (e.g., in response to a "back" request), the controls first fade out before the virtual three-dimensional object is transformed into the two-dimensional representation of the virtual three-dimensional object. Displaying the controls after the virtual three-dimensional object is displayed in the user interface provides visual feedback (e.g., indicating that controls for manipulating the virtual object are not available during the amount of time required to load the virtual object). Providing the user with improved visual feedback enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user avoid providing inputs for manipulating the object while manipulation operations are unavailable during the load time of the virtual object), which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
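The two load-dependent display orderings described above can be sketched compactly. This is a hypothetical model; the function name, stage labels, and threshold value are illustrative assumptions:

```python
def first_ui_display_sequence(load_time, perceptible_delay=0.2):
    """Hypothetical sketch of the ordering described above: if the 3D model
    loads slowly enough to be perceptible, the interface background appears
    first, then the object fades in, and the controls are shown only
    afterwards; if the model is already loaded, the interface appears at
    once with both the object and the controls."""
    if load_time > perceptible_delay:
        return ["background", "fade-in-object", "fade-in-controls"]
    return ["interface-with-controls-and-object"]
```

The design point the sketch captures is that the controls are never shown before the object they manipulate is available.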

It should be understood that the particular order in which the operations in Figures 18A-18I have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 19000, and 20000) are likewise applicable in an analogous manner to method 18000 described above with respect to Figures 18A-18I. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, haptic outputs, movements, and/or animations described above with reference to method 18000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, haptic outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 17000, 18000, 19000, and 20000). For brevity, these details are not repeated here.

Figures 19A-19H are flow diagrams illustrating a method 19000 of increasing a second threshold movement magnitude that is required for a second object manipulation behavior in accordance with a determination that a first object manipulation behavior has satisfied a first threshold movement magnitude. Method 19000 is performed at an electronic device (e.g., device 300 of Figure 3, or portable multifunction device 100 of Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.) and a touch-sensitive surface (e.g., a touch-sensitive surface, or a touch-screen display that serves both as the display generation component and the touch-sensitive surface). Some operations in method 19000 are, optionally, combined and/or the order of some operations is, optionally, changed.

The device displays (19002), via the display generation component, a first user interface region that includes a user interface object associated with a plurality of object manipulation behaviors (e.g., a user interface region that includes a representation of a virtual object), where the plurality of object manipulation behaviors include a first object manipulation behavior (e.g., rotation of the user interface object about a respective axis) that is performed in response to input satisfying first gesture recognition criteria (e.g., rotation criteria) and a second object manipulation behavior (e.g., one of translation of the user interface object and scaling of the user interface object) that is performed in response to input satisfying second gesture recognition criteria (e.g., one of translation criteria and scaling criteria). For example, displayed virtual object 11002 is associated with manipulation behaviors that include rotation about respective axes (e.g., as described with reference to Figures 14B-14E), translation (e.g., as described with reference to Figures 14K-14M), and scaling (e.g., as described with reference to Figures 14G-14I).

While displaying the first user interface region, the device detects (19004) a first portion of an input directed to the user interface object (e.g., the device detects one or more contacts at a location on the touch-sensitive surface that corresponds to the displayed location of the user interface object), including detecting movement of the one or more contacts across the touch-sensitive surface, and, while the one or more contacts are detected on the touch-sensitive surface, the device evaluates the movement of the one or more contacts against both the first gesture recognition criteria and the second gesture recognition criteria.

In response to detecting the first portion of the input, the device updates the appearance of the user interface object based on the first portion of the input, including (19006): in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria before satisfying the second gesture recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior (e.g., rotating the user interface object) based on the first portion of the input (e.g., based on the direction and/or magnitude of the first portion of the input); and updating the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria (e.g., increasing the threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture recognition criteria), for example without changing the appearance of the user interface object in accordance with the second object manipulation behavior. For example, in Figure 14E, virtual object 1102 has been rotated in accordance with a determination that the rotation criteria were met (before the scaling criteria were met), and the threshold ST for the scaling criteria is increased to ST′. In some embodiments, before the criteria for recognizing a gesture for rotating the object have been met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (provided those criteria have not previously been met). Once the criteria for recognizing a gesture for rotating the object have been met, it becomes harder to initiate a translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated to have increased movement parameter thresholds), and object manipulation is biased toward the manipulation behavior corresponding to the gesture that has already been recognized and used to manipulate the object.

In accordance with a determination that the input satisfies the second gesture recognition criteria before satisfying the first gesture recognition criteria: the device changes the appearance of the user interface object in accordance with the second object manipulation behavior (e.g., translating or resizing the user interface object) based on the first portion of the input (e.g., based on the direction and/or magnitude of the first portion of the input); and updates the first gesture recognition criteria by increasing a threshold of the first gesture recognition criteria (e.g., increasing the threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the first gesture recognition criteria), for example without changing the appearance of the user interface object in accordance with the first object manipulation behavior. For example, in Figure 14I, the size of virtual object 1102 has been increased in accordance with a determination that the scaling criteria were met (before the rotation criteria were met), and the threshold RT for the rotation criteria is increased to RT′. In some embodiments, before the criteria for recognizing a gesture for translating or scaling the object have been met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a rotation gesture (provided the criteria for recognizing a gesture for rotating the object have not previously been met). Once the criteria for recognizing a gesture for translating or scaling the object have been met, it becomes harder to initiate a rotation operation on the object (e.g., the criteria for rotating the object are updated to have increased movement parameter thresholds), and object manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that has already been recognized and used to manipulate the object.

In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., displaying different sizes, positions, viewing angles, reflections, shadows, etc.) in accordance with the values of the respective movement parameters of the input. In some embodiments, the device follows preset correspondences (e.g., a respective correspondence for each type of manipulation behavior) between the movement parameters (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior). Increasing the first threshold of input movement required for the first object manipulation when the input movement increases above the second threshold for the second object manipulation enhances the operability of the device (e.g., by helping the user avoid accidentally performing the second object manipulation while attempting to provide input for performing the first object manipulation). Improving the user's ability to control different types of object manipulation enhances the operability of the device and makes the user-device interface more efficient.
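The threshold-raising behavior just described can be sketched as a small two-gesture state machine. This is an illustrative sketch only, not the implementation described in the embodiments; the initial thresholds RT and ST, their units, and the doubling used to produce RT′ and ST′ are assumed values.

```python
# Illustrative sketch of (19006); thresholds, units, and the raise factor
# are assumptions, not values from the embodiments.

ROTATE_THRESHOLD = 10.0  # hypothetical initial RT (degrees of twist)
SCALE_THRESHOLD = 50.0   # hypothetical initial ST (points of pinch movement)
RAISE_FACTOR = 2.0       # hypothetical factor: ST -> ST', RT -> RT'

class TwoGestureRecognizer:
    """Recognizing one gesture first raises the other gesture's threshold."""

    def __init__(self):
        self.rotate_threshold = ROTATE_THRESHOLD
        self.scale_threshold = SCALE_THRESHOLD
        self.rotation_recognized = False
        self.scale_recognized = False

    def feed(self, rotation_delta, pinch_delta):
        """Process one portion of the input; return the behaviors it drives."""
        if not self.rotation_recognized and abs(rotation_delta) >= self.rotate_threshold:
            self.rotation_recognized = True
            if not self.scale_recognized:
                # Rotation won first: scaling now needs a larger pinch (ST -> ST').
                self.scale_threshold *= RAISE_FACTOR
        if not self.scale_recognized and abs(pinch_delta) >= self.scale_threshold:
            self.scale_recognized = True
            if not self.rotation_recognized:
                # Scaling won first: rotation now needs a larger twist (RT -> RT').
                self.rotate_threshold *= RAISE_FACTOR
        behaviors = []
        if self.rotation_recognized:
            behaviors.append("rotate")
        if self.scale_recognized:
            behaviors.append("scale")
        return behaviors
```

With these assumed numbers, a 12-degree twist recognizes rotation and raises the scale threshold from 50 to 100, so a subsequent 60-point pinch, which would originally have triggered scaling, drives rotation only.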

In some embodiments, after updating the appearance of the user interface object based on the first portion of the input, the device detects (19008) a second portion of the input (e.g., via the same continuously maintained contact from the first portion of the input, or via a different contact detected after termination (e.g., liftoff) of the contact in the first portion of the input). In some embodiments, the second portion of the input is detected based on continuously detected input directed to the user interface object. In response to detecting the second portion of the input, the device updates (19010) the appearance of the user interface object based on the second portion of the input, including: in accordance with a determination that the first portion of the input satisfied the first gesture recognition criteria and the second portion of the input does not satisfy the updated second gesture recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input (e.g., based on the direction and/or magnitude of the second portion of the input) without changing the appearance of the user interface object in accordance with the second object manipulation behavior (e.g., regardless of whether the second portion of the input satisfies the first gesture recognition criteria or the original second gesture recognition criteria, and even if the second portion of the input would have satisfied the original second gesture recognition criteria before the update); and in accordance with a determination that the first portion of the input satisfied the second gesture recognition criteria and the second portion of the input does not satisfy the updated first gesture recognition criteria: changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input (e.g., based on the direction and/or magnitude of the second portion of the input) without changing the appearance of the user interface object in accordance with the first object manipulation behavior (e.g., regardless of whether the second portion of the input satisfies the second gesture recognition criteria or the original first gesture recognition criteria, and even if the second portion of the input would have satisfied the original first gesture recognition criteria before the update).
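As a hedged illustration of (19010): after the first portion of the input has recognized rotation, a second portion whose pinch movement crosses only the original scale threshold, and not the raised one, still drives rotation alone. The threshold values below are assumptions.

```python
# Illustrative only: assumed original and raised scale thresholds.
ORIGINAL_SCALE_THRESHOLD = 50.0   # ST before the update
RAISED_SCALE_THRESHOLD = 100.0    # ST' after rotation was recognized first

def second_portion_ops(angle_delta, pinch_delta, rotation_recognized=True):
    """Return the manipulations driven by the second portion of the input."""
    ops = []
    if rotation_recognized:
        # The already-recognized behavior keeps applying to later portions.
        ops.append(("rotate", angle_delta))
    if abs(pinch_delta) >= RAISED_SCALE_THRESHOLD:
        # Scaling now requires the updated (raised) threshold.
        ops.append(("scale", pinch_delta))
    return ops
```

A 60-point pinch meets the assumed original threshold of 50 but not the raised threshold of 100, so only rotation is applied.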

In some embodiments (19012), when the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input after the first portion of the input satisfied the first gesture recognition criteria, the second portion of the input includes input that satisfies the second gesture recognition criteria as they stood before the second gesture recognition criteria were updated (e.g., with the original threshold for the movement parameter of the input in the second gesture recognition criteria, before the threshold was increased), but does not include input that satisfies the updated second gesture recognition criteria.

In some embodiments (19014), when the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input after the first portion of the input satisfied the second gesture recognition criteria, the second portion of the input includes input that satisfies the first gesture recognition criteria as they stood before the first gesture recognition criteria were updated (e.g., with the original threshold for the movement parameter of the input in the first gesture recognition criteria, before the threshold was increased), but does not include input that satisfies the updated first gesture recognition criteria.

In some embodiments (19016), when the appearance of the user interface object is changed in accordance with the first object manipulation behavior based on the second portion of the input after the first portion of the input satisfied the first gesture recognition criteria, the second portion of the input does not include input that satisfies the first gesture recognition criteria (e.g., with the original threshold for the movement parameter of the input in the first gesture recognition criteria). For example, after the first gesture recognition criteria have been satisfied once, the input no longer needs to continue to satisfy the first gesture recognition criteria in order to cause the first object manipulation behavior.

In some embodiments (19018), when the appearance of the user interface object is changed in accordance with the second object manipulation behavior based on the second portion of the input after the first portion of the input satisfied the second gesture recognition criteria, the second portion of the input does not include input that satisfies the second gesture recognition criteria (e.g., with the original threshold for the movement parameter of the input in the second gesture recognition criteria). For example, after the second gesture recognition criteria have been satisfied once, the input no longer needs to continue to satisfy the second gesture recognition criteria in order to cause the second object manipulation behavior. Performing the first object manipulation behavior when the second portion of the input includes movement that increases above the increased threshold enhances the operability of the device (e.g., by providing the user with the ability to intentionally perform the second object manipulation after the increased criteria have been met, without requiring the user to provide new input). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
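The "satisfied once" behavior of (19016) and (19018) can be sketched as a latch: after a gesture criterion has been met a single time, later portions of the input drive that behavior even when their movement stays below the threshold. The threshold value is illustrative.

```python
class LatchedGesture:
    """Once recognized, a gesture keeps applying without re-crossing its threshold."""

    def __init__(self, threshold):
        self.threshold = threshold   # assumed movement-parameter threshold
        self.recognized = False
        self.value = 0.0             # accumulated manipulation (e.g., rotation angle)

    def feed(self, delta):
        if not self.recognized and abs(delta) >= self.threshold:
            self.recognized = True   # criterion met once; latch the behavior
        if self.recognized:
            self.value += delta      # sub-threshold deltas still apply when latched
        return self.recognized
```

A 12-unit movement crosses an assumed threshold of 10 and latches the gesture; a later 1-unit movement, well below the threshold, still accumulates.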

In some embodiments, updating the appearance of the user interface object based on the second portion of the input includes (19020): in accordance with a determination that the first portion of the input satisfied the second gesture recognition criteria and the second portion of the input satisfies the updated first gesture recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input, and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input; and in accordance with a determination that the first portion of the input satisfied the first gesture recognition criteria and the second portion of the input satisfies the updated second gesture recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the second portion of the input, and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the second portion of the input. For example, after the input first satisfies the first gesture recognition criteria and then satisfies the updated second gesture recognition criteria, the input can now cause both the first object manipulation behavior and the second object manipulation behavior. Similarly, after the input first satisfies the second gesture recognition criteria and then satisfies the updated first gesture recognition criteria, the input can now cause both the first object manipulation behavior and the second object manipulation behavior. Updating the object in accordance with both the first object manipulation behavior and the second object manipulation behavior, in response to a portion of the input detected after the second gesture recognition criteria and the updated first gesture recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation once the increased thresholds have been met, without requiring the user to provide new input). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
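A sketch of the free-manipulation state reached in (19020): once both gestures have been recognized (the second via its raised threshold), later portions of the input drive both behaviors with no further threshold checks. The gesture names and the delta representation are assumptions made for illustration.

```python
def portion_ops(recognized, deltas):
    """recognized: set of gesture names already recognized.
    deltas: mapping from gesture name to this portion's movement."""
    if {"rotate", "scale"} <= recognized:
        # Both behaviors recognized: apply every nonzero delta, however small.
        return sorted((g, d) for g, d in deltas.items() if d)
    # Otherwise only the already-recognized behaviors apply to this portion.
    return sorted((g, d) for g, d in deltas.items() if g in recognized and d)
```

With only rotation recognized, a sub-threshold pinch is ignored; with both recognized, even tiny rotation and pinch deltas are applied together.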

In some embodiments, after updating the appearance of the user interface object based on the second portion of the input (e.g., after the first gesture recognition criteria and the updated second gesture recognition criteria have been met, or after the second gesture recognition criteria and the updated first gesture recognition criteria have been met), the device detects (19022) a third portion of the input (e.g., via the same continuously maintained contact from the first and second portions of the input, or via a different contact detected after termination (e.g., liftoff) of the contacts in the first and second portions of the input). In response to detecting the third portion of the input, the device updates (19024) the appearance of the user interface object based on the third portion of the input, including: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the third portion of the input; and changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the third portion of the input. For example, after the first gesture recognition criteria and the updated second gesture recognition criteria have been met, or after both the second gesture recognition criteria and the updated first gesture recognition criteria have been met, the input can subsequently cause both the first object manipulation behavior and the second object manipulation behavior, regardless of the thresholds in the original or updated first and second gesture recognition criteria. Updating the object in accordance with both the first object manipulation behavior and the second object manipulation behavior, in response to a portion of the input detected after the second gesture recognition criteria and the updated first gesture recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation after the intent to perform the first object manipulation type has been demonstrated by meeting the increased threshold, without requiring the user to provide new input). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments (19026), the third portion of the input does not include input that satisfies the first gesture recognition criteria or input that satisfies the second gesture recognition criteria. For example, after the first gesture recognition criteria and the updated second gesture recognition criteria have been met, or after both the second gesture recognition criteria and the updated first gesture recognition criteria have been met, the input can subsequently cause both the first object manipulation behavior and the second object manipulation behavior, regardless of the thresholds in the original or updated first and second gesture recognition criteria. Updating the object in accordance with both the first object manipulation behavior and the second object manipulation behavior, in response to a portion of the input detected after the second gesture recognition criteria and the updated first gesture recognition criteria have been met, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using both the first object manipulation and the second object manipulation once the increased criteria have been met, without requiring the user to provide new input). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, the multiple object manipulation behaviors include (19028) a third object manipulation behavior (e.g., rotating the user interface object about a respective axis) that is performed in response to input satisfying third gesture recognition criteria (e.g., scaling criteria). Updating the appearance of the user interface object based on the first portion of the input includes (19030): in accordance with a determination that the first portion of the input satisfies the first gesture recognition criteria before satisfying the second gesture recognition criteria or the third gesture recognition criteria: changing the appearance of the user interface object in accordance with the first object manipulation behavior (e.g., rotating the user interface object) based on the first portion of the input (e.g., based on the direction and/or magnitude of the first portion of the input); updating the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria (e.g., increasing the threshold required for a movement parameter (e.g., movement distance, speed, etc.) in the second gesture recognition criteria), for example without changing the appearance of the user interface object in accordance with the second object manipulation behavior; and updating the third gesture recognition criteria by increasing a threshold of the third gesture recognition criteria (e.g., increasing the threshold required for a movement parameter in the third gesture recognition criteria). For example, before the criteria for recognizing a gesture for rotating the object have been met, it is relatively easy to initiate a translation or scaling operation on the object by meeting the criteria for recognizing a gesture for translation or scaling (provided those criteria have not previously been met); once the criteria for recognizing a gesture for rotating the object have been met, it becomes harder to initiate a translation or scaling operation on the object (e.g., the criteria for translation and scaling are updated to have increased movement parameter thresholds), and object manipulation is biased toward the manipulation behavior corresponding to the gesture that has already been recognized and used to manipulate the object.

In accordance with a determination that the input satisfies the second gesture recognition criteria before satisfying the first gesture recognition criteria or the third gesture recognition criteria: the device changes the appearance of the user interface object in accordance with the second object manipulation behavior (e.g., translating or resizing the user interface object) based on the first portion of the input (e.g., based on the direction and/or magnitude of the first portion of the input); updates the first gesture recognition criteria by increasing a threshold of the first gesture recognition criteria (e.g., increasing the threshold required for a movement parameter in the first gesture recognition criteria), for example without changing the appearance of the user interface object in accordance with the first object manipulation behavior; and updates the third gesture recognition criteria by increasing a threshold of the third gesture recognition criteria. For example, before the criteria for recognizing a gesture for translating or scaling the object have been met, it is relatively easy to initiate a rotation operation on the object by meeting the criteria for recognizing a rotation gesture (provided the criteria for recognizing a gesture for rotating the object have not previously been met); once the criteria for recognizing a gesture for translating or scaling the object have been met, it becomes harder to initiate a rotation operation on the object (e.g., the criteria for rotating the object are updated to have increased movement parameter thresholds), and object manipulation behavior is biased toward the manipulation behavior corresponding to the gesture that has already been recognized and used to manipulate the object. In some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., displaying different sizes, positions, viewing angles, reflections, shadows, etc.) in accordance with the values of the respective movement parameters of the input. In some embodiments, the device follows preset correspondences between the movement parameters (e.g., a respective movement parameter for each type of manipulation behavior) and the changes made to the appearance of the user interface object (e.g., a respective aspect of the appearance for each type of manipulation behavior).

In accordance with a determination that the input satisfies the third gesture recognition criteria before satisfying the first gesture recognition criteria or the second gesture recognition criteria: the device changes the appearance of the user interface object in accordance with the third object manipulation behavior (e.g., resizing the user interface object) based on the first portion of the input (e.g., based on the direction and/or magnitude of the first portion of the input); updates the first gesture recognition criteria by increasing a threshold of the first gesture recognition criteria (e.g., increasing the threshold required for a movement parameter in the first gesture recognition criteria), for example without changing the appearance of the user interface object in accordance with the first object manipulation behavior or the second object manipulation behavior; and updates the second gesture recognition criteria by increasing a threshold of the second gesture recognition criteria (e.g., increasing the threshold required for a movement parameter in the second gesture recognition criteria). As in the cases above, once the criteria for recognizing one gesture have been met, it becomes harder to initiate the other operations (their criteria are updated to have increased movement parameter thresholds), and object manipulation is biased toward the manipulation behavior corresponding to the gesture that has already been recognized and used to manipulate the object. Updating the object in accordance with the third object manipulation behavior only in response to a portion of the input that satisfies the corresponding third gesture recognition criteria enhances the operability of the device (e.g., by helping the user avoid accidentally performing the third object manipulation while attempting to provide input for performing the first object manipulation or the second object manipulation). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
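The three-behavior variant of (19028)/(19030) generalizes the two-gesture case: whichever criterion is satisfied first raises the thresholds of both remaining criteria. The sketch below is illustrative only; the threshold values and the single raise factor are assumptions.

```python
class MultiGestureRecognizer:
    """Recognizing any one gesture raises the thresholds of all unrecognized ones."""

    def __init__(self, thresholds, raise_factor=2.0):
        self.thresholds = dict(thresholds)  # e.g., rotate / translate / scale
        self.raise_factor = raise_factor    # assumed, for illustration
        self.recognized = set()

    def feed(self, deltas):
        """Process one portion of the input; return the recognized gesture set."""
        for name, delta in deltas.items():
            if name in self.recognized:
                continue
            if abs(delta) >= self.thresholds[name]:
                self.recognized.add(name)
                # Bias toward the recognized gesture: raise the others' thresholds.
                for other in self.thresholds:
                    if other not in self.recognized:
                        self.thresholds[other] *= self.raise_factor
        return set(self.recognized)
```

Recognizing rotation first doubles both the translation and scaling thresholds; recognizing translation later (against its raised threshold) doubles the scaling threshold again.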

In some embodiments, the plurality of object manipulation behaviors includes (19032) a third object manipulation behavior that is performed in response to input that satisfies third gesture recognition criteria. The first portion of the input, before the first gesture recognition criteria or the second gesture recognition criteria were satisfied, did not satisfy the third gesture recognition criteria; after the first portion of the input satisfies the first gesture recognition criteria or the second gesture recognition criteria, the device updates the third gesture recognition criteria by increasing a threshold of the third gesture recognition criteria; and the second portion of the input, before the updated first gesture recognition criteria or the updated second gesture recognition criteria were satisfied, did not satisfy the updated third gesture recognition criteria (e.g., after the first portion of the input satisfies one of the first gesture recognition criteria or the second gesture recognition criteria, the device updates the third gesture recognition criteria by increasing the threshold of the third gesture recognition criteria). In response to detecting the third portion of the input (19034): in accordance with a determination that the third portion of the input satisfies the updated third gesture recognition criteria (e.g., without regard to whether the third portion of the input satisfies the first gesture recognition criteria or the second gesture recognition criteria (e.g., updated or original)), the device changes the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input (e.g., based on the direction and/or magnitude of the third portion of the input) (e.g., while also changing the appearance of the user interface object in accordance with the first object manipulation behavior and the second object manipulation behavior (e.g., even if the third portion of the input does not satisfy the original first gesture recognition criteria and second gesture recognition criteria)); and, in accordance with a determination that the third portion of the input does not satisfy the updated third gesture recognition criteria, the device forgoes changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the third portion of the input (e.g., while changing the appearance of the user interface object in accordance with the first object manipulation behavior and the second object manipulation behavior (e.g., even if the third portion of the input does not satisfy the original first gesture recognition criteria and second gesture recognition criteria)). Updating the object in accordance with the first object manipulation behavior, the second object manipulation behavior, and the third object manipulation behavior in response to detecting a portion of the input, after the second gesture recognition criteria, the updated first gesture recognition criteria, and the updated third gesture recognition criteria have been satisfied, enhances the operability of the device (e.g., by providing the user with the ability to freely manipulate the object using the first object manipulation type, the second object manipulation type, and the third object manipulation type, without requiring the user to provide new input, after the user has established an intent to perform all three object manipulation types by satisfying the increased thresholds). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
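The escalating-threshold scheme described above can be sketched as follows. This is a hypothetical model, not code from the patent; the class name, the gesture labels, and the specific threshold values passed in are illustrative only:

```python
class ManipulationSession:
    """Sketch of the escalating-threshold behavior described above: after the
    first gesture type is recognized, the remaining types must meet raised
    (updated) thresholds; once a type's raised threshold is also met, that
    manipulation behavior applies freely for the rest of the input."""

    def __init__(self, original, updated):
        self.original = original      # e.g. {"rotate": 12, "zoom": 50}
        self.updated = updated        # e.g. {"rotate": 18, "zoom": 90}
        self.unlocked = set()         # gesture types already recognized

    def process(self, movements):
        """movements: {gesture_type: magnitude for this portion of the input}.
        Returns the gesture types whose object-manipulation behavior applies
        to this portion of the input."""
        any_unlocked = bool(self.unlocked)
        for gesture, magnitude in movements.items():
            # Before any gesture is recognized, the original thresholds apply;
            # afterwards, not-yet-recognized gestures face the raised ones.
            threshold = self.updated[gesture] if any_unlocked else self.original[gesture]
            if gesture not in self.unlocked and magnitude >= threshold:
                self.unlocked.add(gesture)
        # Recognized gesture types apply regardless of further thresholds.
        return {g for g in movements if g in self.unlocked}
```

A session might then proceed through the portions of the input described above: a first portion that recognizes one gesture, a second portion that fails the other gesture's updated criteria (so that behavior is forgone), and a third portion that meets the updated criteria, after which both behaviors apply freely.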

In some embodiments (19036), the third portion of the input satisfies the updated third gesture recognition criteria. After updating the appearance of the user interface object based on the third portion of the input (e.g., after the first gesture recognition criteria and the updated second and third gesture recognition criteria have all been satisfied, or after the second gesture recognition criteria and the updated first and third gesture recognition criteria have all been satisfied), the device detects (19038) a fourth portion of the input (e.g., by the same continuously maintained contacts from the first, second, and third portions of the input, or by different contacts detected after termination (e.g., lift-off) of the contacts in the first, second, and third portions of the input). In response to detecting the fourth portion of the input, the device updates (19040) the appearance of the user interface object based on the fourth portion of the input, including: changing the appearance of the user interface object in accordance with the first object manipulation behavior based on the fourth portion of the input; changing the appearance of the user interface object in accordance with the second object manipulation behavior based on the fourth portion of the input; and changing the appearance of the user interface object in accordance with the third object manipulation behavior based on the fourth portion of the input. For example, after the first gesture recognition criteria and the updated second and third gesture recognition criteria have been satisfied, or after the second gesture recognition criteria and the updated first and third gesture recognition criteria have been satisfied, the input can subsequently cause all three types of manipulation behaviors, without regard to the thresholds in the original or updated first, second, and third gesture recognition criteria.

In some embodiments, the fourth portion of the input does not include (19042): input that satisfies the first gesture recognition criteria, input that satisfies the second gesture recognition criteria, or input that satisfies the third gesture recognition criteria. For example, after the first gesture recognition criteria and the updated second and third gesture recognition criteria have been satisfied, or after the second gesture recognition criteria and the updated first and third gesture recognition criteria have been satisfied, the input can subsequently cause all three types of manipulation behaviors, without regard to the thresholds in the original or updated first, second, and third gesture recognition criteria. Requiring multiple simultaneously detected contacts for a gesture enhances the operability of the device (e.g., by helping the user avoid unintentionally performing an object manipulation when providing input with fewer than the required number of simultaneously detected contacts). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments (19044), the first gesture recognition criteria and the second gesture recognition criteria (and the third gesture recognition criteria) all require a first number of simultaneously detected contacts (e.g., two contacts) in order to be satisfied. In some embodiments, a single-finger gesture can also be used for translation, and the single-finger translation threshold is lower than the two-finger translation threshold. In some embodiments, the original and updated movement thresholds set for a two-finger pan gesture are 40 points and 70 points, respectively, of movement by the centroid of the contacts. In some embodiments, the original and updated movement thresholds set for a two-finger rotate gesture are 12 degrees and 18 degrees, respectively, of rotational movement by the contacts. In some embodiments, the original and updated movement thresholds set for a two-finger zoom gesture are 50 points and 90 points (of distance between the contacts), respectively. In some embodiments, the threshold set for a single-finger drag gesture is 30 points.
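The example threshold values quoted above can be expressed as a lookup table. The table values come from the embodiments in the text; the dictionary and function names are illustrative:

```python
# Original vs. updated (escalated) recognition thresholds quoted in the
# embodiments above. Units: points for pan/zoom, degrees for rotation.
GESTURE_THRESHOLDS = {
    #                    original  updated
    "two_finger_pan":    (40,      70),   # points of centroid movement
    "two_finger_rotate": (12,      18),   # degrees of rotational movement
    "two_finger_zoom":   (50,      90),   # points of change in contact distance
}
SINGLE_FINGER_DRAG_THRESHOLD = 30        # points

def satisfies(gesture, magnitude, updated=False):
    """Whether a movement of the given magnitude meets the gesture's original
    or updated (escalated) recognition threshold."""
    original, raised = GESTURE_THRESHOLDS[gesture]
    return magnitude >= (raised if updated else original)
```

For instance, 45 points of centroid movement satisfies the original two-finger pan threshold (40) but not the updated one (70).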

In some embodiments (19046), the first object manipulation behavior changes the zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward each other after the pinch gesture is recognized based on the first gesture recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts around a common locus after the twist/rotate gesture is recognized by the second gesture recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the displayed size of virtual object 11002, as described with reference to Figures 14G-14I, and the second object manipulation behavior changes the rotation angle of virtual object 11002, as described with respect to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes the zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward each other after the pinch gesture is recognized based on the second gesture recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts around a common locus after the twist/rotate gesture is recognized by the first gesture recognition criteria (e.g., original or updated))).

In some embodiments (19048), the first object manipulation behavior changes the zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward each other after the pinch gesture is recognized based on the first gesture recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the user interface object by a one-finger or two-finger drag gesture (e.g., movement of the contacts in a respective direction after the drag gesture is recognized by the second gesture recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the displayed size of virtual object 11002, as described with reference to Figures 14G-14I, and the second object manipulation behavior changes the position of virtual object 11002 in the user interface, as described with respect to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes the zoom level or displayed size of the user interface object (e.g., resizing the object by a pinch gesture (e.g., movement of the contacts toward each other after the pinch gesture is recognized based on the second gesture recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the user interface object by a one-finger or two-finger drag gesture (e.g., movement of the contacts in a respective direction after the drag gesture is recognized by the first gesture recognition criteria (e.g., original or updated))).

In some embodiments (19050), the first object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the object by a one-finger or two-finger drag gesture (e.g., movement of the contacts in a respective direction after the drag gesture is recognized by the first gesture recognition criteria (e.g., original or updated))), and the second object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts around a common locus after the twist/rotate gesture is recognized by the second gesture recognition criteria (e.g., original or updated))). For example, the first object manipulation behavior changes the position of virtual object 11002 in the user interface, and the second object manipulation behavior changes the rotation angle of virtual object 11002, as described with reference to Figures 14B-14E. In some embodiments, the second object manipulation behavior changes the position of the user interface object in the first user interface region (e.g., dragging the object by a one-finger or two-finger drag gesture (e.g., movement of the contacts in a respective direction after the drag gesture is recognized by the second gesture recognition criteria (e.g., original or updated))), and the first object manipulation behavior changes the rotation angle of the user interface object (e.g., changing the viewing perspective of the user interface object about an external axis or an internal axis by a twist/rotate gesture (e.g., movement of the contacts around a common locus after the twist/rotate gesture is recognized by the first gesture recognition criteria (e.g., original or updated))).

In some embodiments (19052), the first portion of the input and the second portion of the input are provided by a plurality of continuously maintained contacts. The device re-establishes (19054) the first gesture recognition criteria and the second gesture recognition criteria (e.g., with the original thresholds) for initiating additional first object manipulation behaviors and second object manipulation behaviors after detecting lift-off of the plurality of continuously maintained contacts. For example, after the contacts lift off, the device re-establishes the rotation, translation, and zoom gesture recognition thresholds for newly detected touch input. Re-establishing the thresholds for input movement after the input has ended by lift-off of the contacts enhances the operability of the device (e.g., by resetting the increased movement thresholds each time a new input is provided, thereby reducing the extent of input needed to perform object manipulations). Reducing the extent of input needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
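The lift-off behavior above can be sketched as a small state holder that reverts escalated thresholds to their original values when all contacts lift off. This is a hypothetical sketch; the class name, method names, and the default escalation factor are illustrative (the 12-to-18-degree rotate thresholds quoted earlier correspond to a factor of 1.5):

```python
class ThresholdState:
    """Sketch: per-gesture recognition thresholds that escalate after a first
    gesture is recognized and are re-established on contact lift-off."""

    def __init__(self, original):
        self.original = dict(original)
        self.current = dict(original)

    def escalate(self, factor=1.5):
        # Applied after a first gesture type is recognized, raising the
        # thresholds that subsequent input must overcome.
        self.current = {g: t * factor for g, t in self.current.items()}

    def on_lift_off(self):
        # All contacts lifted: re-establish the original thresholds so that
        # newly detected touch input again starts from the lower values.
        self.current = dict(self.original)
```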

In some embodiments (19056), the first gesture recognition criteria correspond to rotation about a first axis, and the second gesture recognition criteria correspond to rotation about a second axis that is orthogonal to the first axis. In some embodiments, instead of updating thresholds for different types of gestures, the update also applies to thresholds set for different sub-types of manipulation behavior within one type of manipulation behavior corresponding to the recognized gesture type (e.g., a twist/pivot gesture) (e.g., rotation about a first axis as opposed to rotation about a different axis). For example, once rotation about a first axis is recognized and performed, the set of rotation thresholds for a different axis is updated (e.g., increased) and must be overcome by subsequent input in order to trigger rotation about the different axis. Increasing the threshold of input movement needed to rotate the object about the second axis, once the input movement has increased above the threshold of input movement needed to rotate the object about the first axis, enhances the operability of the device (e.g., by helping the user avoid unintentionally rotating the object about the second axis while attempting to rotate the object about the first axis). Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
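The same escalation applied per rotation axis can be sketched as a one-line rule: after rotation about one axis is recognized, the orthogonal axis gets the raised threshold. The function name is illustrative; the 12 and 18 degree defaults reuse the rotate thresholds quoted earlier:

```python
def axis_thresholds(recognized_axis, original=12, updated=18):
    """Sketch: return the degree thresholds for rotation about the x and y
    axes after rotation about `recognized_axis` has been recognized. The
    orthogonal axis must overcome the raised (updated) threshold."""
    return {axis: (original if axis == recognized_axis else updated)
            for axis in ("x", "y")}
```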

It should be understood that the particular order in which the operations in Figures 19A-19H have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000) are also applicable in an analogous manner to method 19000 described above with respect to Figures 19A-19H. For example, the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described above with reference to method 19000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface regions, fields of view, tactile outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 20000). For brevity, these details are not repeated here.

Figures 20A-20F are flow diagrams illustrating method 20000 for generating an audio alert in accordance with a determination that movement of the device causes a virtual object to move outside the displayed field of view of one or more device cameras. Method 20000 is performed at an electronic device (e.g., device 300, Figure 3, or portable multifunction device 100, Figure 1A) having a display generation component (e.g., a display, a projector, a heads-up display, etc.), one or more input devices (e.g., a touch-sensitive surface, or a touch-screen display that serves as both the display generation component and the touch-sensitive surface), one or more audio output generators, and one or more cameras. Some operations in method 20000 are, optionally, combined and/or the order of some operations is, optionally, changed.

The device displays (20002), via the display generation component, a representation of a virtual object in a first user interface region that includes a representation of a field of view of the one or more cameras (e.g., in response to a request to place the virtual object in an augmented reality view of the physical environment surrounding the device that includes the cameras (e.g., in response to a tap on a "world" button displayed with a staging view of the virtual object)) (e.g., the first user interface region is a user interface that displays the augmented reality view of the physical environment surrounding the device that includes the cameras), wherein the displaying includes maintaining a first spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras (e.g., the virtual object is displayed on the display at an orientation and position such that a fixed angle between the representation of the virtual object and the plane is maintained (e.g., the virtual object appears to remain at a fixed location on the plane or to roll along the plane in the field of view)). For example, as shown in Figure 15V, virtual object 11002 is displayed in a user interface region that includes the field of view 6036 of the one or more cameras.

The device detects (20004) movement of the device that adjusts the field of view of the one or more cameras (e.g., lateral movement and/or rotation of the device that includes the one or more cameras). For example, as described with reference to Figures 15V-15W, movement of device 100 adjusts the field of view of the one or more cameras.

In response to detecting the movement of the device that adjusts the field of view of the one or more cameras (20006): while the field of view of the one or more cameras is adjusted, the device adjusts the display of the representation of the virtual object in the first user interface region in accordance with the first spatial relationship (e.g., orientation and/or position) between the virtual object and the plane detected within the field of view of the one or more cameras; and, in accordance with a determination that the movement of the device causes more than a threshold amount (e.g., 100%, 50%, or 20%) of the virtual object to move outside of the displayed portion of the field of view of the one or more cameras (e.g., because the spatial relationship between the representation of the virtual object and the plane detected within the physical environment captured in the field of view of the one or more cameras is kept fixed during the movement of the device relative to the physical environment), the device generates, via the one or more audio output generators, a first audio alert (e.g., a spoken notification indicating that more than the threshold amount of the virtual object is no longer displayed in the camera view). For example, as described with reference to Figure 15W, audio alert 15118 is generated in response to movement of device 100 that causes virtual object 11002 to move outside of the displayed portion of the field of view 6036 of the one or more cameras. Generating an audio output in accordance with a determination that movement of the device causes the virtual object to move outside of the displayed augmented reality view provides feedback to the user indicating the extent to which the movement of the device affects the display of the virtual object relative to the augmented reality view. Providing improved feedback to the user enhances the operability of the device (e.g., by allowing the user to perceive whether the virtual object has moved off the display, without cluttering the display with additional displayed information and without requiring the user to look at the display), and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
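The alert condition above can be sketched as a simple check on the fraction of the object that remains inside the displayed field of view. This is a hypothetical sketch; the function and parameter names are illustrative, and the threshold values mirror the examples in the text (100%, 50%, or 20%):

```python
def should_alert(visible_fraction, threshold_outside=0.2):
    """Sketch of the alert condition described above.

    visible_fraction: portion of the virtual object (0..1) still inside the
    displayed field of view of the cameras.
    threshold_outside: the threshold amount of the object that must move
    outside the view before an audio alert is generated (e.g., 1.0, 0.5,
    or 0.2 per the embodiments above).
    """
    return (1.0 - visible_fraction) > threshold_outside
```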

In some embodiments, outputting the first audio alert includes (20008) generating an audio output that indicates the amount of the virtual object that remains visible in the displayed portion of the field of view of the one or more cameras (e.g., the amount of the virtual object that remains visible is measured relative to the total size of the virtual object as seen from the current viewing perspective (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output says, "Object x is 20% visible."). For example, in response to movement of device 100 that causes virtual object 11002 to move partially outside of the displayed portion of the field of view 6036 of the one or more cameras, as described with reference to Figures 15X-15Y, audio alert 15126 is generated that includes notification 15128, indicating "chair, 90% visible, occupying 20% of the screen." Generating an audio output that indicates the amount of the virtual object that is visible in the displayed augmented reality view provides feedback to the user (e.g., indicating the extent to which the movement of the device changes how much of the virtual object is visible). Providing improved feedback to the user (e.g., by allowing the user to perceive whether the virtual object has moved off the display, without cluttering the display with additional displayed information and without requiring the user to look at the display) enhances the operability of the device, and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, outputting the first audio alert includes (20010) generating an audio output that indicates the amount of the displayed portion of the field of view that is occupied by the virtual object (e.g., the amount of the augmented reality view of the physical environment that the virtual object occupies (e.g., 20%, 25%, 50%, etc.)) (e.g., the audio output includes a notification that says, "Object x occupies 15% of the world view."). In some embodiments, the audio output also includes a description of the action performed by the user that caused the change in the display state of the virtual object. For example, the audio output includes a notification that says, "Device moved to the left; object x is 20% visible, occupying 15% of the world view." For example, in Figure 15Y, audio alert 15126 is generated that includes notification 15128, indicating "chair, 90% visible, occupying 20% of the screen." Generating an audio output that indicates the amount of the augmented reality view that is occupied by the virtual object provides feedback to the user (e.g., indicating the extent to which the movement of the device changes how much of the augmented reality view is occupied). Providing improved feedback to the user enhances the operability of the device (e.g., by providing information that allows the user to perceive the size of the virtual object relative to the display, without cluttering the display with additional displayed information and without requiring the user to look at the display), and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
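The format of the spoken notification quoted above ("chair, 90% visible, occupying 20% of the screen") can be sketched as a small formatting function. The wording is taken from the examples in the text; the function name is illustrative:

```python
def visibility_announcement(name, visible_fraction, screen_fraction):
    """Sketch: build the spoken notification text described above.

    visible_fraction: portion (0..1) of the object inside the camera view.
    screen_fraction: portion (0..1) of the displayed view the object occupies.
    """
    return (f"{name}, {round(visible_fraction * 100)}% visible, "
            f"occupying {round(screen_fraction * 100)}% of the screen")
```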

In some embodiments, the device detects (20012) an input by a contact at a location on the touch-sensitive surface that corresponds to the representation of the field of view of the one or more cameras (e.g., detecting a tap input or a double-tap input on a portion of the touch screen that displays the augmented reality view of the physical environment). In response to detecting the input, and in accordance with a determination that the input is detected at a first location on the touch-sensitive surface that corresponds to a first portion of the field of view of the one or more cameras that is not occupied by the virtual object, the device generates (20014) a second audio alert (e.g., a click or a buzz indicating a failure to locate the virtual object in the tapped region). For example, as described with reference to Figure 15Z, the device generates audio alert 15130 in response to an input detected at a location on touch screen 112 that corresponds to a portion of the field of view 6036 of the one or more cameras that is not occupied by virtual object 11002. In some embodiments, in response to detecting the input, and in accordance with a determination that the input is detected at a second location that corresponds to a second portion of the field of view of the one or more cameras that is occupied by the virtual object, the device forgoes generating the second audio alert. In some embodiments, instead of generating the second audio alert to indicate that the user failed to locate the virtual object, the device generates a different audio alert indicating that the user has located the virtual object. In some embodiments, instead of generating the second audio alert, the device outputs an audio notification that describes an operation performed on the virtual object (e.g., "Object x is selected.", "Object x is resized to the default size.", "Object x is rotated to the default orientation.", etc.) or the state of the virtual object (e.g., "Object x, 20% visible, occupying 15% of the world view.").

Generating an audio output in response to an input detected at a location corresponding to a portion of the displayed augmented reality view that is not occupied by the virtual object provides feedback to the user (e.g., indicating that input must be provided at a different location in order to obtain information about the virtual object and/or perform an operation). Providing the user with improved feedback enhances the operability of the device (e.g., by providing information that allows the user to perceive whether an input successfully connected with the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, outputting the first audio alert includes generating (20016) an audio output that indicates an operation performed on the virtual object (e.g., before generating the audio output, the device determines the currently selected operation and performs the operation in response to an input (e.g., a double tap) confirming the user's intent to perform the currently selected operation) and a resulting state of the virtual object after the operation is performed. For example, the audio output includes a notification saying, "Device moved left; object x is 20% visible, occupying 15% of the world view," "Object x is rotated 30 degrees clockwise; object is rotated 50 degrees around the y-axis," or "Object x is enlarged by 20% and occupies 50% of the world view." For example, as described with reference to Figures 15AH-15AI, in response to performance of a rotation operation on virtual object 11002, audio alert 15190 is generated, which includes notification 15192 indicating, "Chair rotated five degrees counterclockwise. Chair is now rotated zero degrees relative to the screen." Generating an audio output that indicates an operation performed on the virtual object provides the user with feedback indicating how a provided input affects the virtual object. Providing the user with improved feedback enhances the operability of the device (e.g., by providing information that allows the user to perceive how an operation changes the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
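The announcement format in the examples above (operation, then resulting state) can be sketched as a small string builder. This is an illustrative sketch only; the function name and separator convention are assumptions, not part of the described method.

```python
# Hypothetical sketch of composing the audio notification described above:
# the announcement names the operation performed, followed by the resulting
# state of the virtual object.

def describe_operation(operation, result_state):
    """operation: e.g. 'Object x is rotated 30 degrees clockwise';
    result_state: e.g. 'object is rotated 50 degrees around the y-axis'."""
    return f"{operation}; {result_state}."
```

The returned string would then be handed to a speech-output facility (e.g., a screen-reader announcement).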

In some embodiments (20018), in the audio output of the first audio alert, the resulting state of the virtual object after the operation is performed is described relative to a frame of reference corresponding to the physical environment captured in the field of view of the one or more cameras (e.g., after the object is manipulated (e.g., in response to a touch-based gesture or movement of the device), the device generates speech describing the new state of the object (e.g., rotated 30 degrees, rotated 60 degrees, or moved to the left, relative to the initial position/orientation the virtual object had when it was initially placed in the augmented reality view of the physical environment)). For example, as described with reference to Figures 15AH-15AI, in response to performance of a rotation operation on virtual object 11002, audio alert 15190 is generated, which includes notification 15192 indicating, "Chair rotated five degrees counterclockwise. Chair is now rotated zero degrees relative to the screen." In some embodiments, the operation includes movement of the device relative to the physical environment (e.g., causing movement of the virtual object relative to the representation of the portion of the physical environment captured in the field of view of the one or more cameras), and the speech describes the new state of the virtual object in response to the movement of the device relative to the physical environment. Generating an audio output that indicates the state of the virtual object after an operation is performed on the object provides the user with feedback that allows the user to perceive how the operation changes the virtual object. Providing the user with improved feedback enhances the operability of the device (e.g., by providing information that allows the user to perceive how an operation changes the virtual object, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
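Describing the object's state relative to its initial placement in the physical-environment frame amounts to computing a delta against the pose recorded at placement time. The sketch below is illustrative only; the function name, the yaw-only simplification, and the phrasing are assumptions introduced for illustration.

```python
# Hypothetical sketch of describing the object's rotation relative to the
# reference frame of the physical environment, i.e. relative to the
# orientation the object had when first placed in the AR view.

def rotation_relative_to_world(initial_yaw, current_yaw):
    """Yaw angles in degrees; positive deltas are treated as clockwise."""
    delta = (current_yaw - initial_yaw) % 360
    if delta == 0:
        return "at its initial orientation"
    # Report the shorter direction of rotation.
    direction = "clockwise" if delta <= 180 else "counterclockwise"
    amount = delta if delta <= 180 else 360 - delta
    return f"rotated {amount} degrees {direction}"
```

A full implementation would track position and tilt as well, but each component reduces to the same "delta against the placement pose" pattern.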

In some embodiments, the device detects (20020) additional movement of the device (e.g., lateral movement and/or rotation of the device that includes the one or more cameras) that further adjusts the field of view of the one or more cameras after the first audio alert is generated. For example, as described with respect to Figures 15W-15X, movement of device 100 further adjusts the field of view of the one or more cameras (after the adjustment of the field of view of the one or more cameras that occurred in response to the movement of device 100 from Figure 15V to Figure 15W). In response to detecting the additional movement of the device that further adjusts the field of view of the one or more cameras (20022): as the field of view of the one or more cameras is further adjusted, the device adjusts the display of the representation of the virtual object in the first user interface region in accordance with a first spatial relationship (e.g., orientation and/or position) between the virtual object and a plane detected within the field of view of the one or more cameras; and, in accordance with a determination that the additional movement of the device causes more than a second threshold amount (e.g., 50%, 80%, or 100%) of the virtual object to move into the displayed portion of the field of view of the one or more cameras (e.g., because, during movement of the device relative to the physical environment, the spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras remains fixed), the device generates the first audio alert via the one or more audio output generators (e.g., an audio output that includes a notification indicating that more than the threshold amount of the virtual object has been moved back into the camera view). For example, as described with reference to Figure 15X, in response to movement of device 100 causing virtual object 11002 to move into the displayed portion of the field of view 6036 of the one or more cameras, audio alert 15122 is generated (e.g., including the announcement, "Chair is now projected in the world, 100% visible, occupying 10% of the screen"). Generating an audio output in accordance with a determination that movement of the device causes the virtual object to move into the displayed augmented reality view provides the user with feedback indicating the extent to which the movement of the device affects the display of the virtual object relative to the augmented reality view. Providing the user with improved feedback enhances the operability of the device (e.g., by allowing the user to perceive whether the virtual object has moved into the display, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
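The threshold test described above (more than, e.g., 50%, 80%, or 100% of the object in view) reduces to comparing the visible fraction of the object's on-screen footprint against a threshold. The following is an illustrative sketch under the simplifying assumption of axis-aligned bounding rectangles; the function names are hypothetical.

```python
# Hypothetical sketch of the visibility-threshold check described above.

def visible_fraction(object_rect, view_rect):
    """Fraction of the object's screen-space bounding rectangle
    (x, y, width, height) that lies inside the displayed portion of the
    camera field of view."""
    ox, oy, ow, oh = object_rect
    vx, vy, vw, vh = view_rect
    # Overlap of the two rectangles along each axis, clamped at zero.
    ix = max(0, min(ox + ow, vx + vw) - max(ox, vx))
    iy = max(0, min(oy + oh, vy + vh) - max(oy, vy))
    return (ix * iy) / (ow * oh)

def should_announce_return(object_rect, view_rect, threshold=0.5):
    # Generate the alert when more than the threshold amount (e.g., 50%,
    # 80%, or 100%) of the object has moved into the displayed view.
    return visible_fraction(object_rect, view_rect) > threshold
```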

In some embodiments, while the representation of the virtual object is displayed in the first user interface region and a first object manipulation type, of a plurality of object manipulation types applicable to the virtual object, is currently selected for the virtual object, the device detects (20024) a request to switch to another object manipulation type applicable to the virtual object (e.g., detecting a swipe input by a contact (e.g., including movement of the contact in a horizontal direction) at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras). For example, as described with reference to Figure 15AG, while clockwise rotation control 15170 is currently selected, a swipe input is detected for switching to counterclockwise rotation control 15180 (for rotating virtual object 11002 counterclockwise). In response to detecting the request to switch to another object manipulation type applicable to the virtual object, the device generates (20026) an audio output that names a second object manipulation type among the plurality of object manipulation types applicable to the virtual object (e.g., an audio output that includes a notification saying "rotate the object around the x-axis," "resize the object," or "move the object on the plane," etc.), wherein the second object manipulation type is different from the first object manipulation type. For example, in Figure 15AH, in response to detecting the request described with reference to Figure 15AG, audio alert 15182 is generated, including notification 15184 ("Selected: rotate counterclockwise"). In some embodiments, the device traverses a predefined list of applicable object manipulation types in response to consecutive swipe inputs in the same direction. In some embodiments, in response to detecting a swipe input in the direction opposite to that of the immediately preceding swipe input, the device generates an audio output that includes a notification naming the previously announced object manipulation type applicable to the virtual object (e.g., the type announced before the most recently announced object manipulation type). In some embodiments, the device does not display a corresponding control for each object manipulation type applicable to the virtual object (e.g., buttons or controls are not displayed for operations initiated by gestures (e.g., rotation, resizing, translation, etc.)). Generating an audio output in response to a request to switch object manipulation types provides the user with feedback indicating that the switching operation has been performed. Providing the user with improved feedback enhances the operability of the device (e.g., by providing information confirming that the switching input was successfully executed, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
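Traversing the predefined list of manipulation types with forward and reverse swipes is a cursor over a circular list. The sketch below is illustrative only; the class name, the direction encoding, and the announcement wording are assumptions.

```python
# Hypothetical sketch of the swipe-driven selection behavior described
# above: consecutive swipes in one direction advance through a predefined
# list of applicable manipulation types, and a swipe in the opposite
# direction returns to the previously announced type.

class ManipulationTypeSelector:
    def __init__(self, types):
        self.types = list(types)
        self.index = 0

    def swipe(self, direction):
        """direction: +1 for a forward swipe, -1 for a reverse swipe.
        Returns the announcement naming the newly selected type."""
        self.index = (self.index + direction) % len(self.types)
        return f"Selected: {self.types[self.index]}"
```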

In some embodiments, after generating (20028) the audio output that names the second object manipulation type among the plurality of object manipulation types applicable to the virtual object (e.g., an audio output that includes a notification saying "rotate the object around the x-axis," "resize the object," or "move the object on the plane," etc.), the device detects a request to perform an object manipulation behavior that corresponds to the currently selected object manipulation type (e.g., detecting a double-tap input by a contact at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras). For example, as described with reference to Figure 15AH, a double-tap input is detected for rotating virtual object 11002 counterclockwise. In response to detecting the request to perform the object manipulation behavior that corresponds to the currently selected object manipulation type, the device performs (20030) an object manipulation behavior that corresponds to the second object manipulation type (e.g., rotating the virtual object around the y-axis by 5 degrees, increasing the size of the object by 5%, or moving the object on the plane by 20 pixels) (e.g., adjusting the display of the representation of the virtual object in the first user interface region in accordance with the second object manipulation type). For example, in Figure 15AI, in response to detecting the request described with reference to Figure 15AH, virtual object 11002 is rotated counterclockwise. In some embodiments, in addition to performing the object manipulation behavior that corresponds to the second object manipulation type, the device outputs an audio output that includes a notification indicating the object manipulation behavior performed on the virtual object and the resulting state of the virtual object after the object manipulation behavior is performed. For example, in Figure 15AI, audio output 15190 is generated, which includes notification 15192 ("Chair rotated five degrees counterclockwise. Chair is now rotated zero degrees relative to the screen."). Performing an object manipulation operation in response to an input detected while the operation is selected provides additional control options for performing the operation (e.g., allowing the user to perform the operation by providing a tap input rather than requiring a double-tap input). Providing additional control options for providing input, without cluttering the user interface with additional displayed controls, enhances the operability of the device (e.g., by providing users with a limited ability to provide multi-contact gestures with an option to manipulate the object), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
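The confirm-then-perform step described above can be sketched as a small dispatcher that applies one fixed increment of the currently selected manipulation and returns the accompanying announcement. This is an illustrative sketch; the function name, the state dictionary, and the 5-degree/5-percent increments (taken from the examples in the text) are assumptions.

```python
# Hypothetical sketch of performing the currently selected manipulation in
# response to a confirmation input (e.g., a double tap), then announcing
# the action and the object's resulting state.

def perform_selected_manipulation(selected, state):
    """state holds 'angle' (degrees relative to the screen) and 'scale'
    (percent of the default size). Returns (state, announcement)."""
    if selected == "rotate counterclockwise":
        state["angle"] = (state["angle"] - 5) % 360
        return state, (
            "Chair rotated five degrees counterclockwise. Chair is now "
            f"rotated {state['angle']} degrees relative to the screen."
        )
    if selected == "resize":
        state["scale"] += 5
        return state, f"Chair resized. Now at {state['scale']} percent of default size."
    raise ValueError(f"unsupported manipulation type: {selected}")
```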

In some embodiments, in response to detecting the request to switch to another object manipulation type applicable to the virtual object (20032): in accordance with a determination that a second object manipulation type is a continuously adjustable manipulation type, the device generates an audio alert, along with the audio output naming the second object manipulation type, to indicate that the second object manipulation type is a continuously adjustable manipulation type (e.g., outputting an audio output saying "adjustable" after the audio notification naming the second object manipulation type (e.g., "rotate the object clockwise around the y-axis")); the device detects a request to perform an object manipulation behavior that corresponds to the second object manipulation type, including detecting a swipe input at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras (e.g., after a double-tap input by a contact is detected at a location on the touch-sensitive surface that corresponds to a portion of the first user interface region that displays the representation of the field of view of the one or more cameras); and, in response to detecting the request to perform the object manipulation behavior that corresponds to the second object manipulation type, the device performs the object manipulation behavior that corresponds to the second object manipulation type by an amount that corresponds to a magnitude of the swipe input (e.g., rotating the virtual object around the y-axis by 5 degrees or 10 degrees, increasing the size of the object by 5% or 10%, or moving the object on the plane by 20 pixels or 40 pixels, depending on whether the magnitude of the swipe input is a first amount or a second amount that is greater than the first amount). For example, as described with reference to Figures 15J-15K, while clockwise rotation control 15038 is currently selected, a swipe input is detected for switching to scale control 15064. Audio alert 15066 is generated, which includes notification 15068 ("Scale: adjustable"). As described with reference to Figures 15K-15L, a swipe input is detected for enlarging virtual object 11002, and, in response to the input, a scaling operation is performed on virtual object 11002 (in the illustrative example of Figures 15K-15L, the input for the continuously adjustable manipulation is detected while staging view interface 6010 is displayed, but it should be appreciated that a similar input may be detected at a location on the touch-sensitive surface that corresponds to a portion of a first user interface region that displays the representation of the field of view of the one or more cameras). In some embodiments, in addition to performing the second object manipulation behavior, the device outputs an audio notification indicating the amount of the object manipulation behavior performed on the virtual object and the resulting state of the virtual object after the object manipulation behavior is performed. Performing an object manipulation operation in response to a swipe input provides additional control options for performing the operation (e.g., allowing the user to perform the operation by providing a swipe input rather than requiring a two-contact input). Providing additional control options for providing input, without cluttering the user interface with additional displayed controls (e.g., by providing users with a limited ability to provide multi-contact gestures with an option to manipulate the object), makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
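Mapping swipe magnitude to a first or second (larger) adjustment amount, as described above, can be sketched as a simple threshold function. This is illustrative only; the 50-point threshold and the 5/10 amounts come from the examples in the text and are not prescribed by the method.

```python
# Hypothetical sketch of mapping swipe-input magnitude to the amount of a
# continuously adjustable manipulation (rotation degrees, resize percent,
# or translation pixels), as described above.

def manipulation_amount(swipe_magnitude, small=5, large=10, threshold=50.0):
    """swipe_magnitude is the swipe length in points. Short swipes map to
    the first amount, long swipes to the second, greater amount."""
    return large if swipe_magnitude >= threshold else small
```

A smoother design could scale the amount proportionally to the swipe length instead of using two discrete steps; the text only requires that a greater magnitude produce a greater amount.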

In some embodiments, before the representation of the virtual object is displayed in the first user interface region, the device displays (20034) the representation of the virtual object in a second user interface region (e.g., a staging user interface), wherein the second user interface region does not include the representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane detected in the physical environment captured in the field of view of the cameras). While the representation of the virtual object is displayed in the second user interface region and a first operation, of a plurality of operations applicable to the virtual object, is currently selected for the virtual object, the device detects (20036) a request to switch to another operation applicable to the virtual object (e.g., including a request to switch the object manipulation type applicable to the virtual object in the second user interface region (e.g., resizing, rotation, tilting, etc.) or a user interface operation applicable to the virtual object in the second user interface region (e.g., returning to a 2D user interface, or dropping the object into the augmented reality view of the physical environment)) (e.g., detecting the request includes detecting a swipe input by a contact (e.g., including movement of the contact in a horizontal direction) at a location on the touch-sensitive surface that corresponds to the first user interface region). For example, as described with reference to Figures 15F-15G, while staging user interface 6010 is displayed and tilt-down control 15022 is currently selected, a swipe input is detected for switching to clockwise rotation control 15038. In response to detecting the request to switch to another operation applicable to the virtual object in the second user interface region, the device generates (20038) an audio output that names a second operation among the plurality of operations applicable to the virtual object (e.g., an audio output that includes a notification saying "rotate the object around the x-axis," "resize the object," "tilt the object toward the display," or "display the object in the augmented reality view," etc.), wherein the second operation is different from the first operation. In some embodiments, the device traverses a predefined list of applicable operations in response to consecutive swipe inputs in the same direction. For example, in Figure 15G, in response to detecting the request described with reference to Figure 15F, audio alert 15040 is generated, including notification 15042 ("Selected: rotate clockwise button"). Generating an audio output that names the selected operation type in response to a request to switch operation types provides the user with feedback indicating that the switching input has been successfully received. Providing the user with improved feedback enhances the operability of the device (e.g., by providing information that allows the user to perceive when the selected control changes, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.

In some embodiments, before the representation of the virtual object is displayed in the first user interface region (20040): while the representation of the virtual object is displayed in a second user interface region (e.g., a staging user interface) that does not include the representation of the field of view of the one or more cameras (e.g., the second user interface region is a staging user interface in which the virtual object can be manipulated (e.g., rotated, resized, and moved) without maintaining a fixed relationship to a plane in the physical environment), the device detects a request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras (e.g., a double-tap input is detected while the currently selected operation is "display the object in the augmented reality view," just after the device has output an audio notification naming the currently selected operation in response to a swipe input (e.g., received just before the double-tap input)). For example, as described with reference to Figures 15P-15V, while staging user interface 6010 is displayed and toggle control 6018 is selected, a double-tap input is detected for displaying the representation of virtual object 11002 in the user interface region that includes the representation of the field of view 6036 of the one or more cameras. In response to detecting the request to display the representation of the virtual object in the first user interface region that includes the representation of the field of view of the one or more cameras: the device displays the representation of the virtual object in the first user interface region in accordance with a first spatial relationship between the representation of the virtual object and a plane detected within the physical environment captured in the field of view of the one or more cameras (e.g., when the virtual object is dropped into the physical environment represented in the augmented reality view, the rotation angle and size that the virtual object had in the staging view are maintained in the augmented reality view, and the tilt angle is reset in the augmented reality view in accordance with the orientation of the plane detected in the physical environment captured in the field of view); and the device generates a fourth audio alert indicating that the virtual object has been placed in the augmented reality view relative to the physical environment captured in the field of view of the one or more cameras. For example, as described with reference to Figure 15V, in response to the input for displaying the representation of virtual object 11002 in the user interface region that includes the representation of the field of view 6036 of the one or more cameras, the representation of virtual object 11002 is displayed in the user interface region that includes the field of view 6036 of the one or more cameras, and audio alert 15114 is generated, which includes notification 15116 ("Chair is now projected in the world, 100% visible, occupying 10% of the screen"). Generating an audio output in response to a request to place the object in the augmented reality view provides the user with feedback indicating that the operation of placing the virtual object has been successfully performed. Providing the user with improved feedback enhances the operability of the device (e.g., by providing information that allows the user to perceive that the object is displayed in the augmented reality view, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
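The pose carry-over described above (rotation angle and size preserved from the staging view, tilt reset to the detected plane) can be sketched as a small pose-transfer function. This is an illustrative sketch only; the function name, the dictionary-based pose representation, and the field names are assumptions.

```python
# Hypothetical sketch of placing a staged object into the AR view: yaw
# (rotation about the vertical axis) and scale carry over from the staging
# view, while tilt is reset to match the plane detected in the physical
# environment captured in the camera field of view.

def place_in_ar(staging_pose, plane_tilt):
    """staging_pose: dict with 'yaw' (degrees), 'tilt' (degrees), and
    'scale'; plane_tilt: tilt dictated by the detected plane."""
    return {
        "yaw": staging_pose["yaw"],      # preserved from staging view
        "scale": staging_pose["scale"],  # preserved from staging view
        "tilt": plane_tilt,              # reset to the detected plane
    }
```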

In some embodiments, the third audio alert indicates (20042) information about the appearance of the virtual object relative to the displayed portion of the field of view of the one or more cameras (e.g., the third audio alert includes an audio output that includes a notification saying, "Object x is placed in the world; object x is 30% visible, occupying 90% of the screen."). For example, as described with reference to Figure 15V, audio alert 15114 is generated, which includes notification 15116 ("Chair is now projected in the world, 100% visible, occupying 10% of the screen"). Generating an audio output that indicates the appearance of the virtual object as visible relative to the displayed augmented reality view provides the user with feedback (e.g., indicating the extent to which the placement of the object in the augmented reality view affects the appearance of the virtual object). Providing the user with improved feedback enhances the operability of the device (e.g., by providing information that allows the user to perceive how the object is displayed in the augmented reality view, without cluttering the display with additional displayed information and without requiring the user to view the display), and makes the user-device interface more efficient, which in turn reduces power usage and improves the battery life of the device by enabling the user to use the device more quickly and efficiently.
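Formatting the appearance announcement quoted above is a simple templating step over the object's name, visible percentage, and screen coverage. This sketch is illustrative only; the function name and rounding behavior are assumptions.

```python
# Hypothetical sketch of composing the appearance notification described
# above, from the object's name, the fraction of the object that is
# visible, and the fraction of the screen it occupies.

def appearance_announcement(name, visible_pct, screen_pct):
    """Percentages are given as integers in the range 0-100."""
    return (f"{name} is now projected in the world, {visible_pct}% visible, "
            f"occupying {screen_pct}% of the screen")
```

In practice the two percentages would be computed from the object's projected bounds (e.g., by an intersection-area calculation like the visibility check sketched earlier) and rounded before formatting.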

In some embodiments, in conjunction with placement of the virtual object in the augmented reality view relative to the physical environment captured in the field of view of the one or more cameras, the device generates (20044) a haptic output. For example, when the object is placed on a plane detected in the camera's field of view, the device generates a haptic output indicating that the object has landed on the plane. In some embodiments, the device generates a haptic output when the object reaches a predefined default size during resizing of the object. In some embodiments, the device generates a haptic output for each operation performed on the virtual object (e.g., for each rotation by a preset angular amount, for dragging the virtual object onto a different plane, for resetting the object to its original orientation and/or size, etc.). In some embodiments, these haptic outputs precede corresponding audio alerts that describe the operation performed and the resulting state of the virtual object. For example, as described with reference to Figure 15V, haptic output 15118 is generated in conjunction with placement of virtual object 11002 in the field of view 6036 of the one or more cameras. Generating a haptic output in conjunction with placing the virtual object relative to the physical environment captured by the one or more cameras provides feedback to the user (e.g., indicating that the operation of placing the virtual object was performed successfully). Providing the user with improved feedback enhances the operability of the device (e.g., by providing sensory information that allows the user to perceive that placement of the virtual object has occurred, without cluttering the user interface with displayed information) and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.
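As an illustrative sketch only (the trigger condition and tolerance are assumptions, not the disclosed implementation), the haptic fired when a resize gesture reaches the predefined default size can be modeled as detecting when the sequence of simulated sizes crosses or lands on that default value:

```python
def haptic_events_during_resize(sizes, default_size, tol=1e-6):
    """Return indices in a resize gesture's size sequence at which a haptic
    output would fire because the object's simulated size crossed, or landed
    on, the predefined default size. Hypothetical helper for illustration.
    """
    events = []
    prev = sizes[0]
    for i, s in enumerate(sizes[1:], start=1):
        # Sign change means the size passed through the default value.
        crossed = (prev - default_size) * (s - default_size) < 0
        # Landing exactly on the default (within tolerance) also fires once.
        landed = abs(s - default_size) <= tol
        if crossed or (landed and abs(prev - default_size) > tol):
            events.append(i)
        prev = s
    return events
```

For instance, shrinking through sizes `[1.2, 1.1, 1.0, 0.9]` with a default of `1.0` fires a single haptic at the step that reaches the default size.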

In some embodiments, the device displays (20046) a first control at a first location in the first user interface area (e.g., among a plurality of controls displayed at different locations in the first user interface area) while displaying the representation of the field of view of the one or more cameras. In accordance with a determination that control fade criteria are satisfied (e.g., the control fade criteria are satisfied when the first user interface area has been displayed for at least a threshold amount of time without a touch input being detected on the touch-sensitive surface), the device ceases (20048) to display the first control (e.g., and all other controls in the first user interface area) in the first user interface area, while maintaining display of the representation of the field of view of the one or more cameras in the first user interface area (e.g., the controls are not redisplayed when the user moves the device relative to the physical environment). While displaying the first user interface area without displaying the first control in the first user interface area, the device detects (20050) a touch input at a respective location on the touch-sensitive surface that corresponds to the first location in the first user interface area. In response to detecting the touch input, the device generates (20052) a fifth audio alert that includes an audio output specifying an operation corresponding to the first control (e.g., "return to staging view" or "rotate object around y-axis"). In some embodiments, in response to detecting the touch input, the device also redisplays the first control at the first location. In some embodiments, once the user knows the location of a control on the display, redisplaying the control and making it the currently selected control when a touch input is made at the control's usual location provides a faster way to access the control than browsing the available controls with a series of swipe inputs. Automatically ceasing to display a control in response to determining that the control fade criteria are satisfied reduces the number of inputs needed to cease displaying the control. Reducing the number of inputs needed to perform an operation enhances the operability of the device and makes the user-device interface more efficient, which in turn reduces power usage and extends the battery life of the device by enabling the user to use the device more quickly and efficiently.

It should be understood that the particular order in which the operations in Figures 20A-20F have been described is merely an example and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000) are also applicable in an analogous manner to method 20000 described above with respect to Figures 20A-20F. For example, the contacts, inputs, virtual objects, user interface areas, fields of view, haptic outputs, movements, and/or animations described above with reference to method 20000 optionally have one or more of the characteristics of the contacts, inputs, virtual objects, user interface areas, fields of view, haptic outputs, movements, and/or animations described herein with reference to other methods described herein (e.g., methods 800, 900, 1000, 16000, 17000, 18000, and 19000). For brevity, these details are not repeated here.

The operations described above with reference to Figures 8A-8E, 9A-9D, 10A-10D, 16A-16G, 17A-17D, 18A-18I, 19A-19H, and 20A-20F are, optionally, implemented by the components depicted in Figures 1A-1B. For example, display operations 802, 806, 902, 906, 910, 1004, 1008, 16004, 17004, 18002, 19002, and 20002; detection operations 804, 904, 908, 17006, 18004, 19004, and 20004; changing operation 910; receiving operations 1002, 1006, 16002, and 17002; ceasing operation 17008; rotating operation 18006; updating operation 19006; adjusting operation 20006; and generating operation 20006 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186, and determines whether a first contact at a first location on the touch-sensitive surface (or whether rotation of the device) corresponds to a predefined event or sub-event, such as selection of an object on a user interface, or rotation of the device from one orientation to another. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in Figures 1A-1B.
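The sorter/recognizer/handler flow described above can be sketched in miniature: event information is delivered to recognizers, each recognizer compares it against an event definition, and a matching recognizer activates its associated handler. The names and shapes below are illustrative only; the actual components (event sorter 170, event recognizer 180, event handler 190) are objects of the device's UI framework:

```python
class EventRecognizer:
    """Minimal sketch of an event recognizer paired with its handler."""

    def __init__(self, definition, handler):
        self.definition = definition  # predicate over the event information
        self.handler = handler        # activated when the definition matches

    def recognize(self, event):
        if self.definition(event):
            return self.handler(event)
        return None


def dispatch(event, recognizers):
    """Deliver the event to the first recognizer whose definition matches,
    in the spirit of event dispatcher module 174."""
    for recognizer in recognizers:
        result = recognizer.recognize(event)
        if result is not None:
            return result
    return None
```

For example, a "rotate" event bypasses a tap recognizer and activates the rotation handler, while a "tap" event activates the tap handler.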

The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and the various described embodiments with various modifications as are suited to the particular uses contemplated.

Claims (57)

1. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
detecting a first movement of the one or more cameras while displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a first portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
2. The method of claim 1, comprising:
while displaying the representation of the virtual object in the first set of visual properties and the first orientation, detecting that the object placement criteria are satisfied.
3. The method of claim 2, comprising:
in response to detecting that the object placement criteria are satisfied, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual properties to having the second set of visual properties.
4. The method of claim 2, wherein detecting that the object placement criteria are satisfied comprises one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received.
5. The method of claim 1, comprising:
detecting a second movement of the one or more cameras while displaying the representation of the virtual object in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the second movement of the device, when the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the device and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining the representation of the virtual object displayed in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras.
6. The method of claim 1, comprising:
in accordance with a determination that the object placement criteria are satisfied, generating a haptic output in conjunction with displaying the representation of the virtual object having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
7. The method of claim 1, comprising:
while displaying the representation of the virtual object in the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update regarding at least a position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving an update regarding at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least a position and/or an orientation of the representation of the virtual object in accordance with the update.
8. The method of claim 1, wherein:
the first set of visual attributes comprises a first size and a first level of translucency; and
the second set of visual attributes includes a second size different from the first size, and a second level of translucency lower than the first level of translucency.
9. The method of claim 1, wherein:
while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras, receiving the request to display the virtual object in the first user interface area that includes at least a portion of the field of view of the one or more cameras, and
the first orientation corresponds to an orientation of the virtual object when the virtual object is displayed in the respective user interface when the request is received.
10. The method of claim 1, wherein the first orientation corresponds to a predefined orientation.
11. The method of claim 1, comprising:
detecting a request to change a simulated physical dimension of the virtual object from a first simulated physical dimension to a second simulated physical dimension relative to the physical environment captured in the field of view of the one or more cameras while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical dimensions of the virtual object:
gradually changing a display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical size of the virtual object from the first simulated physical size to the second simulated physical size; and
in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension during a gradual change in the displayed size of the representation of the virtual object in the first user interface region, generating a haptic output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension.
12. The method of claim 11, comprising:
while displaying the virtual object in the first user interface area at the second simulated physical dimension of the virtual object that is different from the predefined simulated physical dimension, detecting a request to return the virtual object to the predefined simulated physical dimension; and
in response to detecting the request to return the virtual object to the predefined simulated physical dimension, changing the display size of the representation of the virtual object in the first user interface area in accordance with a change in a simulated physical dimension of the virtual object to the predefined simulated physical dimension.
13. The method of claim 1, comprising:
selecting a plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a third portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a fourth portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the third portion of the physical environment is different from the fourth portion of the physical environment and the first plane is different from the second plane.
14. The method of claim 1, comprising:
displaying a snapshot affordance while the virtual object is displayed in the first user interface area in the second set of visual attributes and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image including a current view of the representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras and having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
15. The method of claim 1, comprising:
displaying one or more control affordances in the first user interface area with the representation of the virtual object having the second set of visual attributes; and
detecting that control fade criteria are satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
16. The method of claim 1, comprising:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
17. A computer system, comprising:
a display generation section;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
detecting a first movement of the one or more cameras while displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a first portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
18. The computer system of claim 17, wherein the one or more programs include instructions for:
while displaying the representation of the virtual object in the first set of visual properties and the first orientation, detecting that the object placement criteria are satisfied.
19. The computer system of claim 18, wherein the one or more programs include instructions for:
in response to detecting that the object placement criteria are satisfied, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual properties to having the second set of visual properties.
20. The computer system of claim 18, wherein detecting that the object placement criteria are satisfied comprises one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received.
21. The computer system of claim 17, wherein the one or more programs include instructions for:
detecting a second movement of the one or more cameras while displaying the representation of the virtual object in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the second movement of the device, when the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the device and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining the representation of the virtual object displayed in the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras.
22. The computer system of claim 17, wherein the one or more programs include instructions for:
in accordance with a determination that the object placement criteria are satisfied, generating a haptic output in conjunction with displaying the representation of the virtual object having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
23. The computer system of claim 17, wherein the one or more programs include instructions for:
while displaying the representation of the virtual object in the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update regarding at least a position or orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving an update regarding at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least a position and/or an orientation of the representation of the virtual object in accordance with the update.
24. The computer system of claim 17, wherein:
the first set of visual attributes comprises a first size and a first level of translucency; and
the second set of visual attributes includes a second size different from the first size, and a second level of translucency lower than the first level of translucency.
25. The computer system of claim 17, wherein:
while the virtual object is displayed in a respective user interface that does not include at least a portion of the field of view of the one or more cameras, receiving the request to display the virtual object in the first user interface area that includes at least a portion of the field of view of the one or more cameras, and
the first orientation corresponds to an orientation of the virtual object when the virtual object is displayed in the respective user interface when the request is received.
26. The computer system of claim 17, wherein the first orientation corresponds to a predefined orientation.
27. The computer system of claim 17, wherein the one or more programs include instructions for:
detecting a request to change a simulated physical dimension of the virtual object from a first simulated physical dimension to a second simulated physical dimension relative to the physical environment captured in the field of view of the one or more cameras while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical dimensions of the virtual object:
gradually changing a display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical size of the virtual object from the first simulated physical size to the second simulated physical size; and
in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension during a gradual change in the displayed size of the representation of the virtual object in the first user interface region, generating a haptic output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension.
28. The computer system of claim 17, wherein the one or more programs include instructions for:
while displaying the virtual object in the first user interface area at the second simulated physical dimension of the virtual object that is different from the predefined simulated physical dimension, detecting a request to return the virtual object to the predefined simulated physical dimension; and
in response to detecting the request to return the virtual object to the predefined simulated physical dimension, changing the display size of the representation of the virtual object in the first user interface area in accordance with a change in a simulated physical dimension of the virtual object to the predefined simulated physical dimension.
29. The computer system of claim 17, wherein the one or more programs include instructions for:
selecting a plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a third portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a fourth portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the third portion of the physical environment is different from the fourth portion of the physical environment and the first plane is different from the second plane.
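The plane-selection logic of this claim (pick the detected plane underlying whichever portion of the environment the representation is displayed over when the placement criteria are met) can be sketched minimally; the 2D bounding-box data model and all identifiers below are illustrative assumptions, not the claimed implementation:

```python
def select_plane(detected_planes, object_screen_point):
    """Select, from the plurality of detected planes, the plane under the
    portion of the camera view over which the object representation is
    displayed when the placement criteria are satisfied."""
    x, y = object_screen_point
    for plane in detected_planes:
        x0, y0, x1, y1 = plane["bbox"]  # plane's projected screen region
        if x0 <= x <= x1 and y0 <= y <= y1:
            return plane["id"]
    return None  # no detected plane under that portion of the view
```

With the object held over different portions of the view, different planes are selected, mirroring the first-plane/second-plane branches of the claim.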
30. The computer system of claim 17, wherein the one or more programs include instructions for:
displaying a snapshot affordance while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image including a current view of the representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras and having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
31. The computer system of claim 17, wherein the one or more programs include instructions for:
displaying one or more control affordances in the first user interface area with the representation of the virtual object having the second set of visual attributes; and
detecting that a control fade criterion is satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
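The control-fade behavior of this claim can be modeled as a small state sketch (the class name, the 3-second timeout, and the injected clock are illustrative assumptions):

```python
class ControlFade:
    """Control affordances cease to be displayed after a period without
    interaction, while the virtual object itself stays displayed."""

    def __init__(self, fade_timeout=3.0):
        self.fade_timeout = fade_timeout
        self.last_interaction = 0.0
        self.controls_shown = True
        self.object_shown = True  # the representation is never hidden here

    def interact(self, now):
        # Any input re-displays the controls and resets the idle timer.
        self.last_interaction = now
        self.controls_shown = True

    def tick(self, now):
        # Control fade criterion: enough idle time has elapsed.
        if now - self.last_interaction >= self.fade_timeout:
            self.controls_shown = False
        return self.controls_shown
```

After the timeout elapses without interaction the controls fade, yet `object_shown` remains true, matching the "while continuing to display the representation" limitation.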
32. The computer system of claim 17, wherein the one or more programs include instructions for:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
33. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
detecting a first movement of the one or more cameras while displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a first portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the first movement of the one or more cameras, displaying the representation of the virtual object with the first set of visual attributes and the first orientation on a second portion of the physical environment captured in the field of view of the one or more cameras, wherein the second portion of the physical environment is different from the first portion of the physical environment.
34. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
while displaying the representation of the virtual object with the first set of visual attributes and the first orientation, detecting that the object placement criteria are satisfied.
35. The non-transitory computer readable storage medium of claim 34, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
in response to detecting that the object placement criteria are satisfied, displaying, via the display generation component, an animated transition that shows the representation of the virtual object moving from the first orientation to the second orientation and changing from having the first set of visual properties to having the second set of visual properties.
36. The non-transitory computer-readable storage medium of claim 34, wherein detecting that the object placement criteria are satisfied comprises one or more of:
detecting that a plane has been identified in the field of view of the one or more cameras;
detecting less than a threshold amount of movement between the device and the physical environment for at least a threshold amount of time; and
detecting that at least a predetermined amount of time has elapsed since the request to display the virtual object in the first user interface area was received.
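The disjunctive placement criteria listed above can be sketched as one boolean check (Python for illustration; the parameter names and numeric thresholds are assumptions, not recited values):

```python
def object_placement_criteria_met(plane_identified, stable_duration,
                                  elapsed_since_request,
                                  stability_window=0.5, settle_delay=2.0):
    """The criteria are satisfied when ANY of the recited conditions
    holds: a plane has been identified in the camera view, OR movement
    between the device and the environment stayed below a threshold for
    at least stability_window seconds, OR settle_delay seconds have
    elapsed since the request to display the virtual object."""
    return (plane_identified
            or stable_duration >= stability_window
            or elapsed_since_request >= settle_delay)
```

Any one condition suffices; only when all three fail do the criteria remain unsatisfied.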
37. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
detecting a second movement of the one or more cameras while displaying the representation of the virtual object with the second set of visual attributes and the second orientation on a third portion of the physical environment captured in the field of view of the one or more cameras; and
in response to detecting the second movement of the one or more cameras, while the physical environment captured in the field of view of the one or more cameras moves in accordance with the second movement of the one or more cameras and the second orientation continues to correspond to the plane in the physical environment detected in the field of view of the one or more cameras, maintaining display of the representation of the virtual object with the second set of visual attributes and the second orientation on the third portion of the physical environment captured in the field of view of the one or more cameras.
38. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
in accordance with a determination that the object placement criteria are satisfied, generate a haptic output in conjunction with displaying the representation of the virtual object having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
39. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
while displaying the representation of the virtual object with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras, receiving an update regarding at least a position or an orientation of the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to receiving the update regarding at least the position or the orientation of the plane in the physical environment detected in the field of view of the one or more cameras, adjusting at least one of a position or an orientation of the representation of the virtual object in accordance with the update.
40. The non-transitory computer-readable storage medium of claim 33, wherein:
the first set of visual attributes comprises a first size and a first level of translucency; and
the second set of visual attributes comprises a second size different from the first size, and a second level of translucency lower than the first level of translucency.
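The two attribute sets can be sketched concretely (all values are illustrative assumptions; the claim only requires that the sizes differ and that the placed translucency be lower):

```python
# Pre-placement ("staging") set: first size, higher translucency.
STAGING_ATTRS = {"size": 0.8, "translucency": 0.5}
# Placed set: a different size and a lower translucency.
PLACED_ATTRS = {"size": 1.0, "translucency": 0.0}

def visual_attributes(placed):
    """Return the visual-attribute set for the current display state."""
    return PLACED_ATTRS if placed else STAGING_ATTRS
```

The placed object thus appears more opaque and at a different display size than its staging preview.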
41. The non-transitory computer-readable storage medium of claim 33, wherein:
the request to display the virtual object in the first user interface area that includes at least a portion of the field of view of the one or more cameras is received while the virtual object is displayed in a respective user interface that does not include a portion of the field of view of the one or more cameras; and
the first orientation corresponds to an orientation of the virtual object as displayed in the respective user interface when the request is received.
42. The non-transitory computer-readable storage medium of claim 33, wherein the first orientation corresponds to a predefined orientation.
43. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
detecting a request to change a simulated physical dimension of the virtual object from a first simulated physical dimension to a second simulated physical dimension relative to the physical environment captured in the field of view of the one or more cameras while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras; and
in response to detecting the request to change the simulated physical dimension of the virtual object:
gradually changing a display size of the representation of the virtual object in the first user interface area in accordance with a gradual change in the simulated physical dimension of the virtual object from the first simulated physical dimension to the second simulated physical dimension; and
in accordance with a determination that the simulated physical dimension of the virtual object has reached a predefined simulated physical dimension during the gradual change in the display size of the representation of the virtual object in the first user interface area, generating a haptic output to indicate that the simulated physical dimension of the virtual object has reached the predefined simulated physical dimension.
44. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
while displaying the virtual object in the first user interface area at the second simulated physical dimension of the virtual object that is different from the predefined simulated physical dimension, detecting a request to return the virtual object to the predefined simulated physical dimension; and
in response to detecting the request to return the virtual object to the predefined simulated physical dimension, changing the display size of the representation of the virtual object in the first user interface area in accordance with a change in a simulated physical dimension of the virtual object to the predefined simulated physical dimension.
45. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
selecting a plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes according to respective positions and orientations of the one or more cameras relative to the physical environment, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a third portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a fourth portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the third portion of the physical environment is different from the fourth portion of the physical environment and the first plane is different from the second plane.
46. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
displaying a snapshot affordance while the virtual object is displayed in the first user interface area with the second set of visual attributes and the second orientation; and
in response to activation of the snapshot affordance, capturing a snapshot image including a current view of the representation of the virtual object, the representation of the virtual object being located at a placement position in the physical environment in the field of view of the one or more cameras and having the second set of visual attributes and the second orientation, the second orientation corresponding to the plane in the physical environment detected in the field of view of the one or more cameras.
47. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
displaying one or more control affordances in the first user interface area with the representation of the virtual object having the second set of visual attributes; and
detecting that a control fade criterion is satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
48. The non-transitory computer readable storage medium of claim 33, wherein the one or more programs include instructions that, when executed by the computer system, cause the computer system to:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
49. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane.
50. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane.
51. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein selecting the plane comprises:
in accordance with a determination that the object placement criteria are satisfied when displaying the representation of the virtual object on a first portion of the physical environment captured in the field of view of the one or more cameras, selecting a first plane of a plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes; and
in accordance with a determination that the object placement criteria are satisfied when the representation of the virtual object is displayed on a second portion of the physical environment captured in the field of view of the one or more cameras, selecting a second plane of the plurality of planes detected in the physical environment in the field of view of the one or more cameras as the plane for setting the second orientation of the representation of the virtual object having the second set of visual attributes, wherein the first portion of the physical environment is different from the second portion of the physical environment and the first plane is different from the second plane.
52. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
while displaying the representation of the virtual object having the second set of visual properties and the second orientation, displaying one or more control affordances with the representation of the virtual object;
detecting that a control fade criterion is satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
53. A computer system, comprising:
a display generation component;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
while displaying the representation of the virtual object having the second set of visual properties and the second orientation, displaying one or more control affordances with the representation of the virtual object;
detecting that a control fade criterion is satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
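The placement logic recited in claim 53 — a staging appearance with a scene-independent orientation until a placement location is identified, then a placed appearance oriented to a detected plane — can be sketched as follows. This is a minimal illustration under assumptions, not the claimed implementation: `detect_plane`, the `VisualStyle` names, and the fixed staging pose are hypothetical stand-ins, since the claim does not specify how the placement location is identified.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class VisualStyle(Enum):
    STAGING = auto()   # first set of visual attributes (e.g. translucent preview)
    PLACED = auto()    # second set of visual attributes (e.g. opaque, shadowed)

@dataclass
class Placement:
    style: VisualStyle
    orientation: Tuple[float, float, float]  # rendered pose of the virtual object

# Hypothetical helper: returns the detected plane's orientation, or None while
# plane detection has not yet identified a placement location in the camera view.
def detect_plane(camera_frame: dict) -> Optional[Tuple[float, float, float]]:
    return camera_frame.get("plane_orientation")

# First orientation: fixed relative to the device, independent of which portion
# of the physical environment is in the cameras' field of view.
DEVICE_RELATIVE_POSE = (0.0, 0.0, 0.0)

def place_virtual_object(camera_frame: dict) -> Placement:
    """Choose visual attributes and orientation per the object placement criteria."""
    plane = detect_plane(camera_frame)
    if plane is None:
        # Criteria not satisfied: staging look, scene-independent orientation.
        return Placement(VisualStyle.STAGING, DEVICE_RELATIVE_POSE)
    # Criteria satisfied: placed look, orientation matched to the detected plane.
    return Placement(VisualStyle.PLACED, plane)
```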
54. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras;
while displaying the representation of the virtual object having the second set of visual properties and the second orientation, displaying one or more control affordances with the representation of the virtual object;
detecting that a control fade criterion is satisfied while displaying the one or more control affordances with the representation of the virtual object having the second set of visual attributes; and
in response to detecting that the control fade criteria are satisfied, ceasing to display the one or more control affordances while continuing to display the representation of the virtual object having the second set of visual attributes in the first user interface area that includes the field of view of the one or more cameras.
55. A method, comprising:
at a device having a display generation component, one or more input devices, and one or more cameras:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties and a second orientation, the second set of visual properties being different from the first set of visual properties, the second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein the method further comprises:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
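The calibration gating that claim 55 adds — prompting the user to move the device before the virtual object is shown whenever calibration criteria are not yet satisfied — can be sketched as a minimal ordered sequence of UI steps. `frames_before_object` and the `calibration_complete` flag are hypothetical names; the claim does not define the calibration criteria themselves.

```python
from typing import Iterable, Iterator

def frames_before_object(calibrated: bool, frames: Iterable[dict]) -> Iterator[str]:
    """Yield UI steps in the claim's order: prompt (if needed), then display.

    If the calibration criteria are not satisfied, a prompt to move the device
    relative to the physical environment precedes display of the virtual object.
    """
    if not calibrated:
        yield "prompt: move device relative to the physical environment"
        # Hypothetical calibration loop: consume camera frames until enough
        # device motion has been observed to satisfy the calibration criteria.
        for frame in frames:
            if frame.get("calibration_complete"):
                break
    yield "display: virtual object over camera view"
```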
56. A computer system, comprising:
a display generation section;
one or more input devices;
one or more cameras;
one or more processors; and
memory storing one or more programs, wherein the one or more programs are configured to be executed by the one or more processors, the one or more programs including instructions for:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties different from the first set of visual properties and a second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein the one or more programs further include instructions for:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
57. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computer system with a display generation component, one or more input devices, and one or more cameras, cause the computer system to:
receiving a request to display a virtual object in a first user interface area, the first user interface area including at least a portion of a field of view of the one or more cameras;
in response to the request to display the virtual object in the first user interface area, displaying, via the display generation component, a representation of the virtual object over at least a portion of the field of view of the one or more cameras included in the first user interface area, wherein the field of view of the one or more cameras is a view of a physical environment in which the one or more cameras are located, and wherein displaying the representation of the virtual object comprises:
in accordance with a determination that object placement criteria are not satisfied, displaying the representation of the virtual object having a first set of visual attributes and a first orientation, wherein the object placement criteria require that a placement location of the virtual object be identified in the field of view of the one or more cameras in order for the object placement criteria to be satisfied, the first orientation being independent of which portion of the physical environment is displayed in the field of view of the one or more cameras; and
in accordance with a determination that the object placement criteria are satisfied, displaying the representation of the virtual object having a second set of visual properties different from the first set of visual properties and a second orientation corresponding to a plane in the physical environment detected in the field of view of the one or more cameras, wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to:
in response to the request to display the virtual object in the first user interface area: prior to displaying the representation of the virtual object on at least a portion of the field of view of the one or more cameras included in the first user interface area, in accordance with a determination that calibration criteria are not satisfied, displaying a prompt for the user to move the device relative to the physical environment.
CN201911078900.7A · 2018-01-24 · 2018-09-29 · Apparatus, method and graphical user interface for system-level behavior of 3D models · Pending · CN110851053A (en)

Applications Claiming Priority (11)

Application Number · Priority Date · Filing Date · Title
US201862621529P · 2018-01-24 · 2018-01-24
US62/621,529 · 2018-01-24
US201862679951P · 2018-06-03 · 2018-06-03
US62/679,951 · 2018-06-03
DKPA201870346 · 2018-06-11
DKPA201870347 · 2018-06-11
DKPA201870346A · DK201870346A1 (en) · 2018-01-24 · 2018-06-11 · Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
DKPA201870348 · 2018-06-11
DKPA201870348A · DK180842B1 (en) · 2018-01-24 · 2018-06-11 · Devices, procedures, and graphical user interfaces for System-Wide behavior for 3D models
DKPA201870347A · DK201870347A1 (en) · 2018-01-24 · 2018-06-11 · Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
CN201811165504.3A · CN110069190B (en) · 2018-01-24 · 2018-09-29 · Device, method and graphical user interface for system-level behavior of 3D models

Related Parent Applications (1)

Application Number · Title · Priority Date · Filing Date
CN201811165504.3A · Division · CN110069190B (en) · 2018-01-24 · 2018-09-29 · Device, method and graphical user interface for system-level behavior of 3D models

Publications (1)

Publication Number · Publication Date
CN110851053A true · 2020-02-28

Family

ID=67365888

Family Applications (2)

Application NumberTitlePriority DateFiling Date
CN201911078900.7APendingCN110851053A (en)2018-01-242018-09-29 Apparatus, method and graphical user interface for system-level behavior of 3D models
CN201811165504.3AActiveCN110069190B (en)2018-01-242018-09-29 Device, method and graphical user interface for system-level behavior of 3D models

Family Applications After (1)

Application NumberTitlePriority DateFiling Date
CN201811165504.3AActiveCN110069190B (en)2018-01-242018-09-29 Device, method and graphical user interface for system-level behavior of 3D models

Country Status (2)

Country · Link
JP (1) · JP6745852B2 (en)
CN (2) · CN110851053A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN111672121A (en) * · 2020-06-11 · 2020-09-18 · Tencent Technology (Shenzhen) Co., Ltd. · Virtual object display method and device, computer equipment and storage medium
CN116301418A (en) * · 2022-12-06 · 2023-06-23 · BOE Technology Group Co., Ltd. · Trigger method and device for tactile waveform, and electronic equipment

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN114845122B (en) · 2018-05-07 · 2024-04-30 · Apple Inc. · User interface for viewing live video feeds and recording video
US10939047B2 (en) · 2019-07-22 · 2021-03-02 · Himax Technologies Limited · Method and apparatus for auto-exposure control in a depth sensing system
TWI722542B (en) * · 2019-08-22 · 2021-03-21 · Himax Technologies, Inc. · Method and apparatus for performing auto-exposure control in depth sensing system including projector
US12175010B2 (en) * · 2019-09-28 · 2024-12-24 · Apple Inc. · Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
CN110865704B (en) * · 2019-10-21 · 2021-04-27 · Zhejiang University · A gesture interaction device and method for a 360° suspended light field three-dimensional display system
US11348253B2 (en) * · 2020-01-09 · 2022-05-31 · Alibaba Group Holding Limited · Single-channel and multi-channel source separation enhanced by lip motion
WO2021161719A1 (en) * · 2020-02-12 · 2021-08-19 · Panasonic IP Management Co., Ltd. · Nursing care equipment provision assistance system, nursing care equipment provision assistance method, and program
CN111340962B (en) * · 2020-02-24 · 2023-08-15 · Vivo Mobile Communication Co., Ltd. · Control method, electronic device and storage medium
WO2021247872A1 (en) * · 2020-06-03 · 2021-12-09 · Apple Inc. · Camera and visitor user interfaces
JP6801138B1 (en) * · 2020-07-16 · 2020-12-16 · Virtual Cast, Inc. · Terminal device, virtual object operation method, and virtual object operation program
JP6919050B1 (en) * · 2020-12-16 · 2021-08-11 · Akatsuki Inc. · Game system, program and information processing method
CN112419511B (en) * · 2020-12-26 · 2024-02-13 · Dong Liping · Three-dimensional model file processing method and device, storage medium and server
US20220365667A1 (en) · 2021-05-15 · 2022-11-17 · Apple Inc. · User interfaces for managing accessories
US11941750B2 (en) * · 2022-02-11 · 2024-03-26 · Shopify Inc. · Augmented reality enabled dynamic product presentation
US12379827B2 (en) · 2022-06-03 · 2025-08-05 · Apple Inc. · User interfaces for managing accessories
WO2025169962A1 (en) * · 2024-02-09 · 2025-08-14 · NEC Corporation · Portable terminal, augmented reality object display control method, and program
CN119942021A (en) * · 2024-12-16 · 2025-05-06 · Jianghuai Frontier Technology Collaborative Innovation Center · A visualization method and device for three-dimensional laser scanner point clouds

Citations (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US20140285522A1 (en) * · 2013-03-25 · 2014-09-25 · Qualcomm Incorporated · System and method for presenting true product dimensions within an augmented real-world setting
CN104081317A (en) * · 2012-02-10 · 2014-10-01 · Sony Corporation · Image processing device, and computer program product

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US20080071559A1 (en) * · 2006-09-19 · 2008-03-20 · Juha Arrasvuori · Augmented reality assisted shopping
JP5573238B2 (en) * · 2010-03-04 · 2014-08-20 · Sony Corporation · Information processing apparatus, information processing method and program
JP5799521B2 (en) * · 2011-02-15 · 2015-10-28 · Sony Corporation · Information processing apparatus, authoring method, and program
US10078384B2 (en) * · 2012-11-20 · 2018-09-18 · Immersion Corporation · Method and apparatus for providing haptic cues for guidance and alignment with electrostatic friction
TWI600322B (en) * · 2014-09-02 · 2017-09-21 · Apple Inc. · Method for operating an electronic device with an integrated camera and related electronic device and non-transitory computer readable storage medium
CN104486430A (en) * · 2014-12-18 · 2015-04-01 · Beijing Qihoo Technology Co., Ltd. · Method, device and client for realizing data sharing in mobile browser client
TWI567691B (en) * · 2016-03-07 · 2017-01-21 · 粉迷科技股份有限公司 · Method and system for editing scene in three-dimensional space
CN105824412A (en) * · 2016-03-09 · 2016-08-03 · Beijing Qihoo Technology Co., Ltd. · Method and device for presenting customized virtual special effects on mobile terminal
US10176641B2 (en) * · 2016-03-21 · 2019-01-08 · Microsoft Technology Licensing, LLC · Displaying three-dimensional virtual objects based on field of view
WO2017208637A1 (en) * · 2016-05-31 · 2017-12-07 · Sony Corporation · Information processing device, information processing method, and program
CN107071392A (en) * · 2016-12-23 · 2017-08-18 · NetEase (Hangzhou) Network Co., Ltd. · Image processing method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN104081317A (en) * · 2012-02-10 · 2014-10-01 · Sony Corporation · Image processing device, and computer program product
US20140285522A1 (en) * · 2013-03-25 · 2014-09-25 · Qualcomm Incorporated · System and method for presenting true product dimensions within an augmented real-world setting

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN111672121A (en) * · 2020-06-11 · 2020-09-18 · Tencent Technology (Shenzhen) Co., Ltd. · Virtual object display method and device, computer equipment and storage medium
CN116301418A (en) * · 2022-12-06 · 2023-06-23 · BOE Technology Group Co., Ltd. · Trigger method and device for tactile waveform, and electronic equipment
CN116301418B (en) * · 2022-12-06 · 2025-06-24 · BOE Technology Group Co., Ltd. · Touch waveform triggering method and device and electronic equipment

Also Published As

Publication number · Publication date
CN110069190A · 2019-07-30
CN110069190B · 2024-12-10
JP2019128941A · 2019-08-01
JP6745852B2 · 2020-08-26

Similar Documents

Publication · Publication Date · Title
US20210333979A1 (en) · Devices, Methods, and Graphical User Interfaces for System-Wide Behavior for 3D Models
CN110069190B (en) · Device, method and graphical user interface for system-level behavior of 3D models
KR102766569B1 (en) · Devices and methods for measuring using augmented reality
AU2022201389B2 (en) · Devices, methods, and graphical user interfaces for system-wide behavior for 3D models
CN114327096A (en) · Apparatus, method and graphical user interface for displaying objects in 3D context
AU2019101597A4 (en) · Devices, methods, and graphical user interfaces for system-wide behavior for 3D models
EP3901741B1 (en) · Devices and methods for measuring using augmented reality

Legal Events

Date · Code · Title · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
