CN116301357A - Display method and device and wearable equipment - Google Patents

Display method and device and wearable equipment

Info

Publication number
CN116301357A
CN116301357A
Authority
CN
China
Prior art keywords
scene
wearable device
interactive object
present application
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310190310.3A
Other languages
Chinese (zh)
Inventor
邝平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202310190310.3A
Publication of CN116301357A
Priority to PCT/CN2024/078481
Legal status: Pending (current)

Abstract

The present application discloses a display method, a display apparatus, and a wearable device, and belongs to the technical field of wearable devices. The method includes: acquiring a gaze point of a wearer of the wearable device in a scene displayed by the wearable device, the scene including a virtual scene and/or a real scene; and highlighting a target interactive object in the scene according to the gaze point and the scene.

Description

Translated fromChinese
显示方法、装置及佩戴式设备Display method, device and wearable device

技术领域technical field

本申请属于可穿戴设备技术领域,具体涉及一种显示方法、一种显示装置以及一种佩戴式设备。The present application belongs to the technical field of wearable devices, and in particular relates to a display method, a display device and a wearable device.

Background

With the development of technology and the economy, VR (Virtual Reality)/AR (Augmented Reality)/MR (Mixed Reality) wearable devices equipped with smart controllers have come into increasingly wide use.

When using a wearable device equipped with a smart controller, the user wears the wearable device that displays the virtual scene on the head and holds the smart controller in hand. The user then judges on their own which objects in the virtual scene are interactive, and presses a button on the controller while the controller's ray-emission direction points toward an interactive object. On this basis, the controller emits a ray toward the interactive object, thereby selecting it for the user. When the ray reaches an interactive object, the buttons on the controller are remapped to operations on that object, so the user can operate the interactive object through the controller's buttons.

However, the user cannot accurately determine which objects in the scene are interactive.

Summary of the Invention

An objective of the embodiments of the present application is to provide a display method, apparatus, and wearable device, which can solve the problem that a user cannot accurately determine which objects in a scene are interactive.

In a first aspect, an embodiment of the present application provides a display method, the method including:

acquiring a gaze point of a wearer of a wearable device in a scene displayed by the wearable device, the scene including a virtual scene and/or a real scene;

highlighting a target interactive object in the scene according to the gaze point and the scene.

In a second aspect, an embodiment of the present application provides a display apparatus, the apparatus including:

an acquisition module, configured to acquire a gaze point of a wearer of a wearable device in a scene displayed by the wearable device, the scene including a virtual scene and/or a real scene;

a display module, configured to highlight a target interactive object in the scene according to the gaze point and the scene.

In a third aspect, an embodiment of the present application provides a wearable device including a processor and a memory, the memory storing a program or instructions executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.

In a fourth aspect, an embodiment of the present application provides a readable storage medium storing a program or instructions which, when executed by a processor, implement the steps of the method according to the first aspect.

In a fifth aspect, an embodiment of the present application provides a chip including a processor and a communication interface, the communication interface being coupled to the processor, and the processor being configured to run a program or instructions to implement the method according to the first aspect.

In a sixth aspect, an embodiment of the present application provides a computer program product stored in a storage medium, the program product being executed by at least one processor to implement the method according to the first aspect.

An embodiment of the present application provides a display method including: acquiring a gaze point of a wearer of a wearable device in a scene displayed by the wearable device, the scene including a virtual scene and/or a real scene; and highlighting a target interactive object in the scene according to the gaze point and the scene. With this method, the target interactive objects in the scene are highlighted automatically, so the wearer can clearly and accurately determine which objects in the scene are interactive.

Brief Description of the Drawings

FIG. 1 is a first schematic flowchart of a display method implementing an embodiment of the present application;

FIG. 2 is a schematic diagram of a scene viewed by a wearer through a wearable device according to an embodiment of the present application;

FIG. 3 is a second schematic flowchart of a display method implementing an embodiment of the present application;

FIG. 4 is a schematic structural diagram of a display apparatus implementing an embodiment of the present application;

FIG. 5 is a first schematic diagram of the hardware structure of a wearable device implementing an embodiment of the present application;

FIG. 6 is a second schematic diagram of the hardware structure of a wearable device implementing an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application fall within the protection scope of the present application.

The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application can be implemented in orders other than those illustrated or described herein. Moreover, objects distinguished by "first", "second", etc. are generally of one type, and the number of such objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.

The display method, apparatus, and wearable device provided by the embodiments of the present application are described in detail below through specific embodiments and their application scenarios with reference to the accompanying drawings.

An embodiment of the present application provides a display method. As shown in FIG. 1, the method includes the following S1100-S1200:

S1100. Acquire a gaze point of a wearer of a wearable device in a scene displayed by the wearable device.

The scene includes a virtual scene and/or a real scene.

The display method provided by the embodiments of the present application is executed while the wearable device displays a virtual scene and/or a real scene. That is, the wearable device involved in the embodiments of the present application may be any of a VR (Virtual Reality) wearable device, an AR (Augmented Reality) wearable device, or an MR (Mixed Reality) wearable device. Further, the wearable device may be smart glasses, a smart helmet, or the like.

It can be understood that the real scene may be obtained as follows: the wearable device is provided with a camera that captures images of the external environment; the camera captures the external environment to obtain an environment image; and the wearable device reproduces the environment image to obtain the real scene displayed by the wearable device.

The scene includes objects that the wearer can interact with as well as objects that the wearer cannot interact with. An interactive object in a real scene may be a smart-home device such as a smart switch.

In an embodiment of the present application, S1100 may be implemented as follows: the wearable device is provided with a camera that captures images of the wearer's eyes; the camera captures images of both eyes; and an eye-tracking algorithm processes the captured eye images to compute the wearer's gaze point in the scene.

Of course, S1100 may also be implemented in other ways, for example by identifying the wearer's gaze point in the scene with a pre-trained neural network model capable of gaze-point recognition. This is not limited in the present application.
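
As a concrete illustration, the gaze-point computation can be reduced to intersecting a gaze ray with the displayed scene. The sketch below is a minimal geometric stand-in, not the patent's eye-tracking algorithm: `GazeRay`, `gaze_point_on_plane`, and the plane-based scene model are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class GazeRay:
    # Ray produced by a (hypothetical) eye tracker: origin at the eye,
    # direction in the headset's coordinate frame.
    origin: tuple
    direction: tuple

def gaze_point_on_plane(ray: GazeRay, plane_z: float):
    """Intersect the gaze ray with a plane z = plane_z in front of the wearer.

    Returns the 3D gaze point, or None if the ray is parallel to the plane
    or points away from it.
    """
    ox, oy, oz = ray.origin
    dx, dy, dz = ray.direction
    if dz == 0:
        return None
    t = (plane_z - oz) / dz
    if t <= 0:  # plane is behind the wearer
        return None
    return (ox + t * dx, oy + t * dy, plane_z)
```

In a real headset the scene geometry would be queried by the rendering engine rather than modeled as a single plane; the point of the sketch is only the ray-intersection step.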

In one example, the scene may consist of 101 and 103 as shown in FIG. 2, where 103 is a table that is visually closer to the wearer in the scene. 101 includes the objects: a cylinder 105, a sphere 104, a first cuboid 106, a tree 107, and a second cuboid 108. The gaze point of the wearer 100 is shown as 109 in FIG. 2, and the wearer's field of view is shown as 102 in FIG. 2.

S1200. Highlight a target interactive object in the scene according to the gaze point and the scene.

In the embodiments of the present application, the target interactive objects in the scene may be all of the interactive objects in the scene, or only some of them.

In an embodiment of the present application, to determine the interactive objects in the scene, S1200 may be implemented through the following S1210 and S1211:

S1210. Determine, according to the scene, the interactive objects corresponding to the scene.

In the embodiments of the present application, different scenes have different predefined interactive objects. The developer of the wearable device may store the mappings between scenes and their interactive objects in the wearable device in advance. In this way, once the scene is determined, which objects in it are interactive is also determined.

Taking a shooting-game scene as an example, the interactive objects are the enemy personnel, enemy equipment, enemy buildings, and the like in the scene.

Based on the above, S1210 may be implemented as follows: the wearable device stores mappings between different scenes and their corresponding interactive objects; the wearable device looks up, among the stored mappings, the mapping that matches the scene in S1210; and the interactive objects in the found mapping are determined as the interactive objects corresponding to the scene in S1210.
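
A minimal sketch of this lookup, assuming the stored mapping is a simple scene-to-object table (the scene names and object names below are illustrative, not from the patent):

```python
# Hypothetical scene -> interactive-object mapping, stored on the device
# in advance by the developer.
SCENE_INTERACTIVE_OBJECTS = {
    "shooting_game": {"enemy_soldier", "enemy_vehicle", "enemy_bunker"},
    "living_room":   {"smart_switch", "smart_lamp"},
}

def interactive_objects_for_scene(scene_id, scene_objects):
    """Return the objects in the current scene that are interactive,
    i.e. those listed in the stored mapping for this scene (S1210)."""
    defined = SCENE_INTERACTIVE_OBJECTS.get(scene_id, set())
    return [obj for obj in scene_objects if obj in defined]
```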

S1211. Highlight a target interactive object in the scene according to the gaze point and the interactive objects corresponding to the scene.

In the embodiments of the present application, for the specific implementation of S1211, reference may be made to the embodiment shown in S1220 below, or to the embodiment shown in S1230 and S1231.

In an embodiment of the present application, S1200 may be specifically implemented through the following S1220:

S1220. When the gaze point is located within the scene, highlight every interactive object in the scene.

In the embodiments of the present application, S1220 is implemented as follows: determine whether the gaze point is located in the scene displayed by the wearable device; if so, first look up, among the mappings stored in the head-mounted device, the mapping that matches the scene in S1220, and highlight the interactive objects that are contained in the scene and belong to the found mapping; otherwise, do nothing.

It can be understood that in the embodiment shown in S1220, the target interactive objects are all of the interactive objects in the scene.

In another embodiment of the present application, S1200 may be specifically implemented through the following S1230 and S1231:

S1230. Determine, according to the gaze point, a gaze area located in the scene.

In an embodiment of the present application, S1230 may be implemented by determining an area within a preset range centered on the gaze point in the scene as the gaze area. Specifically:

A spherical region in the scene centered on the gaze point with radius r may be used as the gaze area, where r can be set empirically. On this basis, even a completely occluded interactive object can still be displayed through S1231 below.

Alternatively, an m*n rectangular region in the scene centered on the gaze point and parallel to the wearer's face may be used as the gaze area, where m is the length of the rectangle, n is its width, and both can be set empirically.
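
Both gaze-area shapes reduce to a simple membership test. Below is a sketch under the assumption that points are 3D tuples in the headset's coordinate frame and that the wearer's face is parallel to the x/y plane; the function names are illustrative:

```python
import math

def in_spherical_gaze_area(point, gaze_point, r):
    """Spherical gaze area: within distance r of the gaze point (S1230)."""
    return math.dist(point, gaze_point) <= r

def in_rect_gaze_area(point, gaze_point, m, n):
    """m x n rectangular gaze area centered on the gaze point and parallel
    to the wearer's face (taken here to be the x/y plane)."""
    px, py, _ = point
    gx, gy, _ = gaze_point
    return abs(px - gx) <= m / 2 and abs(py - gy) <= n / 2
```

Note the spherical test also admits objects behind an occluder at a different depth, which is what allows a completely occluded interactive object to be surfaced.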

S1231. Highlight the interactive objects within the gaze area according to the gaze area and the scene.

In the embodiments of the present application, S1231 is implemented as follows: first look up, among the mappings stored in the head-mounted device, the mapping that matches the scene in S1231; then highlight the interactive objects that are contained in the gaze area and belong to the found mapping.

It can be understood that in the embodiment shown in S1230 and S1231, the target interactive objects are the interactive objects within the gaze area, that is, only some of the interactive objects in the scene.

In one example, as shown in FIG. 2, the interactive objects within the gaze area are the cylinder 105, the sphere 104, and the first cuboid 106.

In this embodiment, only the interactive objects within the wearer's gaze area are highlighted; there is no need to highlight interactive objects outside the gaze area that the wearer cannot see. This saves computing resources on the wearable device.

Further, the highlighting of the target interactive object in the scene in S1200 may be specifically implemented through at least one of the following steps:

S1240. Enlarge the target interactive object in the scene.

In an embodiment of the present application, S1240 may be implemented by enlarging the target interactive object to a preset size.

S1241. Display the target interactive object in the scene at a first position.

The display in S1241 may specifically be a duplicate display, and the first position is a position the wearer will clearly notice. In this way, the target interactive objects can be displayed together, making it convenient for the wearer to view them in one place.

In one example, the first position may be the center of the scene displayed by the wearable device, or the lower-right position 110 shown in FIG. 2, etc. This is not limited in the present application.

Of course, the highlighting of the target interactive object in the scene in S1200 may also be specifically implemented through the following step:

Display the target interactive object in the scene in a preset color (a color the human eye is sensitive to, such as bright yellow).

It should be noted that the present application does not limit the highlighting manner in S1200.

Based on S1100 and S1200, the target interactive objects in the scene can be highlighted automatically, so the wearer can clearly and accurately determine which objects in the scene are interactive.

An embodiment of the present application provides a display method including: acquiring a gaze point of a wearer of a wearable device in a scene displayed by the wearable device, the scene including a virtual scene and/or a real scene; and highlighting a target interactive object in the scene according to the gaze point and the scene. With this method, the target interactive objects in the scene are highlighted automatically, so the wearer can clearly and accurately determine which objects in the scene are interactive.

In an embodiment of the present application, before S1200, the display method provided by the present application further includes the following S1250:

S1250. Display the target interactive object at a second position in the scene.

The first position is different from the second position.

In the embodiments of the present application, the second position is the initial display position of the target interactive object.

In combination with S1241, the target interactive object is highlighted by duplicating its display at a first position different from the second position.

In an embodiment of the present application, the display method provided by the embodiments of the present application further includes the following S1300 and S1400:

S1300. Receive a selection input for a target interactive object.

In the embodiments of the present application, the selection input is triggered by the wearer and is used to select, from the target interactive objects, the one the wearer wants to operate next. That object is referred to as the operation interactive object.

In an embodiment of the present application, the selection input may be implemented in different ways. For example, the selection input is at least one of a voice input, an input from the wearable device's controller, or a brain-wave input.

S1400. In response to the selection input, determine the operation interactive object from the target interactive objects.

In the embodiments of the present application, each interactive object has a one-to-one identifier, which may be the object's name or a corresponding mapped identifier. The identifier of an interactive object may be displayed beside the corresponding object so that the wearer can enter a selection input.

In one example, the mapped identifier of the cylinder 105 is b, that of the sphere 104 is a, and that of the first cuboid 106 is c.

In an embodiment of the present application, when the selection input is a voice input, S1400 may be implemented as follows: perform speech recognition on the selection input to recognize the identifier of the target interactive object contained in the voice input, and then determine the displayed target interactive object bearing that identifier as the operation interactive object.

When the selection input is an input from the controller adapted to the wearable device, S1400 is implemented as follows: identify the arrival position of the ray emitted by the controller to identify the target interactive object at that position, and use the identified target interactive object as the operation interactive object.

When the selection input is a brain-wave input, S1400 is implemented as follows: recognize the wearer's brain waves to identify the target interactive object selected by the wearer, and use the identified target interactive object as the operation interactive object.
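
For the voice case, matching the recognized text against the displayed identifiers can be sketched as follows. The token-matching scheme and all names are assumptions, since the patent does not specify how the recognized identifier is matched:

```python
def select_by_voice(voice_text, labelled_objects):
    """Pick the operation interactive object whose displayed identifier
    appears in the recognized voice input (S1400, voice case).

    labelled_objects maps identifier -> object, mirroring the identifiers
    displayed beside each highlighted object.
    """
    tokens = voice_text.lower().split()
    for label, obj in labelled_objects.items():
        if label.lower() in tokens:
            return obj
    return None
```

With the FIG. 2 example, saying "select b" would pick the cylinder 105, since b is the cylinder's displayed identifier.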

Based on S1300 and S1400, accurate selection of interactive objects in the scene displayed by the wearable device can be accomplished.

In an embodiment of the present application, S1100 may be specifically implemented through the following S1110 and S1111:

S1110. Acquire the current pose of the wearable device and the historical pose corresponding to the previous pose-collection moment.

In an embodiment of the present application, the wearable device is provided with a pose sensor, from which the pose of the wearable device can be determined. It can be understood that the pose of the wearable device represents the pose of the wearer's head.

In combination with the above, the pose sensor collects the pose of the wearable device at preset time intervals. In the embodiments of the present application, the latest pose collected by the pose sensor at the current moment is recorded as the current pose, and the pose collected immediately before it is recorded as the historical pose in S1110.

S1111. When it is determined from the current pose and the historical pose that the pose of the wearable device has changed, acquire the gaze point of the wearer of the wearable device in the scene displayed by the wearable device.

In combination with S1111, before S1111 the display method provided by the embodiments of the present application further includes a step of determining, from the current pose and the historical pose, whether the pose of the wearable device has changed.

In the embodiments of the present application, if the deviation between the current pose and the historical pose is greater than or equal to a preset deviation, it is determined that the pose of the wearable device has changed; conversely, if the deviation is less than the preset deviation, it is determined that the pose has not changed. The preset deviation is the change in the pose of the wearable device caused when the wearer's gaze area changes significantly.
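
A sketch of this pose-change test, assuming a pose is a (position, Euler-angle) pair and that position and angle deviations are thresholded separately; the two thresholds stand in for the patent's preset deviation and would be tuned empirically:

```python
import math

def pose_changed(current, previous, pos_threshold, angle_threshold):
    """Decide whether the headset pose changed enough to recompute the
    gaze point (the check preceding S1111).

    current / previous are (position_xyz, euler_angles_deg) pairs.
    """
    (cur_pos, cur_ang), (prev_pos, prev_ang) = current, previous
    moved = math.dist(cur_pos, prev_pos) >= pos_threshold
    rotated = any(abs(c - p) >= angle_threshold
                  for c, p in zip(cur_ang, prev_ang))
    return moved or rotated
```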

When the pose of the wearable device changes, the wearer's gaze point changes significantly. Acquiring the gaze point of the wearer in the displayed scene only at that time avoids wasting the wearable device's computing resources.

Correspondingly, if it is determined from the current pose and the historical pose that the pose of the wearable device has not changed, S1110 is repeated.

In combination with the above, in an embodiment of the present application, as shown in FIG. 3, the display method provided by the embodiment of the present application includes the following steps:

S3001. Acquire the current pose of the wearable device and the historical pose corresponding to the previous pose-collection moment;

S3002. Determine, from the current pose and the historical pose, whether the pose of the wearable device has changed; if so, execute S3003 below; if not, execute S3001 above;

S3003. Acquire the gaze point of the wearer of the wearable device in the scene displayed by the wearable device;

S3004. Determine, according to the gaze point, a gaze area located in the scene;

S3005. Display the target interactive object in the scene at the first position according to the gaze area and the scene;

S3006. Receive a selection input for the target interactive object;

S3007. In response to the selection input, determine the operation interactive object from the target interactive objects.
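
The S3001-S3005 portion of this flow can be sketched end to end under heavy simplifications: poses are reduced to 3D positions, the gaze area is the spherical variant, highlighting is modeled as returning the names of the objects to display at the first position, and the 0.2 movement threshold is an arbitrary illustrative value:

```python
import math

def display_pipeline_step(current_pose, previous_pose, gaze_point,
                          scene_objects, interactive_ids, r):
    """One pass of S3001-S3005, with simplified stand-ins.

    scene_objects maps object name -> 3D position; interactive_ids is the
    scene's predefined interactive-object set. Returns the names of the
    objects to highlight at the first position.
    """
    # S3001/S3002: skip the work if the headset has not moved enough.
    if math.dist(current_pose, previous_pose) < 0.2:
        return []
    # S3004: keep objects inside the spherical gaze area.
    in_area = {name for name, pos in scene_objects.items()
               if math.dist(pos, gaze_point) <= r}
    # S3005: of those, highlight the ones defined as interactive.
    return sorted(in_area & interactive_ids)
```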

The display method provided by the embodiments of the present application may be executed by a display apparatus. In the embodiments of the present application, the display apparatus provided by the embodiments of the present application is described by taking a display apparatus executing the display method as an example.

As shown in FIG. 4, the display apparatus 400 provided by the embodiment of the present application includes:

an acquisition module 410, configured to acquire a gaze point of a wearer of a wearable device in a scene displayed by the wearable device, the scene including a virtual scene and/or a real scene;

a display module 420, configured to highlight a target interactive object in the scene according to the gaze point and the scene.

本申请实施例提供了一种显示装置,该装置包括:获取模块,用于获取佩戴式设备的佩戴者在所述佩戴式设备所显示的场景中的注视点,所述场景包括虚拟场景和/或现实场景;显示模块,用于根据所述注视点以及所述场景,突出显示所述场景内的目标交互对象。通过该装置,可实现对场景内的目标交互对象自动进行突出显示,这样佩戴者则可清楚且准确的判断出场景中的哪些对象是可进行交互的对象。An embodiment of the present application provides a display device, the device includes: an acquisition module, configured to acquire a gaze point of a wearer of a wearable device in a scene displayed by the wearable device, and the scene includes a virtual scene and/or or a real scene; a display module configured to highlight a target interactive object in the scene according to the gaze point and the scene. Through the device, the target interactive objects in the scene can be automatically highlighted, so that the wearer can clearly and accurately determine which objects in the scene are objects that can be interacted with.

In one embodiment of the present application, the display module 420 includes:

a first determination unit, configured to determine, from the gaze point, the gaze area located in the scene; and

a first display unit, configured to highlight the interactive objects within the gaze area according to the gaze area and the scene.

In one embodiment of the present application, the display module is specifically configured to:

enlarge the target interactive object in the scene;

and/or display the target interactive object in the scene at a first position.

In one embodiment of the present application, the display module 420 is further configured to:

display the target interactive object at a second position in the scene,

where the first position is different from the second position.
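One way to realize the first-position display described above is to leave the object's original (second) position untouched in the scene data and render an enlarged copy at a first position derived from the gaze point. The sketch below uses assumed names, and the offset and scale factor are illustrative, not taken from the disclosure.

```python
def display_at_first_position(obj, gaze_point, scale=1.5, offset=(0.05, 0.0)):
    """Return a display copy of `obj` placed at a first position near the gaze
    point, enlarged for emphasis; the object's own coordinates remain its
    (different) second position in the scene."""
    shown = dict(obj)
    shown["x"] = gaze_point[0] + offset[0]
    shown["y"] = gaze_point[1] + offset[1]
    shown["scale"] = obj.get("scale", 1.0) * scale
    shown["second_position"] = (obj["x"], obj["y"])  # original location kept for reference
    return shown
```

Because the copy carries the original coordinates along, the renderer can still draw the object at its second position (or animate between the two), which is the "display at both positions" variant described above.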

In one embodiment of the present application, the display module 420 includes:

a second determination unit, configured to determine, according to the scene, the interactive objects corresponding to the scene; and

a second display unit, configured to highlight the target interactive object in the scene according to the gaze point and the interactive objects corresponding to the scene.
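The scene-dependent determination above can be sketched as a lookup from scene type to the object classes that are interactable in that scene. The taxonomy below is an invented example for illustration, not part of the disclosure.

```python
# Assumed mapping: which object classes count as interactive in which scene type.
SCENE_INTERACTIVE_CLASSES = {
    "virtual_menu": {"button", "slider"},     # virtual scene: UI widgets
    "real_living_room": {"lamp", "speaker"},  # real scene: e.g. controllable devices
}

def scene_interactive_objects(scene_type, detected_objects):
    """Determine the interactive objects corresponding to the scene."""
    allowed = SCENE_INTERACTIVE_CLASSES.get(scene_type, set())
    return [o for o in detected_objects if o["class"] in allowed]
```

The second display unit would then intersect this per-scene object set with the gaze point to decide which targets to highlight.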

In one embodiment of the present application, the acquisition module 410 is specifically configured to:

obtain the current pose of the wearable device and the historical pose corresponding to the previous pose-acquisition moment; and

acquire the gaze point of the wearer of the wearable device in the scene displayed by the wearable device if it is determined, from the current pose and the historical pose, that the pose of the wearable device has changed.
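The gating described here — re-running eye tracking only when the headset pose has actually changed — can be sketched as a combined position and orientation test. The thresholds and the (w, x, y, z) unit-quaternion convention are assumptions for illustration.

```python
import math

POS_EPS_M = 0.01     # assumed position threshold, metres
ANGLE_EPS_DEG = 2.0  # assumed orientation threshold, degrees

def quat_angle_deg(q1, q2):
    """Smallest rotation angle between two unit quaternions (w, x, y, z)."""
    dot = min(1.0, abs(sum(a * b for a, b in zip(q1, q2))))
    return math.degrees(2.0 * math.acos(dot))

def should_sample_gaze(current, previous):
    """Re-acquire the gaze point only if the device pose changed, saving
    eye-tracking compute while the headset is held still."""
    moved = math.dist(current["position"], previous["position"]) > POS_EPS_M
    turned = quat_angle_deg(current["rotation"], previous["rotation"]) > ANGLE_EPS_DEG
    return moved or turned
```

The absolute value in `quat_angle_deg` accounts for the fact that q and −q describe the same orientation.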

The display apparatus in the embodiments of the present application may be a wearable device, or may be a component of a wearable device, such as an integrated circuit or a chip. The wearable device may be a terminal; for example, the wearable device may be an AR wearable device, a VR wearable device, an MR wearable device, or the like, which is not specifically limited in the embodiments of the present application.

The display apparatus in the embodiments of the present application may be an apparatus with an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.

The display apparatus provided by the embodiments of the present application can implement each process implemented by the method embodiments of FIG. 1 or FIG. 3; to avoid repetition, details are not repeated here.

Optionally, as shown in FIG. 5, an embodiment of the present application further provides a wearable device 500, including a processor 501 and a memory 502, where the memory 502 stores a program or instructions executable on the processor 501. When the program or instructions are executed by the processor 501, each step of the above display method embodiments is implemented and the same technical effects are achieved; to avoid repetition, details are not repeated here.

FIG. 6 is a schematic diagram of the hardware structure of a wearable device implementing an embodiment of the present application.

The wearable device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, and other components.

Those skilled in the art will understand that the wearable device 1000 may further include a power supply (such as a battery) for supplying power to the various components, and that the power supply may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power-consumption management are implemented through the power management system. The structure of the wearable device shown in FIG. 6 does not constitute a limitation on the wearable device; the wearable device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not repeated here.

The processor 1010 is configured to acquire the gaze point of the wearer of the wearable device in the scene displayed by the wearable device, where the scene includes a virtual scene and/or a real scene;

and to highlight the target interactive object in the scene according to the gaze point and the scene.

With the wearable device provided by the embodiments of the present application, the target interactive objects in the scene are highlighted automatically, so the wearer can clearly and accurately determine which objects in the scene can be interacted with.

Optionally, the processor 1010 is specifically configured to determine, from the gaze point, the gaze area located in the scene;

and to highlight the interactive objects within the gaze area according to the gaze area and the scene.

Optionally, the processor 1010 is specifically configured to enlarge the target interactive object in the scene;

and/or to display the target interactive object in the scene at a first position.

Optionally, the processor 1010 is further configured to display the target interactive object at a second position in the scene,

where the first position is different from the second position.

Optionally, the processor 1010 is specifically configured to determine, according to the scene, the interactive objects corresponding to the scene;

and to highlight the target interactive object in the scene according to the gaze point and the interactive objects corresponding to the scene.

Optionally, the processor 1010 is specifically configured to obtain the current pose of the wearable device and the historical pose corresponding to the previous pose-acquisition moment;

and to acquire the gaze point of the wearer of the wearable device in the scene displayed by the wearable device if it is determined, from the current pose and the historical pose, that the pose of the wearable device has changed.

It should be understood that, in the embodiments of the present application, the input unit 1004 may include a graphics processing unit (GPU) 10041 and a microphone 10042. The graphics processor 10041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071, also called a touchscreen, may include two parts: a touch detection device and a touch controller. The other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and power keys), a trackball, a mouse, and a joystick, which are not repeated here.

The memory 1009 may be used to store software programs and various data. The memory 1009 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store the operating system and applications or instructions required by at least one function (such as an audio playback function or an image playback function). In addition, the memory 1009 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), or direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.

The processor 1010 may include one or more processing units. Optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, the user interface, and applications, and the modem processor, such as a baseband processor, mainly handles wireless communication signals. It can be understood that the modem processor may also not be integrated into the processor 1010.

An embodiment of the present application further provides a readable storage medium storing a program or instructions. When the program or instructions are executed by a processor, each process of the above display method embodiments is implemented and the same technical effects are achieved; to avoid repetition, details are not repeated here.

The processor is the processor in the wearable device described in the above embodiments. The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

An embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor, where the processor is configured to run a program or instructions to implement each process of the above display method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.

It should be understood that the chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, a system-on-chip, or the like.

An embodiment of the present application provides a computer program product stored in a storage medium, where the program product is executed by at least one processor to implement each process of the above display method embodiments and achieve the same technical effects; to avoid repetition, details are not repeated here.

It should be noted that, in this document, the terms "comprise", "include", and any variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element. In addition, it should be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing functions in the order shown or discussed; depending on the functions involved, functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.

From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a computer software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), including several instructions to enable a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present application.

The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the specific implementations described above, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art can devise many other forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.

Claims (11)

CN202310190310.3A | 2023-02-28 | 2023-02-28 | Display method and device and wearable equipment | Pending | CN116301357A (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN202310190310.3A (CN116301357A) | 2023-02-28 | 2023-02-28 | Display method and device and wearable equipment
PCT/CN2024/078481 (WO2024179392A1) | 2023-02-28 | 2024-02-26 | Display method and apparatus and wearable device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310190310.3A (CN116301357A) | 2023-02-28 | 2023-02-28 | Display method and device and wearable equipment

Publications (1)

Publication Number | Publication Date
CN116301357A (en) | 2023-06-23

Family

ID=86833573

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310190310.3A (CN116301357A, pending) | Display method and device and wearable equipment | 2023-02-28 | 2023-02-28

Country Status (2)

Country | Link
CN (1) | CN116301357A (en)
WO (1) | WO2024179392A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2024179392A1 (en) * | 2023-02-28 | 2024-09-06 | Vivo Mobile Communication Co., Ltd. | Display method and apparatus and wearable device

Family Cites Families (9)

Publication number | Priority date | Publication date | Assignee | Title
US10152495B2 (en) * | 2013-08-19 | 2018-12-11 | Qualcomm Incorporated | Visual search in real world using optical see-through head mounted display with augmented reality and user interaction tracking
CN103475893B (en) * | 2013-09-13 | 2016-03-23 | Beijing Zhigu Ruituo Technology Services Co., Ltd. | Object picking apparatus in three-dimensional display and object picking method in three-dimensional display
US10229541B2 (en) * | 2016-01-28 | 2019-03-12 | Sony Interactive Entertainment America LLC | Methods and systems for navigation within virtual reality space using head mounted display
US10395428B2 (en) * | 2016-06-13 | 2019-08-27 | Sony Interactive Entertainment Inc. | HMD transitions for focusing on specific content in virtual-reality environments
CN110928407B (en) * | 2019-10-30 | 2023-06-09 | Vivo Mobile Communication Co., Ltd. | Information display method and device
CN110826465B (en) * | 2019-10-31 | 2023-06-30 | Southern University of Science and Technology | Transparency adjustment method and device for wearable device display
CN111949131B (en) * | 2020-08-17 | 2023-04-25 | Chen Tao | Eye movement interaction method, system and equipment based on eye movement tracking technology
CN113467619B (en) * | 2021-07-21 | 2023-07-14 | Tencent Technology (Shenzhen) Co., Ltd. | Picture display method and device, storage medium and electronic equipment
CN116301357A (en) | 2023-02-28 | 2023-06-23 | Vivo Mobile Communication Co., Ltd. | Display method and device and wearable equipment

Also Published As

Publication number | Publication date
WO2024179392A1 (en) | 2024-09-06

Similar Documents

Publication | Title
US20240119172A1 (en) | Controlled exposure to location-based virtual content
US12425716B2 (en) | Content capture with audio input feedback
CN113544634B (en) | Device, method and graphical user interface for forming a CGR file
US11948263B1 (en) | Recording the complete physical and extended reality environments of a user
US20210366163A1 (en) | Method, apparatus for generating special effect based on face, and electronic device
CN111142673B (en) | Scene switching method and head-mounted electronic device
CN110019918B (en) | Information display method, device, equipment and storage medium of virtual pet
JP2019145108A (en) | Electronic device for generating image including 3d avatar with facial movements reflected thereon, using 3d avatar for face
CN111541907B (en) | Article display method, apparatus, device and storage medium
CN108961157B (en) | Image processing method, image processing device and terminal device
CN110246110B (en) | Image evaluation method, device and storage medium
US12243120B2 (en) | Content distribution system, content distribution method, and content distribution program
CN111917918B (en) | Augmented reality-based event reminder management method and device and storage medium
CN118786648A (en) | Device, method and graphical user interface for authorizing security operations
CN111698564A (en) | Information recommendation method, device, equipment and storage medium
CN113301506B (en) | Information sharing method, device, electronic device and medium
US20160259512A1 (en) | Information processing apparatus, information processing method, and program
CN115857856A (en) | Information prompting method, information prompting device, electronic equipment and readable storage medium
CN116301357A (en) | Display method and device and wearable equipment
CN112764700A (en) | Image display processing method, device, electronic equipment and storage medium
CN113051022A (en) | Graphical interface processing method and graphical interface processing device
WO2024140554A1 (en) | Display method and apparatus, and ar device
CN114911382B (en) | Signature display method and device, related equipment and storage medium thereof
CN114449323B (en) | Video generation method and electronic device
WO2023146837A2 (en) | Extended reality for collaboration

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
