BACKGROUND

Technical Field

This application is directed to generating a three-dimensional (3D) rendering of a physical object and to annotating and refining the 3D rendering by physically tracing an input device over the physical object. This application is also directed to measuring the distance along a curve traced by the input device.
Description of the Related Art

In many industries, including the automotive industry, physical models, such as clay models, are used to model automobile designs and physically illustrate design features of an automobile. Refining and augmenting a physical model is an important task in designing cars as well as other industrial or consumer products. During the industrial design process, designers and 3D modelers shape the physical model with tools and tape-mark changes to the physical model. However, physically shaping the physical model is time consuming and oftentimes not easily reversible, as the physical model may need to be patched in order to reverse a change made to the model.
Accordingly, a method and apparatus are desired for rendering a 3D model of a physical object, augmenting the 3D model by sketching on it, and digitally or virtually viewing the augmented 3D model.
BRIEF SUMMARY

In an embodiment, a system includes a three-dimensional (3D) scanner configured to scan an outer surface of a physical object, and output data representative of the outer surface of the object. In an embodiment, the system includes a processor configured to receive the data representative of the outer surface of the object, and generate, based on the received data, a 3D model of the object, and output a 3D rendering of the object based on the generated 3D model. In an embodiment, the system includes a display configured to receive the 3D rendering of the object, and display the 3D rendering of the object. The system includes an input device operable to physically trace over at least one portion of the outer surface of the object and a tracking device configured to track a positioning of the input device as the input device physically traces over the at least one portion of the outer surface of the object, and output data representative of at least one spatial position of the input device as the input device traces over the object. The processor is configured to receive the data representative of the at least one spatial position of the input device, augment the 3D rendering of the object based at least in part on the data representative of the at least one spatial position of the input device, and in response to augmenting the 3D rendering of the object, output the augmented 3D rendering of the object to the display. In an embodiment, the display is configured to display the augmented 3D rendering of the object.
In an embodiment, the processor is configured to augment the 3D rendering of the object by at least identifying, based on the data representative of the at least one spatial position of the input device, one or more curves having one or more respective positions in space relative to the outer surface of the object, and superposing the one or more curves on the 3D rendering of the object at one or more rendering positions corresponding to the one or more positions in space relative to the outer surface of the object, respectively.
In an embodiment, the input device is pressure sensitive and configured to sense a pressure applied to the input device as the input device physically traces over the at least one portion of the outer surface of the object, and output data representative of the pressure. The processor is configured to determine respective one or more widths of the one or more curves based at least in part on the pressure applied to the input device as the input device physically traces over the at least one portion of the outer surface of the object to form the one or more curves and superpose, on the 3D rendering of the object, the one or more curves having the respective one or more widths.
In an embodiment, the input device includes a pressure-sensitive tip operable to sense the pressure applied to the input device as the input device physically traces over the at least one portion of the outer surface of the object. In an embodiment, the input device includes a first control input operative to receive one or more respective width indications of the one or more curves. The input device is configured to output data representative of the one or more respective width indications to the processor, and the processor is configured to receive the data representative of the one or more respective width indications, determine respective one or more widths of the one or more curves based on the data representative of the one or more respective width indications, and superpose, on the 3D rendering of the object, the one or more curves having the respective one or more widths. In an embodiment, the display is a head-mounted display configured to display the 3D rendering of the object superposed on the physical object that otherwise is visible through the head-mounted display.
In an embodiment, a system includes a three-dimensional (3D) scanner configured to scan an outer surface of a physical object, and output data representative of the outer surface of the object. The system includes a processor configured to receive the data representative of the outer surface of the object, generate, based on the received data, a 3D model of the object, and output a 3D rendering of the object based on the generated 3D model. In an embodiment, the system includes a display configured to receive the 3D rendering of the object, and display the 3D rendering of the object and an input device operable to physically trace over at least one portion of the outer surface of the object. The system includes a tracking device configured to track a positioning of the input device as the input device traces over the at least one portion of the outer surface of the object, and output data representative of at least one position of the input device in 3D space as the input device traces over the outer surface of the object. The processor is configured to receive the data representative of the at least one position of the input device, modify the 3D model of the object based at least in part on the data representative of the at least one position of the input device, generate an updated 3D rendering of the object based on the modified 3D model, and in response to generating the updated 3D rendering of the object, output the updated 3D rendering of the object to the display. In an embodiment, the display is configured to display the updated 3D rendering of the object.
In an embodiment, the processor is configured to generate the 3D model of the object by generating a polygon mesh that includes a plurality of vertices and a plurality of edges. In an embodiment, the processor is configured to modify the 3D model of the object by at least changing a position of a vertex of the plurality of vertices or an edge of the plurality of edges to correspond to the at least one position of the input device in 3D space. In an embodiment, the processor is configured to modify the 3D model of the object by at least adding, to the plurality of vertices, a first vertex having a position in space that corresponds to the at least one position of the input device in 3D space. In an embodiment, the processor is configured to modify the 3D model of the object by at least removing, from the plurality of vertices, a second vertex having a position that is closest in 3D space to the position of the first vertex. In an embodiment, the display is a head-mounted display configured to display the 3D rendering of the object superposed on the physical object that otherwise is visible through the head-mounted display, and further configured to display the updated 3D rendering of the object superposed on the physical object that otherwise is visible through the head-mounted display.
In an embodiment, a system includes a three-dimensional (3D) scanner configured to scan an outer surface of a physical object, and output data representative of the outer surface of the object. In an embodiment, the system includes a processor configured to receive the data representative of the outer surface of the object, and generate, based on the received data, a 3D model of the object, and output a 3D rendering of the object based on the generated 3D model. In an embodiment, the system includes a display configured to receive the 3D rendering of the object, and display the 3D rendering of the object. The system includes an input device operable to physically trace over at least one portion of the outer surface of the object, and a tracking device configured to track a positioning of the input device as the input device traces over the at least one portion of the outer surface of the object, and output data representative of at least two positions of the input device as the input device traces over the object. The processor is configured to receive the data representative of the at least two positions, determine a distance between the at least two positions, and output data representative of the distance.
The processor is configured to identify a curve based on data representative of positions of the input device between the at least two positions, and determine the distance between the at least two positions along the identified curve. The display is configured to receive the data representative of the distance, and display the distance on the display. The input device includes a control input operative to receive a selection of a first mode of operation of a plurality of modes of operation of the input device and output data indicative of the first mode of operation.
In an embodiment, the processor is configured to receive the data indicative of the first mode of operation, and in response to receiving the data indicative of the first mode of operation, determine the distance between the at least two positions, and output the data representative of the distance. In an embodiment, the input device receives, via the control input, a selection of a second mode of operation of the plurality of modes of operation of the input device and output data indicative of the second mode of operation. The processor is configured to receive the data indicative of the second mode of operation, and in response to receiving the data indicative of the second mode of operation, augment the 3D rendering of the object based on positioning information received from the tracking device tracking the input device as the input device traces over at least one portion of the outer surface of the object. The processor is configured to receive the data indicative of the second mode of operation, and in response to receiving the data indicative of the second mode of operation, modify the 3D model of the object based on positioning information received from the tracking device tracking the input device as the input device traces over at least one portion of the outer surface of the object, and generate an updated 3D rendering of the object based on the modified 3D model.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 shows a three-dimensional (3D) scanner scanning a physical object.
FIG. 2 shows a 3D rendering system.
FIG. 3 shows an input device in accordance with an embodiment of the present disclosure.
FIG. 4 shows a flow diagram of a method for augmenting a 3D rendering of an object.
FIG. 5 shows a flow diagram of a method for modifying a 3D rendering of an object based on a position of an input device.
FIG. 6 shows a flow diagram of a method for distance measurement based on a position of an input device.
DETAILED DESCRIPTION

FIG. 1 shows a three-dimensional (3D) scanner 102 scanning a physical object 101. The 3D scanner 102 may be any device configured to scan the physical object 101, or an outer surface thereof, for generating a three-dimensional model of the physical object 101. The 3D scanner 102 may be a non-contact or a contact scanner. Further, the 3D scanner 102 may be an active scanner or a non-active scanner. The 3D scanner 102 may use any technique for scanning the object, such as time-of-flight (ToF) or triangulation.
The 3D scanner 102 may be a ToF 3D laser scanner. The 3D scanner 102 may be an active scanner that uses laser light to probe the physical object 101. The 3D scanner 102 may be a stereoscopic scanner. The 3D scanner 102 may include a ToF laser range finder. The laser range finder may identify a distance between the 3D scanner 102 and the surface of the physical object 101 based on the round-trip time of a pulse of light emitted by the 3D scanner 102. The 3D scanner 102 emits a laser pulse, detects a reflection of the laser pulse reflected by the surface of the physical object 101 and determines a duration of time (round-trip time) between a time instant when the laser pulse is emitted and a time instant when the reflection of the laser pulse is detected. The 3D scanner 102 determines the distance between the 3D scanner 102 and the surface of the physical object 101 based on the determined time and the speed of light.
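By way of a non-limiting illustration, and not as a specification of any particular scanner, the range computation reduces to halving the product of the measured round-trip time and the speed of light; the function name and sample value below are hypothetical.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_range_m(round_trip_time_s: float) -> float:
    """Return the scanner-to-surface distance for a measured round-trip time.

    The pulse travels to the surface and back, so the one-way distance is
    half of (speed of light * round-trip time).
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# Example: a round trip of ~13.3 nanoseconds corresponds to roughly 2 meters.
print(tof_range_m(13.3e-9))  # ~1.99 m
```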
The 3D scanner 102 may directionally emit the laser pulse to scan the physical object 101. The 3D scanner 102 accordingly scans the physical object 101 from multiple views. The ToF laser range finder may scan an entire field of view one point at a time and may change its direction of view to scan different points of the outer surface of the object 101. The direction of view may be changed either by rotating the range finder or by using a system of rotating mirrors, among others.
FIG. 2 shows a 3D rendering system 106. The system 106 includes the 3D scanner 102, a 3D rendering device 108 (shown in block diagram form), a display 110 (shown pictorially, for example, as a head-mounted display), an input device 112 and a tracking device 113 for the input device 112. The 3D rendering device 108 includes a processor 114, memory 116 and one or more communication devices 118. The memory 116 and the one or more communication devices 118 are communicatively coupled to the processor 114. The 3D rendering device 108 is communicatively coupled to the 3D scanner 102, the display 110, the input device 112 and the tracking device 113.
The processor 114 may be any type of computational device configured to perform the operations described herein. The processor 114 may be a graphics processing unit (GPU) or a central processing unit (CPU), among others. The processor 114 may also be a controller, a microcontroller or a microprocessor, among others. The memory 116 may be any type of storage device configured to store data. The data may be graphics data (such as a 3D rendering of the surface of the physical object 101) or the data may be executable instructions that, when executed by the processor 114, cause the processor to perform the operations described herein.
The one or more communication devices 118 may be any type of communication devices configured to traffic or exchange data with other communication devices. A communication device 118 may be a wireless or a wired communication device and may be a modem or a transceiver, among others. A communication device 118 may receive data from or transmit data to another communication device. Although not shown in FIG. 2, other communication devices may be part of the 3D scanner 102, the display 110, the input device 112 and/or the tracking device 113. The one or more communication devices 118 may communicate using any type of protocol associated with a respective communication device. The protocol may be an Institute of Electrical and Electronics Engineers (IEEE) 802.11 protocol, a Bluetooth protocol, a universal serial bus (USB) protocol or a cellular communications protocol, such as a Third Generation Partnership Project (3GPP) Long-Term Evolution (LTE) protocol, among others.
It is noted that the 3D rendering device 108 may be a computer, tablet or smartphone, among others. The 3D rendering device 108 may be independent of the display 110 or the tracking device 113. However, in alternative embodiments the 3D rendering device 108 may be part of the display 110 or the tracking device 113, or the operations performed by the 3D rendering device 108 may instead be performed by the display 110 and a processor, memory or one or more communication devices thereof.
The 3D rendering device 108 receives, over the one or more communication devices 118, a signal carrying data representative of the scanned physical object 101. The signal may be modulated and encoded in accordance with a respective modulation and encoding of the communication protocol used by the one or more communication devices 118.
The one or more communication devices 118 demodulate and decode the signal and output the data representative of the scanned physical object 101 to the processor 114. The processor 114 evaluates the data representative of the scanned physical object 101. The processor 114 generates a 3D model of the physical object 101 based on the data representative of the physical object 101. The 3D model of the physical object 101 may include a polygon mesh that includes a plurality of vertices and a plurality of edges. The polygon mesh may also include a plurality of surfaces. Each surface may be bounded by three or more respective edges of the plurality of edges. A vertex of the plurality of vertices has a position in space that corresponds to a position in space of a point on the outer surface of the physical object 101. The plurality of vertices, the plurality of edges and the plurality of surfaces virtually (and digitally) represent the scanned physical object 101. The processor 114 stores the 3D model of the physical object 101 in the memory 116. The processor 114 causes the 3D model of the physical object 101 to be output, via the one or more communication devices 118, to the display 110.
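For illustration only, a minimal sketch of a polygon-mesh representation follows; the class name and fields are hypothetical and are not drawn from the embodiments above.

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    """Minimal polygon mesh: vertices as points in 3D space, edges as pairs of
    vertex indices, and surfaces (faces) as tuples of three or more vertex indices."""
    vertices: list[tuple[float, float, float]] = field(default_factory=list)
    edges: list[tuple[int, int]] = field(default_factory=list)
    faces: list[tuple[int, ...]] = field(default_factory=list)

# A single triangular face approximating a small patch of the scanned surface.
mesh = Mesh(
    vertices=[(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.02)],
    edges=[(0, 1), (1, 2), (2, 0)],
    faces=[(0, 1, 2)],
)
```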
The display 110 may be a head-mounted display (HMD). As a head-mounted display, the display 110 may be a virtual reality display or an augmented reality display. As an augmented reality display, the display 110 may be transparent or semi-transparent. As such, a viewer viewing the physical object 101 through the display 110 sees the physical object 101 by virtue of the transparent properties of the display 110. Using the 3D model of the object, the display 110 may superpose a 3D rendering of the physical object 101 over the physical object 101 as the physical object 101 is transparently visible through the display 110. Accordingly, in such an embodiment, the viewer sees the 3D rendering of the physical object 101 overlaid on the physical object 101.
The viewer or user may use the input device 112 to annotate, augment, refine, or change (collectively "augment") the 3D rendering of the physical object. The user may use the input device 112 to augment the 3D rendering of the physical object by drawing one or more curves, or any other shape, on the 3D rendering. In this regard, the user may trace the input device, or a tip thereof, in 3-dimensional space over at least a portion of the physical object 101. The tracking device 113 tracks a position of the input device 112 in the 3-dimensional space and outputs data representative of the position to the 3D rendering device 108. The 3D rendering device 108 receives the data representative of the position of the input device 112 and generates an augmented 3D rendering of the physical object based on the data representing the tracked position of the input device 112. As will be appreciated from the description herein, the augmented 3D rendering of the physical object may include designs and features that appear virtually on or in relation to a surface of the physical object but do not otherwise appear in the actual 3-dimensional space of the physical object.
FIG. 3 shows an example of the input device 112 in accordance with an embodiment. The input device 112 includes a housing 119, a tip 120, a marker 122 and a plurality of control inputs 124a, 124b, 124c. The tip 120 may be pressure-sensitive. The marker 122 may be positioned on the tip 120 of the input device 112. In other embodiments, the marker 122 may be positioned elsewhere on the input device 112. The marker 122 may be a passive or an active marker that is used to track and determine the position of the tip 120. For example, the marker 122 may be a reflective coating that reflects light. Alternatively or in addition, the marker 122 may be a light-emitting diode (LED) that actively illuminates light for tracking the tip 120 of the input device 112. In various embodiments, the marker 122 may be a strobe light that emits light having a specified wavelength or signature. In various embodiments, the input device 112 may be marker-less, whereby a position of the tip 120 or another part of the input device may be tracked based on a shape or other property thereof.
Referring back to FIG. 2, the tracking device 113 tracks the spatial positions of the input device 112, or the marker 122 thereof, as the input device 112 moves through 3-dimensional space. The tracking device 113 determines a spatial position of the marker 122 and outputs data representative of the position to the 3D rendering device 108. In at least one embodiment, the tracking device 113 may include one or more cameras, such as motion capture cameras. The one or more cameras may capture images of the marker 122 and determine the position of the marker, and consequently the tip 120 and the input device 112, based on the captured images.
The tracking device 113 may include a communication device (not shown). The tracking device 113 may send a signal, over the communication device, including the data representative of the spatial position of the input device 112. The 3D rendering device 108 receives the signal, over the one or more communication devices 118, and outputs the data representative of the spatial position to the processor 114. The processor 114 identifies the position of the input device 112 or the marker 122 based on the received position data. The processor 114 thereafter augments the 3D rendering of the physical object based on the received position data.
For example, the user may physically trace over an outer surface of the physical object 101 with the input device 112, or the tip 120 thereof, to draw a line or, generally, a curve. Thus, the input device 112 may be used to sketch (or chart) over the 3D rendering of the physical object. As the user traces over the outer surface of the physical object 101, the tracking device 113 tracks the spatial position of the tip 120 and outputs data representative of the position to the 3D rendering device 108. The 3D rendering device 108 augments the 3D rendering of the physical object by adding a corresponding curve to the 3D rendering of the physical object. The curve may be a collection of points connected with one another and having positions in space corresponding to the positions of the tip detected by the tracking device 113. The 3D rendering device 108 superposes the curve onto the 3D rendering of the physical object. The 3D rendering device 108 thereafter generates an augmented 3D rendering of the physical object. The augmented 3D rendering includes the previously generated 3D rendering of the physical object with the curve superposed thereon.
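As a non-limiting sketch, the curve may be held as an ordered polyline of tracked tip positions that grows as new samples arrive; the function name and the minimum-step filter are assumptions added for illustration.

```python
Point3 = tuple[float, float, float]

def append_sample(curve: list[Point3], tip_position: Point3,
                  min_step: float = 1e-3) -> None:
    """Append a newly tracked tip position to the polyline, skipping samples
    that are closer than min_step to the previous point to avoid clutter."""
    if not curve:
        curve.append(tip_position)
        return
    last = curve[-1]
    step = sum((a - b) ** 2 for a, b in zip(tip_position, last)) ** 0.5
    if step >= min_step:
        curve.append(tip_position)

# The renderer then draws the polyline superposed on the 3D rendering.
curve: list[Point3] = []
for sample in [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0005), (0.01, 0.0, 0.0)]:
    append_sample(curve, sample)
```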
The 3D rendering device 108 outputs data representative of the augmented 3D rendering of the physical object to the display 110. The display 110 displays the augmented 3D rendering of the physical object. It is noted that detecting the spatial position of the input device 112, generating the augmented 3D rendering and outputting, to the display 110, the data representative of the augmented 3D rendering may be performed in real time. Thus, the user viewing the display 110 sees the curve in the augmented 3D rendering in real time as the user "draws" using the input device 112 (or as the user uses the input device 112 to trace over the outer surface of the physical object 101). It is noted that the term "curve" is used herein to represent any general shape drawn by the user using the input device 112. The curve, for example, may be a straight line or any other shape.
In an embodiment, the tip 120 of the input device 112 may be pressure-sensitive. The input device 112 may sense the pressure applied to the tip by the user as the user operates the input device 112. The pressure may be used to determine a thickness of the curve drawn by the user. The input device 112 may output data representative of the pressure applied to the tip 120. The input device 112 may output the pressure data to the 3D rendering device 108. As described herein, the input device 112 may include a communication device (not shown) operable to communicate with the one or more communication devices 118 of the 3D rendering device 108 and operable to output a signal including the data representative of the pressure applied to the tip 120. The one or more communication devices 118 of the 3D rendering device 108 may receive the signal and output the data representative of the pressure to the processor 114. The processor 114 identifies the pressure based on the received pressure data. The processor 114 renders the curve with a line thickness that corresponds to the identified pressure. The relationship between the pressure and thickness may be proportional, whereby a greater amount of pressure applied by the user results in rendering a thicker curve.
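A minimal sketch of a proportional pressure-to-thickness mapping follows, assuming a normalized pressure reading in the range 0 to 1; the function name and width limits are hypothetical.

```python
def stroke_width_mm(pressure: float, min_width: float = 0.5,
                    max_width: float = 5.0) -> float:
    """Map a normalized tip pressure (0.0-1.0) to a stroke width.

    Greater pressure yields a proportionally thicker curve, clamped to the
    supported width range.
    """
    pressure = max(0.0, min(1.0, pressure))
    return min_width + pressure * (max_width - min_width)

print(stroke_width_mm(0.25))  # 1.625 mm
```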
The processor 114 may evaluate the identified pressure together with the position of the tip 120. The processor 114 generates the curve to be superposed onto the 3D rendering of the physical object based on both the pressure data and the position data. A thickness of the curve at a position in space corresponds to the identified pressure applied to the tip 120 at that position in space.
The plurality of control inputs 124a-124c of the input device 112 may be used to control attributes of the curve. For example, a first control input 124a may be used to select between modes of operation of the input device 112. A first mode of operation may be augmentation of the 3D rendering as described herein, whereby one or more additional curves are superposed on the 3D rendering. A second mode of operation may be modification of the 3D rendering, and a third mode of operation may be distance measurement as described herein. The user may operate the first control input 124a, which may be a multi-pole or a multiway switch, to select the mode of operation from the various available modes of operation.
Similarly, the second and third control inputs 124b, 124c may be used to select attributes of the curve, such as color, style, or thickness of the line making the curve. In an embodiment, the second control input 124b may be used to select a color of the curve, such as red, green or blue, among others, and/or a style of the curve, such as a solid or dashed line, among others. In an embodiment, the third control input 124c may be used to select a static or constant thickness of the curve. The thickness selected using the third control input 124c may override or supersede the thickness determined based on pressure applied to the tip 120. In an embodiment, control input functionality may be user-configurable. For example, a user may specify a control input functionality respectively associated with the control inputs 124a-124c that is different than a default control input functionality of the input device 112.
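For illustration, a hypothetical stroke-settings structure is sketched below, in which a constant width selected on the input device overrides the pressure-derived width; the mode names, fields, and defaults are assumptions, not features of the embodiments above.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    AUGMENT = auto()   # superpose curves on the 3D rendering
    MODIFY = auto()    # refine the 3D model itself
    MEASURE = auto()   # measure a distance traced by the tip

@dataclass
class StrokeSettings:
    mode: Mode = Mode.AUGMENT
    color: str = "red"
    style: str = "solid"                      # e.g., "solid" or "dashed"
    constant_width: Optional[float] = None    # set via a control input such as 124c

    def effective_width(self, pressure_width: float) -> float:
        # A constant width chosen on the input device overrides or supersedes
        # the pressure-derived width.
        return self.constant_width if self.constant_width is not None else pressure_width
```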
It is noted that the input device 112 of FIG. 3 is exemplary and non-limiting. In various embodiments, any other type of input device 112 may be used. The input device 112 may have a different form factor than that illustrated in FIG. 3. In an embodiment, the input device may be a joystick, touchpad, pressure-sensitive pad or wheel, among others. Further, the input device 112 may have more control inputs or fewer control inputs than illustrated in FIG. 3.
The input device 112 outputs, to the 3D rendering device 108, data representative of the selected mode of operation and/or attributes of the curve. The 3D rendering device 108 receives the data representative of the selected mode of operation and/or attributes of the curve and uses the data, together with the data representative of the position of the tip 120, to generate the augmented 3D rendering of the physical object. For example, the 3D rendering device 108 may apply a color to the curve or render the curve to have a thickness that is in accordance with the received attributes.
In addition or as an alternative to augmenting the 3D rendering of the physical object 101, the 3D rendering device 108 may refine or change the 3D rendering of the physical object 101 based on user input provided using the input device 112. The user may use the input device to trace the outer surface of the physical object 101 in order to refine or change (and improve the accuracy of) the 3D rendering of the physical object. For example, the user may trace over the physical object 101 to provide precise positions of the tip 120 at or near the outer surface of the physical object 101. The positions of the tip 120 are then used to change the 3D rendering of the physical object 101 and improve the accuracy of the 3D rendering of the physical object 101.
As the user utilizes the input device 112 to trace the outer surface of the physical object 101, the tracking device 113 tracks the position of the tip. The tracking device 113 outputs data representative of the spatial position of the tip 120 to the 3D rendering device 108. The position may be represented in a Cartesian coordinate system of 3-dimensional space as three coordinates (for example, (x, y, z)) or in a spherical (polar) coordinate system as three coordinates (for example, radial distance, polar angle and azimuthal angle) in relation to a reference point (or point of origin). The position tracking of the input device 112 may have more precise spatial resolution than the 3D scanner 102 that is otherwise used to generate the 3-dimensional model of the physical object, as described above with regard to FIGS. 1 and 2. The 3D rendering device 108 receives the data representing the tracked position of the tip 120 of the input device 112 and, using the tracked position data, adjusts or changes the 3D model that provides the 3D rendering of the physical object.
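As a non-limiting sketch, a tracked position given in spherical coordinates may be converted to Cartesian coordinates as follows; the convention of measuring the polar angle from the +z axis is an assumption.

```python
import math

def spherical_to_cartesian(r: float, polar: float, azimuth: float) -> tuple[float, float, float]:
    """Convert (radial distance, polar angle, azimuthal angle), with angles in
    radians and the polar angle measured from the +z axis, to (x, y, z)."""
    x = r * math.sin(polar) * math.cos(azimuth)
    y = r * math.sin(polar) * math.sin(azimuth)
    z = r * math.cos(polar)
    return (x, y, z)

print(spherical_to_cartesian(1.0, math.pi / 2, 0.0))  # ~(1.0, 0.0, 0.0)
```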
As described herein, the 3D rendering of the physical object may include a plurality of vertices, whereby pairs of vertices are connected by edges of a plurality of edges. The 3D rendering device 108 may set the position of the tip 120 received from the tracking device 113 as a vertex of the plurality of vertices. As such, the 3D rendering of the physical object is adjusted based on the position data received from the tracking device 113. Furthermore, the 3D rendering device 108 may remove an existing vertex of the 3D rendering and replace the removed vertex with a vertex at the received position of the input device 112. The removed vertex may be the vertex whose position in Euclidean space is closest to the received position of the input device 112. The 3D rendering device 108 may remove the vertex and replace it with a new vertex whose position corresponds (or is identical) to the spatial position of the tip 120 received from the tracking device 113. Thus, the 3D rendering device 108 iteratively improves the 3D rendering of the physical object using tracked positional data of the input device 112 as the input device 112 traces portions of the surface of the physical object. Based on the adjustments made to the 3D model of the physical object, the 3D rendering device 108 generates an updated 3D rendering of the physical object 101 and outputs data representative of the updated 3D rendering to the display 110.
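A minimal sketch of the nearest-vertex replacement follows, assuming the mesh vertices are stored as a simple list of 3D points; a practical implementation would likely use a spatial index rather than the linear search shown here.

```python
import math

Point3 = tuple[float, float, float]

def snap_nearest_vertex(vertices: list[Point3], tip_position: Point3) -> None:
    """Replace the mesh vertex closest (in Euclidean distance) to the tracked
    tip position with a vertex at the tip position itself."""
    nearest_index = min(range(len(vertices)),
                        key=lambda i: math.dist(vertices[i], tip_position))
    vertices[nearest_index] = tip_position

# The vertex nearest the traced tip position is replaced by the tip position.
vertices = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)]
snap_nearest_vertex(vertices, (0.102, 0.001, 0.003))
```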
Thus, the 3D rendering device 108 initially generates a 3D model of the physical object 101 based on the data representative of the scanned physical object 101 output by the 3D scanner 102. Then, the 3D rendering device 108 refines the 3D model based on the data representative of the position of the input device 112, or the tip 120 thereof, as the input device 112 traces portions of the surface of the physical object. Accordingly, the 3D rendering device 108 incrementally improves the 3D rendering of the physical object.
In an embodiment, the system 106 may be used to measure distances in space. The distance, which may be a Euclidean distance, may lie anywhere in space. The distance may, for example, be between two points on an outer surface of the physical object 101. To measure a distance, a user may place the tip 120 of the input device 112 at a first point and move the tip 120 along the surface of the physical object to a second point that is different from the first point.
When the tip 120 is at the first point, the tracking device 113 identifies a first spatial position of the tip and outputs the first position data to the 3D rendering device 108. The 3D rendering device 108 stores the first position data in the memory 116. The user then moves the tip 120 of the input device 112 along the surface of the physical object to the second point. The tracking device 113 identifies a second position associated with the second point in space. The tracking device 113 outputs the second position to the 3D rendering device 108. Having received the first and second positions, the 3D rendering device 108 determines the Euclidean distance between the first and second positions. The 3D rendering device 108 then outputs data indicative of the distance to the display 110 to be displayed to the user, or to any other device that outputs the distance to the user.
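As a non-limiting sketch, the straight-line (Euclidean) distance between the first and second tracked positions may be computed as follows; the function name is hypothetical.

```python
import math

def euclidean_distance(p1: tuple[float, float, float],
                       p2: tuple[float, float, float]) -> float:
    """Straight-line distance between the first and second tracked positions."""
    return math.dist(p1, p2)

print(euclidean_distance((0.0, 0.0, 0.0), (0.3, 0.4, 0.0)))  # 0.5
```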
It is noted that in various embodiments, the distance may be a linear distance between two points, such as the first and second points. In addition or alternatively, the distance may be a length of an arc or a curve traced by the tip 120 of the input device 112. As the user traces a curve, the tracking device 113 determines the spatial position of the tip 120 in real time and outputs data representative of the position to the 3D rendering device 108. It is recognized that it may be advantageous for the user to trace a curve or an arc slowly to allow the tracking device 113 to determine various positions of the tip 120 in small distance increments in relation to each other and with greater granularity. Identifying the displacement of the tip 120 in smaller increments leads to improved accuracy in determining the length of a curve.
It is noted that in various embodiments, the tracking device 113 may be part of the 3D scanner 102, or the tracking device 113 may be dispensed with and the 3D scanner 102 may perform the tracking functions performed by the tracking device 113. Accordingly, the 3D scanner 102 may track the spatial position of the input device 112 and output data representative of the tracked position to the 3D rendering device 108. The tracking device 113 may be an outside-in tracking device in which cameras or other sensors at fixed locations, oriented towards the input device 112, track movement of the input device as it moves within the visual ranges of the cameras or other sensors. Furthermore, the tracking device 113 may be part of or included in the head-mounted display or the 3D rendering device 108. Alternatively or in addition, the display 110 may include inside-out tracking, whereby the display 110 may include a camera that "looks out" on or observes an external surrounding environment or space to determine a position of the display 110 or the input device 112 in relation to the environment or space.
FIG. 4 shows a flow diagram of a method 400 for augmenting a 3D rendering of an object. In the method 400, a 3D scanner, such as the 3D scanner 102 described with reference to FIG. 1, scans an outer surface of a physical object at 402. At 404, a 3D rendering device, such as the 3D rendering device 108 described with reference to FIG. 2, generates a 3D model of the object based on the data resulting from scanning the outer surface at 402. The 3D model of the object may include a plurality of vertices, a plurality of edges, and a plurality of surfaces determined from the 3D scanning of the outer surface of the object.
At 406, a display, such as the display 110 described with reference to FIG. 2, displays a 3D rendering of the physical object based on the generated 3D model. The display may be a virtual reality (VR) or an augmented reality (AR) display. The physical object may be transparently visible through the display. The display may superpose the 3D rendering over the physical object that is otherwise visible through the display. At 408, a tracking device, such as the tracking device 113 described with reference to FIG. 2, tracks a positioning of an input device as the input device physically traces over at least one portion of the outer surface of the object. At 410, the tracking device identifies at least one spatial position of the input device as the input device traces over the outer surface of the object.
At 412, the 3D rendering device augments the 3D rendering of the object based at least in part on the tracked position or positions of the input device. A user may physically trace the input device over a portion of an outer surface of the physical object to draw a curve or any shape. The tracking device tracks the input device as the user physically traces the input device over the outer surface of the physical object. Data representing the spatial position of the input device is provided to the 3D rendering device, which uses the data to determine the shape of the curve as well as the position of the curve in relation to the 3D rendering of the object. The 3D rendering device augments the 3D rendering to include a rendering of the curve. The display displays the augmented 3D rendering of the object at 414.
FIG. 5 shows a flow diagram of a method 500 for modifying a 3D rendering of an object based on a tracked position of an input device. Steps 502, 504, 506, 508 and 510 of the method 500 are similar to steps 402, 404, 406, 408 and 410 of the method 400 described with reference to FIG. 4. The method 500 includes scanning an outer surface of a physical object at 502, generating a 3D model of the object at 504 based on the scanning of the outer surface, and displaying at 506 a 3D rendering of the object based on the generated 3D model. The method 500 also includes tracking at 508 a positioning of an input device as the input device physically traces over at least one portion of the outer surface of the object and identifying at 510 at least one spatial position of the input device as the input device traces over the outer surface of the object.
The user may physically trace the input device over the surface of the physical object in order to provide more precise physical coordinates of the surface of the physical object. By tracing the input device over or positioning the input device at the surface of the physical object while the input device is being tracked, the user effectively provides the positioning (or the coordinates) of the surface. The more precise data reflecting the positioning of the surface can be used to modify and enhance the 3D rendering of the physical object (for example, in the event that the 3D scanning of the object is inaccurate).
Thus, as opposed to augmenting the 3D rendering, the method 500 proceeds to modifying at 512, by the 3D rendering device, the 3D model of the object based at least in part on the tracked position or positions of the input device. The tracked spatial position of the input device is used to refine or enhance the accuracy of the 3D model of the object rather than to augment or add to the 3D rendering. As described herein, the position of the input device may be included as a vertex in the modified 3D model of the object. Following modification of the 3D model of the object, the display displays at 514 an updated 3D rendering of the object based on the modified 3D model of the object.
FIG. 6 shows a flow diagram of a method 600 for distance measurement based on a tracked position or positions of an input device. In the method 600, a tracking device tracks, at 602, a spatial positioning of an input device as the input device traces over at least one portion of an outer surface of a physical object. The user may trace over the outer surface of the physical object to measure a distance between two points or positions along the outer surface of the physical object. The tracking device identifies, at 604, the at least two positions of the input device as the input device traces over the object.
The 3D rendering device determines a distance between the at least two positions at 606. The distance may be a Euclidean distance between the at least two positions. The distance may be a linear distance along a straight line or a distance along a curve traversed by the input device. The curve traversed by the input device may be approximated by a plurality of short line segments extending between multiple sensed positions of the input device as the input device traversed the curve. The distance along the curve may be determined by summing the individual lengths of the short line segments. The 3D rendering device outputs data representative of the distance at 608, which may be displayed on the display.
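As a non-limiting sketch, the length of the traced curve may be approximated by summing the lengths of the short line segments between consecutive tracked positions; the function name and the sample trace are hypothetical.

```python
import math

def curve_length(positions: list[tuple[float, float, float]]) -> float:
    """Approximate the length of the traced curve as the sum of straight-line
    segments between consecutive tracked tip positions."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

# Tracing a quarter circle of radius 1 in small increments approaches pi/2.
samples = [(math.cos(t / 100 * math.pi / 2), math.sin(t / 100 * math.pi / 2), 0.0)
           for t in range(101)]
print(curve_length(samples))  # ~1.5707
```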
The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.