





The present disclosure relates to an image processing system and an image processing method, and more particularly to an image processing system and an image processing method applied to augmented reality.
Generally speaking, augmented reality technology requires a camera capable of capturing image depth in order to obtain the distance between a real object and the camera, so that a virtual object can be placed at a position in the frame according to the image depth. However, a typical depth camera can only obtain the image depth of an environment within about 2 to 3 meters of the camera. When a user wants to render a large virtual object on a large outdoor object, for example, rendering a huge virtual octopus onto the real scene of a high-rise building and presenting the picture on a display, the depth camera cannot obtain the correct depth of the large outdoor object, so that the objects in the rendered picture are improperly placed relative to one another (for example, an erroneous occlusion effect).
Therefore, how to accurately combine a virtual image with a large object in augmented reality has become one of the problems to be solved.
According to one aspect of the present disclosure, an image processing system is provided, including a camera, a positioning device, a processor, and a display. The camera is configured to capture a real image. The positioning device is configured to detect a camera position of the camera. The processor is coupled to the camera and the positioning device and is configured to receive a high-precision map and a virtual object. The processor calculates a camera pose of the camera by a simultaneous localization and mapping (SLAM) algorithm and the camera position, and projects a depth image according to the camera pose and three-dimensional information of the high-precision map. The processor superimposes the depth image and the real image to generate a superimposed image, and superimposes the virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image. The display is coupled to the processor and is configured to display the rendered image.
According to another aspect of the present disclosure, an image processing method is provided, including: capturing a real image by a camera; detecting a camera position of the camera by a positioning device; calculating a camera pose of the camera by a simultaneous localization and mapping algorithm and the camera position; projecting, by a processor, a depth image according to the camera pose and three-dimensional information of a high-precision map; superimposing, by the processor, the depth image and the real image to generate a superimposed image; superimposing, by the processor, a virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image; and displaying the rendered image by a display.
In summary, the image processing system and image processing method of the present disclosure use information such as a high-precision map and a camera pose to transform the real scene and project a depth image, and accurately superimpose the depth image and the real image to generate a superimposed image. Each point in the superimposed image contains depth information, so that the virtual object can be mapped, according to its virtual coordinates, to a corresponding position in the superimposed image and rendered onto it. In this way, a large object in the real image can be accurately combined with the virtual object.
Please refer to FIGS. 1 and 2. FIG. 1 is a block diagram of an image processing system 100 according to an embodiment of the present disclosure. FIG. 2 is a flowchart of an image processing method 200 according to an embodiment of the present disclosure. In one embodiment, the image processing system 100 includes a camera 10, a positioning device 20, a processor 30, and a display 40. The processor 30 is coupled to the camera 10 and the positioning device 20. The display 40 is coupled to the processor 30.
In one embodiment, the camera 10 may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor, the positioning device 20 may be a Global Positioning System (GPS) locator, and the processor 30 may be implemented as a microcontroller, a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), or a logic circuit.
In one embodiment, the display 40 may be the display device of a handheld electronic device (e.g., a mobile phone or a tablet) or the display device of a head-mounted device.
In one embodiment, please refer to FIG. 2 and FIGS. 3A to 3D. FIGS. 3A to 3D are schematic diagrams of an image processing method according to an embodiment of the present disclosure. The flow of the image processing method 200 is described in detail below. The elements mentioned in the image processing method 200 may be implemented by the elements described in FIG. 1.
In step 210, the camera 10 captures a real image (for example, FIG. 3C).
In step 220, the positioning device 20 detects a camera position of the camera 10. In one embodiment, the positioning device 20 of the image processing system 100 is a Global Positioning System locator, which can be used to locate the camera position of the camera 10.
In step 230, the processor 30 receives a high-precision map and a virtual object.
In one embodiment, the high-precision map may be prepared in advance. For example, the processor 30 may obtain the high-precision map stored in a remote storage device through a network, or read the high-precision map from a storage device of the image processing system 100 itself. In one embodiment, the high-precision map contains three-dimensional information. For example, as shown in FIG. 3A, the three-dimensional information includes a three-dimensional composition of the real image or height information of objects in the real image (e.g., buildings OBJ1 and OBJ2).
In one embodiment, the virtual object may be prepared in advance. It may be a three-dimensional virtual object, and the virtual coordinates of each point in the virtual object may also be defined. For example, the virtual object shown in FIG. 3B is an octopus VI; a user draws the octopus VI in advance through the processor 30 and defines the virtual coordinates of each point on the image of the octopus VI.
In step 240, the processor 30 calculates a camera pose of the camera 10.
In one embodiment, the processor 30 calculates the camera pose of the camera 10 by a simultaneous localization and mapping (SLAM) algorithm and the camera position. In one embodiment, after the camera 10 continuously captures multiple images of a scene from different viewing angles, the SLAM algorithm is applied to match the corresponding positions of feature points on the same objects among these images. By superimposing the corresponding positions of these feature points in different images, a three-dimensional map of the scene can be generated and the position of the camera 10 can be located. In this way, the camera pose of the camera 10 (e.g., placed horizontally, tilted, or vertically) and/or the shooting angle (e.g., the relative position of the camera 10 and a building) can be derived from the feature points.
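The specification does not prescribe a particular SLAM implementation, but the step of recovering a camera pose from matched feature points can be illustrated with a minimal Direct Linear Transform (DLT) sketch in Python. The function names here are hypothetical, and a production SLAM pipeline would additionally perform feature matching, coordinate normalization, and outlier rejection (e.g., RANSAC):

```python
import numpy as np

def project(P, pts):
    """Project (N, 3) world points through a 3x4 projection matrix P."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = (P @ pts_h.T).T
    return proj[:, :2] / proj[:, 2:3]

def dlt_pose(world_pts, image_pts):
    """Estimate a 3x4 projection matrix (up to scale) from six or more
    3D-2D feature-point correspondences with the Direct Linear Transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = np.array([X, Y, Z, 1.0])
        # Two linear constraints per correspondence: x cross (P X) = 0.
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    A = np.array(rows)
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

Given the recovered projection matrix, the rotation and translation of the camera (and hence its posture and shooting angle) can be factored out by standard decomposition.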
In step 250, the processor 30 projects a depth image according to the camera pose and three-dimensional information of the high-precision map.
In one embodiment, the processor 30 maps each point coordinate in the high-precision map to a camera coordinate through a coordinate transformation function, so that it can be projected onto the display 40.
For example, the processor 30 maps the three-dimensional information in the high-precision map, such as world coordinates (denoted wc in the coordinate transformation function), into camera coordinates (denoted cc) by the following coordinate transformation function:

[x y z]^T_cc = [R | t] [X Y Z 1]^T_wc

Each coordinate (X, Y, Z) in the high-precision map is substituted into this transformation function, and by adjusting the rotation parameter R and the translation parameter t, each point of the high-precision map can be projected into camera coordinates. In other words, the coordinate transformation function transforms the high-precision map containing three-dimensional information so that it can be projected onto the display 40 (e.g., a mobile phone screen).
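As a minimal sketch, the coordinate transformation function above can be written directly with NumPy (the function name `world_to_camera` is illustrative, not part of the disclosure):

```python
import numpy as np

def world_to_camera(points_wc, R, t):
    """Apply [x y z]^T_cc = [R|t][X Y Z 1]^T_wc to an (N, 3) array of
    world coordinates taken from the high-precision map."""
    Rt = np.hstack([R, t.reshape(3, 1)])                          # 3x4 matrix [R|t]
    pts_h = np.hstack([points_wc, np.ones((len(points_wc), 1))])  # homogeneous coords
    return (Rt @ pts_h.T).T                                       # (N, 3) camera coords
```

For example, with R the identity rotation and t = (0, 0, 5), the map point (1, 2, 3) maps to the camera coordinate (1, 2, 8).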
In one embodiment, the processor 30 may apply a known image localization and tracking technique or perspective projection to project the depth image.
In one embodiment, the depth image is, for example, as shown in FIG. 3B. It is a grayscale image in which objects closer to the camera 10 are brighter (e.g., the grayscale block a, whose position corresponds to the building image a' in the real image of FIG. 3C) and objects farther from the camera 10 are darker (e.g., the grayscale block c, whose position corresponds to the building image c' in the real image of FIG. 3C); the position of the grayscale block b (which corresponds to the building image b' in the real image of FIG. 3C) lies between the grayscale block a and the grayscale block c. In addition, the depth image may contain a virtual image VI drawn in advance, such as the octopus VI in FIG. 3B, which may be a three-dimensional image containing predefined virtual coordinates and colors.
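The grayscale convention described above (nearer points brighter, farther points darker) can be sketched as a simple linear mapping. The `near` and `far` clipping distances are assumed parameters that the disclosure does not specify:

```python
import numpy as np

def depth_to_grayscale(depth, near, far):
    """Map camera-space depth values to 8-bit gray levels so that, as in
    FIG. 3B, nearer points appear brighter and farther points darker."""
    d = np.clip(depth, near, far)
    # Linear ramp: depth == near -> 255 (brightest), depth == far -> 0 (darkest).
    gray = 255.0 * (far - d) / (far - near)
    return gray.astype(np.uint8)
```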
In step 260, the processor 30 superimposes the depth image and the real image to generate a superimposed image. For example, the processor 30 superimposes the depth image shown in FIG. 3B and the real image shown in FIG. 3C to generate the superimposed image.
In one embodiment, the processor 30 compares grayscale edge information in the grayscale image (e.g., the edge L in FIG. 3B) with actual edge information in the real image (e.g., the edge L' in FIG. 3C), and rotates or translates one of the grayscale image and the real image (e.g., rotating or translating the grayscale image so that the edge L is aligned with the edge L' in the real image) so that the depth image and the real image are superimposed. In one embodiment, multiple pieces of edge information in the depth image and the real image may be compared, and one of the depth image and the real image may be rotated and translated, so that the depth image and the real image are superimposed more precisely.
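The translation part of this alignment can be illustrated with a brute-force search over binary edge maps (rotation is omitted for brevity; the function name and the `max_shift` search radius are assumptions, not part of the disclosure):

```python
import numpy as np

def best_shift(edges_depth, edges_real, max_shift=5):
    """Search for the pixel translation (dy, dx) that best aligns a binary
    edge map of the depth image with the edge map of the real image."""
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(edges_depth, dy, axis=0), dx, axis=1)
            score = np.sum(shifted & edges_real)  # count of overlapping edge pixels
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

A practical implementation would score many detected edges jointly and include rotation in the search, as the embodiment suggests.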
In one embodiment, each point in the superimposed image contains depth information, which may be expressed, for example, as coordinates or as a depth value.
In one embodiment, the superimposed image contains the coordinate information of each point in the real image.
In step 270, the processor 30 superimposes the virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image.
In one embodiment, rendering is defined as adding a virtual image or a virtual object into a real image.
In one embodiment, since the superimposed image contains the coordinate information of each point in the real image, the processor 30 superimposes the virtual object onto the superimposed image according to this coordinate information and the virtual coordinates of the virtual object.
In one embodiment, the processor 30 superimposes the virtual object (e.g., the octopus VI in FIG. 3B) onto the superimposed image according to the virtual coordinate of each of its points, thereby generating the rendered image shown in FIG. 3D. Since the superimposed image contains the coordinate information of every point in the image, and every virtual coordinate point of the virtual object (e.g., the octopus VI) is defined in advance, the octopus VI can be rendered onto the superimposed image according to its virtual coordinates. In this example, only two legs of the octopus VI have coordinates located in front of the building a', while the coordinates of the other legs are located behind the building a'; therefore, only the two legs not occluded by the building a' are visible in FIG. 3D. It follows that the position of each point of the octopus VI after being rendered onto the superimposed image can be correctly calculated.
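The occlusion behavior described here, where the building hides the legs whose coordinates lie behind it, amounts to a per-pixel depth test. The following is a minimal sketch, assuming the superimposed image supplies a per-pixel scene depth and the rasterized virtual object supplies its own color and depth buffers (all names are illustrative):

```python
import numpy as np

def composite_with_occlusion(real_rgb, scene_depth, virt_rgb, virt_depth):
    """Per-pixel depth test: a virtual-object pixel is drawn only where it
    is closer to the camera than the real scene, reproducing the effect in
    which the building a' hides part of the octopus VI."""
    out = real_rgb.copy()
    visible = virt_depth < scene_depth   # virtual point lies in front of the scene
    out[visible] = virt_rgb[visible]
    return out
```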
In step 280, the display 40 displays the rendered image.
In summary, the image processing system and image processing method of the present disclosure use information such as a high-precision map and a camera pose to transform the real scene and project a depth image, and accurately superimpose the depth image and the real image to generate a superimposed image. Each point in the superimposed image contains depth information, so that the virtual object can be mapped, according to its virtual coordinates, to a corresponding position in the superimposed image and rendered onto it. In this way, a large object in the real image can be accurately combined with the virtual object.
Although the present disclosure has been disclosed above by way of embodiments, they are not intended to limit the present disclosure. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the present disclosure; therefore, the scope of protection of the present disclosure shall be defined by the appended claims.
100‧‧‧image processing system
10‧‧‧camera
20‧‧‧positioning device
30‧‧‧processor
40‧‧‧display
200‧‧‧image processing method
210~280‧‧‧steps
OBJ1, OBJ2‧‧‧buildings
VI‧‧‧octopus
a, b, c‧‧‧grayscale blocks
a', b', c'‧‧‧building images
To make the above and other objects, features, advantages, and embodiments of the present disclosure more comprehensible, the accompanying drawings are described as follows: FIG. 1 is a block diagram of an image processing system according to an embodiment of the present disclosure; FIG. 2 is a flowchart of an image processing method according to an embodiment of the present disclosure; and FIGS. 3A to 3D are schematic diagrams of an image processing method according to an embodiment of the present disclosure.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW107120249A | 2018-06-12 | 2018-06-12 | Image processing system and image processing method |
| Publication Number | Publication Date |
|---|---|
| TW202001805A | 2020-01-01 |
| TWI691932B | 2020-04-21 |