TW202001805A - Image processing system and image processing method - Google Patents

Image processing system and image processing method

Info

Publication number
TW202001805A
Authority
TW
Taiwan
Prior art keywords
image
camera
processor
real
image processing
Prior art date
Application number
TW107120249A
Other languages
Chinese (zh)
Other versions
TWI691932B (en)
Inventor
魏守德
陳韋志
Original Assignee
大陸商光寶電(廣州)有限公司
光寶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商光寶電(廣州)有限公司, 光寶科技股份有限公司
Priority to TW107120249A
Publication of TW202001805A
Application granted
Publication of TWI691932B

Abstract

An image processing system comprises a camera, a positioning device, a processor, and a display. The camera captures a real image. The positioning device locates a camera position of the camera. The processor receives a high-precision map and a virtual object and calculates a camera pose. According to the camera pose and three-dimensional information of the high-precision map, the processor projects a depth image, superimposes the depth image and the real image to generate a superimposed image, and superimposes the virtual object onto the superimposed image according to a virtual coordinate of the virtual object to produce a rendered image. The display displays the rendered image.

Description

Translated from Chinese
Image processing system and image processing method

The present disclosure relates to an image processing system and image processing method, and in particular to an image processing system and image processing method applied to augmented reality.

Generally speaking, augmented reality requires a depth-sensing camera to obtain the distance between real objects and the camera; a virtual object is then placed in the frame according to the image depth. However, a typical depth camera can only measure the depth of the environment within about 2 to 3 meters of the camera. When a user wants to render a large virtual object onto a large outdoor object, for example rendering a huge virtual octopus onto the real scene of a high-rise building and presenting that frame on a display, the depth camera cannot obtain the correct depth of the large outdoor object, so the rendered frame places objects incorrectly relative to one another (for example, with erroneous occlusion effects).

Therefore, how to accurately combine virtual images with large objects in augmented reality has become one of the problems to be solved.

According to one aspect of the present disclosure, an image processing system is provided, comprising a camera, a positioning device, a processor, and a display. The camera captures a real image. The positioning device detects a camera position of the camera. The processor, coupled to the camera and the positioning device, receives a high-precision map and a virtual object, calculates a camera pose of the camera by a simultaneous localization and mapping (SLAM) algorithm and the camera position, and projects a depth image according to the camera pose and three-dimensional information of the high-precision map. The processor superimposes the depth image and the real image to generate a superimposed image, and superimposes the virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image. The display, coupled to the processor, displays the rendered image.

According to another aspect of the present disclosure, an image processing method is provided, comprising: capturing a real image with a camera; locating a camera position of the camera with a positioning device; calculating a camera pose of the camera by a simultaneous localization and mapping algorithm and the camera position; projecting, by a processor, a depth image according to the camera pose and three-dimensional information of a high-precision map; superimposing, by the processor, the depth image and the real image to generate a superimposed image; superimposing, by the processor, a virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image; and displaying the rendered image on a display.

In summary, the image processing system and image processing method of the present disclosure use the high-precision map, the camera pose, and related information to project a depth image and accurately superimpose it on the real image to produce a superimposed image. Every point in the superimposed image carries depth information, so the virtual object can be mapped to its position in the superimposed image according to its virtual coordinates and rendered onto it. In this way, large objects in the real image can be precisely combined with virtual objects.

Please refer to FIGS. 1 and 2. FIG. 1 is a block diagram of an image processing system 100 according to an embodiment of the present disclosure. FIG. 2 is a flowchart of an image processing method 200 according to an embodiment of the present disclosure. In one embodiment, the image processing system 100 includes a camera 10, a positioning device 20, a processor 30, and a display 40. The processor 30 is coupled to the camera 10 and the positioning device 20. The display 40 is coupled to the processor 30.

In one embodiment, the camera 10 may use a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, the positioning device 20 may be a Global Positioning System (GPS) locator, and the processor 30 may be implemented as a microcontroller, a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), or a logic circuit.

In one embodiment, the display 40 may be the display device of a handheld electronic device (such as a mobile phone or a tablet) or of a head-mounted device.

In one embodiment, please refer to FIG. 2 and FIGS. 3A–3D. FIGS. 3A–3D are schematic diagrams of an image processing method according to an embodiment of the present disclosure. The flowchart of the image processing method 200 is described in detail below. The elements mentioned in the image processing method 200 may be implemented by the elements described in FIG. 1.

In step 210, the camera 10 captures a real image (for example, FIG. 3C).

In step 220, the positioning device 20 detects a camera position of the camera 10. In one embodiment, the positioning device 20 in the image processing system 100 is a GPS locator, which can be used to locate the camera position of the camera 10.

In step 230, the processor 30 receives a high-precision map and a virtual object.

In one embodiment, the high-precision map may be prepared in advance. For example, the processor 30 may obtain a high-precision map stored in a remote storage device through a network, or read it from a storage device of the image processing system 100 itself. In one embodiment, the high-precision map contains three-dimensional information; for example, as shown in FIG. 3A, the three-dimensional information includes a three-dimensional composition of the real image or height information of objects in the real image (such as buildings OBJ1 and OBJ2).

In one embodiment, the virtual object may be prepared in advance. It may be a three-dimensional virtual object, and the virtual coordinates of each point of the virtual object may also be defined. For example, the virtual object shown in FIG. 3B is an octopus VI: a user draws the octopus VI in advance through the processor 30 and defines the virtual coordinates of each point of its image.

In step 240, the processor 30 calculates a camera pose of the camera 10.

In one embodiment, the processor 30 calculates the camera pose of the camera 10 by a simultaneous localization and mapping (SLAM) algorithm and the camera position. In one embodiment, after the camera 10 continuously captures multiple images of a scene from different viewpoints, the SLAM algorithm matches the corresponding positions of feature points on the same objects across these images. By overlaying the corresponding feature points of the different images, a three-dimensional map of the scene can be generated and the location of the camera 10 can be determined. From the feature points, the camera pose of the camera 10 (for example, horizontal, tilted, or vertical placement) and/or its viewing angle (for example, the position of the camera 10 relative to a building) can then be inferred.

In step 250, the processor 30 projects a depth image according to the camera pose and three-dimensional information of the high-precision map.

In one embodiment, the processor 30 maps each point coordinate of the high-precision map to a camera coordinate through a coordinate conversion function, so that it can be projected onto the display 40.

For example, the processor 30 maps the three-dimensional information of the high-precision map, expressed in world coordinates (denoted wc in the conversion function), into camera coordinates (denoted cc) through the following coordinate conversion function:

[x y z]ᵀ_cc = [R | t] [X Y Z 1]ᵀ_wc

Each coordinate (X, Y, Z) of the high-precision map is substituted into this conversion function, and by adjusting the rotation parameter R and the translation parameter t, each point of the high-precision map can be projected into camera coordinates. In other words, the coordinate conversion function transforms the high-precision map with its three-dimensional information so that it can be projected onto the display 40 (for example, a mobile phone screen).
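As a minimal illustration of the conversion function above, the sketch below applies [x y z]ᵀ_cc = [R|t][X Y Z 1]ᵀ_wc to a batch of world-coordinate points with NumPy. The function name `world_to_camera` and the sample rotation/translation values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def world_to_camera(points_wc, R, t):
    """Apply [x y z]^T_cc = [R|t][X Y Z 1]^T_wc to a batch of
    world-coordinate points, returning camera coordinates."""
    points_wc = np.asarray(points_wc, dtype=float)                 # (N, 3) world points
    Rt = np.hstack([R, np.reshape(t, (3, 1))])                     # 3x4 extrinsic matrix [R|t]
    homog = np.hstack([points_wc, np.ones((len(points_wc), 1))])   # (N, 4) homogeneous points
    return (Rt @ homog.T).T                                        # (N, 3) camera coordinates

# Illustrative pose: identity rotation, translation t = (0, 0, 2)
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
pts_cc = world_to_camera([[1.0, 0.0, 5.0]], R, t)
# pts_cc[0] is [1.0, 0.0, 7.0]: the world point shifted 2 units along the camera's Z axis
```

With a calibrated intrinsic matrix, the resulting camera coordinates could then be perspective-divided onto the display, as the next paragraph notes.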

In one embodiment, the processor 30 may apply known image localization and tracking techniques or perspective projection to project the depth image.

In one embodiment, the depth image is a grayscale image, as shown in FIG. 3B. Objects closer to the camera 10 appear brighter (for example, grayscale block a, whose position corresponds to building image a' in the real image of FIG. 3C), and objects farther from the camera 10 appear darker (for example, grayscale block c, corresponding to building image c' in FIG. 3C); grayscale block b (corresponding to building image b' in FIG. 3C) lies between blocks a and c. In addition, the depth image may include a previously drawn virtual image VI, for example the octopus VI in FIG. 3B, which may be a three-dimensional image with predefined virtual coordinates and colors.
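The near-bright/far-dark grayscale convention described above can be sketched as follows. The `near`/`far` clipping range is an assumption for illustration; the patent does not specify one.

```python
import numpy as np

def depth_to_grayscale(depth_m, near=1.0, far=100.0):
    """Map per-pixel depth (meters) to an 8-bit gray level:
    objects at `near` render white (255), objects at `far` render black (0)."""
    d = np.clip(np.asarray(depth_m, dtype=float), near, far)
    brightness = 1.0 - (d - near) / (far - near)   # 1.0 when near, 0.0 when far
    return np.round(brightness * 255).astype(np.uint8)

gray = depth_to_grayscale(np.array([[1.0, 50.5, 100.0]]))
# gray is [[255, 128, 0]]: nearer pixels brighter, farther pixels darker
```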

In step 260, the processor 30 superimposes the depth image and the real image to generate a superimposed image. For example, the processor 30 superimposes the depth image shown in FIG. 3B and the real image shown in FIG. 3C to generate the superimposed image.

In one embodiment, the processor 30 compares grayscale edge information in the grayscale image (for example, edge L in FIG. 3B) with actual edge information in the real image (for example, edge L' in FIG. 3C), and rotates or translates one of the grayscale image and the real image (for example, rotating or translating the grayscale image so that edge L aligns with edge L' in the real image) so that the depth image and the real image are superimposed. In one embodiment, multiple pieces of edge information in the depth image and the real image can be compared, and by rotating and translating one of the two images, the depth image and the real image can be superimposed even more precisely.
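One minimal way to realize the edge-comparison step, assuming integer pixel shifts and ignoring rotation, is an exhaustive search for the translation that maximizes edge overlap. The function name and parameters below are illustrative; this is a sketch of the idea, not the patent's actual algorithm.

```python
import numpy as np

def best_shift(depth_edges, real_edges, max_shift=3):
    """Search integer translations (dy, dx) of the depth-image edge map
    for the one whose edge pixels coincide most with the real-image
    edge map. Rotation is omitted for brevity."""
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(depth_edges, dy, axis=0), dx, axis=1)
            score = int(np.sum(shifted & real_edges))  # count of overlapping edge pixels
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# A short vertical edge at column 4 of the real image, column 2 of the depth image:
real = np.zeros((8, 8), dtype=bool); real[2:6, 4] = True
depth = np.zeros((8, 8), dtype=bool); depth[2:6, 2] = True
# best_shift(depth, real) is (0, 2): shifting the depth edges 2 px right aligns them
```

A production system would search rotations as well and typically use a coarse-to-fine scheme rather than brute force.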

In one embodiment, every point in the superimposed image contains depth information, which may be expressed, for example, as coordinates or as a depth value.

In one embodiment, the superimposed image contains the coordinate information of each point in the real image.

In step 270, the processor 30 superimposes the virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image.

In one embodiment, rendering is defined as adding a virtual image or virtual object into a real image.

In one embodiment, since the superimposed image contains the coordinate information of each point in the real image, the processor 30 superimposes the virtual object onto the superimposed image according to this coordinate information and the virtual coordinates of the virtual object.

In one embodiment, the processor 30 superimposes the virtual object (such as the octopus VI in FIG. 3B) onto the superimposed image according to the virtual coordinates of each of its points, thereby generating the rendered image shown in FIG. 3D. Because the superimposed image contains the coordinate information of every point in the image, and every virtual coordinate point of the virtual object (such as the octopus VI) is defined in advance, the octopus VI can be rendered onto the superimposed image according to its virtual coordinates. In this example, only two legs of the octopus VI have coordinates in front of building a', while the coordinates of the other legs lie behind building a'; therefore only the two legs not occluded by building a' are visible in FIG. 3D. This shows that the position of every point of the octopus VI after rendering onto the superimposed image can be computed correctly.
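The occlusion behavior in this example (octopus legs hidden behind a building) amounts to a per-pixel depth test. A minimal sketch, assuming per-pixel depths are already available for both the scene and the virtual object; all names are illustrative:

```python
import numpy as np

def composite_with_occlusion(real_rgb, scene_depth, virt_rgb, virt_depth):
    """Overwrite a real-image pixel with the virtual object's pixel only
    where the virtual point lies in front of the scene geometry; pixels
    not covered by the virtual object should carry virt_depth = inf."""
    out = real_rgb.copy()
    in_front = virt_depth < scene_depth        # per-pixel depth test
    out[in_front] = virt_rgb[in_front]
    return out

# 1x2 image: scene geometry at depth 10; one virtual pixel in front (5), one behind (20)
real = np.zeros((1, 2, 3), dtype=np.uint8)       # black real image
scene_d = np.full((1, 2), 10.0)
virt = np.full((1, 2, 3), 255, dtype=np.uint8)   # white virtual object
virt_d = np.array([[5.0, 20.0]])
out = composite_with_occlusion(real, scene_d, virt, virt_d)
# out[0, 0] takes the virtual colour (255, 255, 255); out[0, 1] keeps the real colour (0, 0, 0)
```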

In step 280, the display 40 displays the rendered image.

In summary, the image processing system and image processing method of the present disclosure use the high-precision map, the camera pose, and related information to project a depth image and accurately superimpose it on the real image to produce a superimposed image. Every point in the superimposed image carries depth information, so the virtual object can be mapped to its position in the superimposed image according to its virtual coordinates and rendered onto it. In this way, large objects in the real image can be precisely combined with virtual objects.

Although the present disclosure has been described above by way of embodiments, it is not intended to limit the disclosure. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the disclosure; therefore, the scope of protection shall be defined by the appended claims.

100 image processing system; 10 camera; 20 positioning device; 30 processor; 40 display; 200 image processing method; 210–280 steps; OBJ1, OBJ2 buildings; VI octopus; a, b, c grayscale blocks; a', b', c' building images

To make the above and other objects, features, advantages, and embodiments of the disclosure more comprehensible, the accompanying drawings are described as follows: FIG. 1 is a block diagram of an image processing system according to an embodiment of the present disclosure; FIG. 2 is a flowchart of an image processing method according to an embodiment of the present disclosure; and FIGS. 3A–3D are schematic diagrams of an image processing method according to an embodiment of the present disclosure.

200 image processing method

210–280 steps

Claims (10)

Translated from Chinese
1. An image processing system, comprising: a camera for capturing a real image; a positioning device for locating a camera position of the camera; a processor coupled to the camera and the positioning device and receiving a high-precision map and a virtual object, wherein the processor calculates a camera pose of the camera by a simultaneous localization and mapping (SLAM) algorithm and the camera position, projects a depth image according to the camera pose and three-dimensional information of the high-precision map, superimposes the depth image and the real image to generate a superimposed image, and superimposes the virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image; and a display coupled to the processor for displaying the rendered image.

2. The image processing system of claim 1, wherein the depth image is a grayscale image, and the processor compares grayscale edge information in the grayscale image with actual edge information in the real image, and rotates or translates one of the grayscale image and the real image so that the depth image and the real image are superimposed.

3. The image processing system of claim 1, wherein the superimposed image contains coordinate information of each point in the real image, and the processor superimposes the virtual object onto the superimposed image according to the coordinate information and the virtual coordinate of the virtual object.

4. The image processing system of claim 1, wherein the three-dimensional information of the high-precision map comprises a three-dimensional composition of the real image or height information of objects in the real image.

5. The image processing system of claim 1, wherein the processor maps each point coordinate in the high-precision map through a coordinate conversion function to a camera coordinate displayable on the display.

6. An image processing method, comprising: capturing a real image with a camera; locating a camera position of the camera with a positioning device; calculating, by a processor, a camera pose of the camera according to a simultaneous localization and mapping (SLAM) algorithm and the camera position; projecting, by the processor, a depth image according to the camera pose and three-dimensional information of a high-precision map; superimposing, by the processor, the depth image and the real image to generate a superimposed image; superimposing, by the processor, a virtual object onto the superimposed image according to a virtual coordinate of the virtual object to generate a rendered image; and displaying the rendered image on a display.

7. The image processing method of claim 6, wherein the depth image is a grayscale image, the method further comprising: comparing grayscale edge information in the grayscale image with actual edge information in the real image, and rotating or translating one of the grayscale image and the real image so that the depth image and the real image are superimposed.

8. The image processing method of claim 6, wherein the superimposed image contains coordinate information of each point in the real image, the method further comprising: superimposing the virtual object onto the superimposed image according to the coordinate information and the virtual coordinate of the virtual object.

9. The image processing method of claim 6, wherein the three-dimensional information of the high-precision map comprises a three-dimensional composition of the real image or height information of objects in the real image.

10. The image processing method of claim 6, further comprising: mapping each point coordinate in the high-precision map through a coordinate conversion function to a camera coordinate displayable on the display.
TW107120249A, filed 2018-06-12: Image processing system and image processing method, granted as TWI691932B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
TW107120249A | 2018-06-12 | 2018-06-12 | Image processing system and image processing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
TW107120249A | 2018-06-12 | 2018-06-12 | Image processing system and image processing method

Publications (2)

Publication Number | Publication Date
TW202001805A | 2020-01-01
TWI691932B | 2020-04-21

Family

ID=69941576

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
TW107120249A | Image processing system and image processing method | 2018-06-12 | 2018-06-12

Country Status (1)

Country | Link
TW (1) | TWI691932B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11430190B2 (en) | 2020-10-14 | 2022-08-30 | Institute For Information Industry | Virtual and real image fusion method, virtual and real image fusion system and non-transient computer readable medium

Families Citing this family (1)

Publication number | Priority date | Publication date | Assignee | Title
US11832001B2 (en)* | 2021-12-20 | 2023-11-28 | Visera Technologies Company Limited | Image processing method and image processing system

Family Cites Families (4)

Publication number | Priority date | Publication date | Assignee | Title
US9182243B2 (en)* | 2012-06-05 | 2015-11-10 | Apple Inc. | Navigation application
US20140192164A1 (en)* | 2013-01-07 | 2014-07-10 | Industrial Technology Research Institute | System and method for determining depth information in augmented reality scene
US10262462B2 (en)* | 2014-04-18 | 2019-04-16 | Magic Leap, Inc. | Systems and methods for augmented and virtual reality
CN106937531B (en)* | 2014-06-14 | 2020-11-06 | 奇跃公司 | Method and system for generating virtual and augmented reality

Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
US11430190B2 (en) | 2020-10-14 | 2022-08-30 | Institute For Information Industry | Virtual and real image fusion method, virtual and real image fusion system and non-transient computer readable medium
TWI816057B (en)* | 2020-10-14 | 2023-09-21 | 財團法人資訊工業策進會 | Virtual and real image fusion method, virtual and real image fusion system and non-transient computer readable medium

Also Published As

Publication number | Publication date
TWI691932B (en) | 2020-04-21

Similar Documents

Publication | Title
TWI590189B (en) | Augmented reality method, system and computer-readable non-transitory storage medium
CN103337094B (en) | A method of applying a binocular camera to realize motion three-dimensional reconstruction
JP6626223B2 (en) | Indoor ranging method
CN110599432B (en) | Image processing system and image processing method
Hübner et al. | Marker-based localization of the Microsoft HoloLens in building models
CN104657103B (en) | Hand-held CAVE optical projection systems based on depth camera
US20130176337A1 (en) | Device and Method For Information Processing
CN107729707B (en) | Engineering construction lofting method based on mobile augmented reality technology and BIM
US11989827B2 (en) | Method, apparatus and system for generating a three-dimensional model of a scene
JP2013171523A (en) | AR image processing device and method
WO2017156949A1 (en) | Transparent display method and transparent display apparatus
CN108921889A (en) | An indoor 3-D positioning method based on augmented reality application
US9881419B1 (en) | Technique for providing an initial pose for a 3-D model
JP2015114905A (en) | Information processor, information processing method, and program
CN109427094B (en) | Method and system for acquiring mixed reality scene
CN113298928A (en) | House three-dimensional reconstruction method, device, equipment and storage medium
CN113822936A (en) | Data processing method and device, computer equipment and storage medium
TWI691932B (en) | Image processing system and image processing method
Jones et al. | Correction of geometric distortions and the impact of eye position in virtual reality displays
JP7624217B2 (en) | Marker installation method
CN107958491B (en) | Matching method of mobile augmented reality virtual coordinates and construction site coordinates
Miyake et al. | Outdoor markerless augmented reality
TWI564841B (en) | A method, apparatus and computer program product for real-time image synthesizing
JP2008203991 (en) | Image processing device
CN114723800B (en) | Point cloud data correction method and correction device, electronic device and storage medium
