TWI792592B - Computer-assisted needle insertion system and computer-assisted needle insertion method - Google Patents

Computer-assisted needle insertion system and computer-assisted needle insertion method

Info

Publication number
TWI792592B
Authority
TW
Taiwan
Prior art keywords
needle
breathing
processor
needle insertion
learning model
Prior art date
Application number
TW110136416A
Other languages
Chinese (zh)
Other versions
TW202226031A (en)
Inventor
許柏安
張志吉
簡志偉
李嘉濱
吳昆達
呂委整
Original Assignee
財團法人工業技術研究院 (Industrial Technology Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 財團法人工業技術研究院
Priority to US17/534,418 (granted as US12171460B2)
Publication of TW202226031A
Application granted
Publication of TWI792592B
Priority to US18/942,801 (published as US20250064478A1)


Abstract

A computer-assisted needle insertion system and a computer-assisted needle insertion method are provided. The computer-assisted needle insertion method includes: obtaining a first machine learning (ML) model and a second ML model; obtaining a computed tomography (CT) image and a needle insertion path, generating a suggested needle insertion path according to the first ML model, the CT image, and the needle insertion path, and directing the needle toward an insertion point on the skin of a target, wherein the insertion point is located on the suggested needle insertion path; obtaining a breath signal of the target and estimating whether a future breath state of the target is normal according to the second ML model and the breath signal; and outputting a suggested insertion period according to the breath signal in response to determining that the future breath state is normal.

Description

Translated from Chinese
Computer-Assisted Needle Insertion System and Computer-Assisted Needle Insertion Method

The present disclosure relates to a computer-assisted needle insertion system and a computer-assisted needle insertion method.

With advances in technology, computed tomography (CT) images are now widely used in procedures such as precise visceral needle insertion and tumor ablation. However, CT-guided procedures have several drawbacks. For example, during visceral needle insertion, the operator can usually only estimate the position, angle, and depth of the initial insertion from experience, so the insertion path is often imprecise. The subject may therefore need multiple CT scans to serve as references for correcting the insertion path, and the resulting radiation dose may exceed safe limits. In addition, the subject's breathing motion can interfere with the operator's adjustment of the insertion path, and the operator can rely only on unquantified indicators such as experience. These factors all increase the risk of the procedure.

The present disclosure provides a computer-assisted needle insertion system and a computer-assisted needle insertion method that supply the operator with a suggested insertion period and a suggested insertion path.

The disclosed computer-assisted needle insertion system is adapted to control a needle. The system includes a storage medium and a processor. The storage medium stores a first machine learning model and a second machine learning model. The processor is coupled to the storage medium and is configured to: obtain a computed tomography (CT) image and a needle insertion path, generate a suggested insertion path according to the first machine learning model, the CT image, and the insertion path, and direct the needle toward an insertion point on the surface of the target, wherein the insertion point lies on the suggested insertion path; obtain the target's breathing signal and estimate, according to the second machine learning model and the breathing signal, whether the target's future breathing state is normal; and, in response to determining that the future breathing state is normal, output a suggested insertion period according to the breathing signal.

In an embodiment of the present disclosure, the first machine learning model is a deep Q-learning model.

In an embodiment of the present disclosure, the CT image includes the needle, a marker on the surface of the target, and the target object.

In an embodiment of the present disclosure, a state in the state set of the Q-learning model includes: a first coordinate of the needle tip, a second coordinate of the marker, and a third coordinate of the target object.

In an embodiment of the present disclosure, the reward of the Q-learning model is associated with at least one of the following: a first distance between the first coordinate and the third coordinate; the angle between the insertion path and the unit vector formed by the first, second, and third coordinates; a second distance between that unit vector and the insertion path; the time at which the first coordinate stops updating; and a third distance between the first coordinate and the surface.
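The reward terms above can be combined into a single scalar, for example as a weighted sum of a distance term and an alignment term. The sketch below is a minimal illustration under that assumption; the weights, the weighted-sum form, and the use of the tip-to-target direction in place of the full three-coordinate unit vector are not specified by the disclosure.

```python
import numpy as np

def insertion_reward(tip, marker, target, path_dir, w=(1.0, 1.0)):
    """Toy reward combining two of the terms described above:
    - first distance: needle tip (first coordinate) to target object (third coordinate)
    - angle between the needle direction and the planned insertion path
    A shorter distance and a smaller angle yield a higher (less negative) reward.
    The marker argument mirrors the claim's three coordinates but is unused in
    this simplified sketch, which takes the tip-to-target direction."""
    tip, target = np.asarray(tip, float), np.asarray(target, float)
    path_dir = np.asarray(path_dir, float)
    path_dir = path_dir / np.linalg.norm(path_dir)

    first_distance = np.linalg.norm(target - tip)

    v = (target - tip) / np.linalg.norm(target - tip)   # needle direction
    cos_angle = np.clip(np.dot(v, path_dir), -1.0, 1.0)
    angle = np.arccos(cos_angle)                        # 0 when aligned with the path

    return -(w[0] * first_distance + w[1] * angle)
```

A tip that is both closer to the target and better aligned with the planned path scores higher, which is the ordering the expected cumulative reward function needs.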

In an embodiment of the present disclosure, the system further includes a transceiver coupled to the processor. The processor is communicatively connected to a robotic arm through the transceiver and transmits commands through the transceiver to the robotic arm to control the needle. An action in the action set of the Q-learning model includes a joint motion of the robotic arm.

In an embodiment of the present disclosure, the processor selects from the action set the action corresponding to the maximum expected value of the cumulative reward of the Q-learning model, and updates the state of the Q-learning model according to that action to train the model.

In an embodiment of the present disclosure, the processor determines the suggested insertion path according to the updated state.

In an embodiment of the present disclosure, the robotic arm includes a clamp that holds the needle, and the processor instructs the robotic arm through the transceiver to release the clamp during periods outside the suggested insertion period.

In an embodiment of the present disclosure, the processor is further configured to: determine a reference breathing cycle from historical data and a sampling period from the reference breathing cycle; sample an electrocardiogram (ECG) signal over the sampling period; and convert the ECG signal into the breathing signal.
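The disclosure does not state how the ECG signal is converted into a breathing signal. The sketch below uses one common surrogate technique (ECG-derived respiration): breathing modulates the R-peak amplitudes, so detecting the R peaks and interpolating their amplitudes over time yields a respiration-like signal. The peak detector, its threshold, and the refractory gap are all illustrative assumptions, not the patented method.

```python
import numpy as np

def ecg_to_respiration(ecg, fs, r_peak_min_dist=0.4):
    """Toy ECG-derived respiration (EDR): find R peaks above a simple
    amplitude threshold, then linearly interpolate the peak amplitudes
    back onto the full time axis as a surrogate breathing signal."""
    ecg = np.asarray(ecg, dtype=float)
    min_gap = int(r_peak_min_dist * fs)          # refractory gap between R peaks
    thresh = ecg.mean() + 2 * ecg.std()

    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > thresh and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)

    # Interpolate R-peak amplitudes over time to form the breathing signal.
    t = np.arange(len(ecg))
    return np.interp(t, peaks, ecg[peaks]) if len(peaks) >= 2 else np.zeros_like(ecg)
```

On a synthetic ECG whose beat amplitudes are modulated by a slow sinusoid, the recovered signal tracks that modulation.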

In an embodiment of the present disclosure, the storage medium further stores a third machine learning model, and the processor is further configured to: extract breathing parameters from the breathing signal; obtain the displacement of the second coordinate and determine time-domain parameters from the displacement; perform a short-time Fourier transform (STFT) on the breathing signal to produce frequency-domain parameters; judge, based on the third machine learning model and according to the breathing parameters, the time-domain parameters, and the frequency-domain parameters, whether the target's long-term breathing state is normal; and, in response to the long-term breathing state being normal, estimate whether the target's future breathing state is normal.
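The STFT step can be sketched as follows. The disclosure does not enumerate the exact frequency-domain parameters, so taking the dominant frequency per window, and the window and hop lengths, are assumptions for illustration.

```python
import numpy as np

def stft_dominant_freqs(resp, fs, win_s=8.0, hop_s=2.0):
    """Minimal short-time Fourier transform over the breathing signal.
    Returns, per window, the dominant frequency in Hz as one plausible
    frequency-domain parameter."""
    win = int(win_s * fs)
    hop = int(hop_s * fs)
    hann = np.hanning(win)
    freqs = np.fft.rfftfreq(win, d=1 / fs)
    dominant = []
    for start in range(0, len(resp) - win + 1, hop):
        seg = resp[start:start + win] * hann
        mag = np.abs(np.fft.rfft(seg))
        mag[0] = 0.0                      # ignore the DC component
        dominant.append(freqs[np.argmax(mag)])
    return np.array(dominant)
```

For a steady 0.25 Hz breathing rate (15 breaths per minute), every window reports 0.25 Hz; a drifting or collapsing dominant frequency would be the kind of feature the third model could flag as abnormal.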

In an embodiment of the present disclosure, the marker includes a first sub-marker corresponding to a first sub-coordinate and a second sub-marker corresponding to a second sub-coordinate, and the time-domain parameters include at least one of the following: a first displacement of the first sub-coordinate along the X axis; a second displacement of the first sub-coordinate along the Y axis; a third displacement of the first sub-coordinate along the Z axis; a first sum of the first, second, and third displacements; a fourth displacement of the second sub-coordinate along the X axis; a fifth displacement of the second sub-coordinate along the Y axis; a sixth displacement of the second sub-coordinate along the Z axis; a second sum of the fourth, fifth, and sixth displacements; and the sum of the first sum and the second sum.
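The nine time-domain parameters above can be computed from two sub-marker coordinate traces sampled over time. In this sketch, per-axis displacement is taken as the peak-to-peak excursion; the disclosure does not fix the displacement definition, so that choice is an assumption.

```python
import numpy as np

def time_domain_parameters(sub1, sub2):
    """Computes the nine time-domain parameters listed above from two
    sub-marker coordinate traces of shape (T, 3), columns X, Y, Z:
    per-axis displacements of each sub-coordinate (six values), the two
    per-sub-coordinate sums, and the grand total."""
    sub1 = np.asarray(sub1, dtype=float)
    sub2 = np.asarray(sub2, dtype=float)
    d1 = sub1.max(axis=0) - sub1.min(axis=0)   # 1st, 2nd, 3rd displacements
    d2 = sub2.max(axis=0) - sub2.min(axis=0)   # 4th, 5th, 6th displacements
    s1, s2 = d1.sum(), d2.sum()                # first and second sums
    return {"d1": d1, "d2": d2, "sum1": s1, "sum2": s2, "total": s1 + s2}
```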

In an embodiment of the present disclosure, the processor is further configured to estimate, based on the second machine learning model and according to the time-domain parameters and the frequency-domain parameters, whether the target's future breathing state is normal.

In an embodiment of the present disclosure, the processor is further configured to: determine the displacement of the second coordinate from the breathing signal and derive the time-domain parameters from the displacement; generate a feature signal from the time-domain parameters; in response to the feature signal remaining below a threshold during a period, set that period as a breath-hold period; and determine the suggested insertion period according to the breath-hold period.
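The breath-hold detection described above (a period during which the feature signal stays below a threshold) can be sketched directly. The minimum-duration guard is an added assumption to filter momentary dips; the disclosure specifies only the threshold comparison.

```python
import numpy as np

def breath_hold_periods(feature, fs, threshold, min_hold_s=1.0):
    """Finds periods during which the feature signal stays below the
    threshold and returns them as (start_s, end_s) pairs in seconds.
    Runs shorter than min_hold_s are discarded."""
    below = np.asarray(feature) < threshold
    periods, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i                                  # low-motion run begins
        elif not b and start is not None:
            if (i - start) / fs >= min_hold_s:
                periods.append((start / fs, i / fs))   # run long enough to keep
            start = None
    if start is not None and (len(below) - start) / fs >= min_hold_s:
        periods.append((start / fs, len(below) / fs))
    return periods
```

Each returned interval is a candidate suggested insertion period; per the embodiment above, the third model would then confirm the breathing state within the interval before recommending it.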

In an embodiment of the present disclosure, the storage medium further stores a third machine learning model, and the processor is further configured to: obtain a second breathing signal corresponding to the breath-hold period; extract second breathing parameters from the second breathing signal; determine a second displacement of the second coordinate from the second breathing signal and derive second time-domain parameters from the second displacement; perform an STFT on the second breathing signal to produce frequency-domain parameters; judge, based on the third machine learning model and according to the second breathing parameters, the second time-domain parameters, and the frequency-domain parameters, whether the target's current breathing state is normal; and, in response to the current breathing state being normal, set the breath-hold period as the suggested insertion period.

In an embodiment of the present disclosure, the processor determines a needle movement time and a needle movement distance according to the suggested insertion period, and outputs the needle movement time and the needle movement distance.

The disclosed computer-assisted needle insertion method is adapted to control a needle and includes: obtaining a first machine learning model and a second machine learning model; obtaining a CT image and a needle insertion path, generating a suggested insertion path according to the first machine learning model, the CT image, and the insertion path, and directing the needle toward an insertion point on the surface of the target, wherein the insertion point lies on the suggested insertion path; obtaining the target's breathing signal and estimating, according to the second machine learning model and the breathing signal, whether the target's future breathing state is normal; and, in response to determining that the future breathing state is normal, outputting a suggested insertion period according to the breathing signal.

Based on the above, the present disclosure can bring the needle close to an ideal insertion point on the target's surface, can judge with machine learning whether the target's breathing state is normal, can determine the target's breath-hold periods from the breathing signal, and can advise the operator to perform the insertion during a breath-hold period.

To make the content of the present invention easier to understand, the following embodiments are given as examples by which the invention can actually be practiced. Wherever possible, the same reference numerals are used for elements/components/steps in the drawings and embodiments to denote the same or similar parts.

FIG. 1 is a schematic diagram of a computer-assisted needle insertion system 100 according to an embodiment of the present disclosure. The system 100 can be used to control a needle, which may be fixed to a robotic arm; the system 100 controls the needle by commanding the robotic arm. The system 100 may include a processor 110, a storage medium 120, and a transceiver 130.

The processor 110 is, for example, a central processing unit (CPU) or another programmable general-purpose or special-purpose micro control unit (MCU), microprocessor, digital signal processor (DSP), programmable controller, application-specific integrated circuit (ASIC), graphics processing unit (GPU), image signal processor (ISP), image processing unit (IPU), arithmetic logic unit (ALU), complex programmable logic device (CPLD), field-programmable gate array (FPGA), a similar element, or a combination of such elements. The processor 110 is coupled to the storage medium 120 and the transceiver 130, and accesses and executes the modules and applications stored in the storage medium 120.

The storage medium 120 is, for example, any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, hard disk drive (HDD), solid state drive (SSD), a similar element, or a combination of such elements, and stores the modules and applications executable by the processor 110. In this embodiment, the storage medium 120 stores modules including a machine learning model 121 (the "first machine learning model", used to suggest the insertion path), a machine learning model 122 (the "second machine learning model", used to estimate whether the target's future breathing state is normal), and a machine learning model 123 (the "third machine learning model", used to judge whether the target's current/long-term breathing state is normal), whose functions are described below.

The transceiver 130 transmits and receives signals wirelessly or over a wire, and may also perform operations such as low-noise amplification, impedance matching, mixing, up- or down-conversion, filtering, and amplification. The processor 110 can be communicatively connected through the transceiver 130 to the robotic arm that controls the needle.

FIG. 2 is a schematic diagram of needle insertion into a target (i.e., a subject) 20 according to an embodiment of the present disclosure. The robotic arm 30 may include a clamp 31 for holding the needle 40. The system 100 can assist the operator in controlling the robotic arm 30 to insert the needle tip 41 of the needle 40 into the target object 60, which is, for example, a target organ inside the target 20.

FIG. 3 is a flowchart of a computer-assisted needle insertion method according to an embodiment of the present disclosure; the method can be implemented by the system 100 shown in FIG. 1. In step S310, the processor 110 executes a preoperative procedure. In step S320, the processor 110 executes an intraoperative procedure. In step S330, the processor 110 executes a needle control procedure. In step S340, the processor 110 executes a needle control correction procedure.

The processor 110 executes the preoperative procedure (step S310) to produce a CT image. Specifically, the processor 110 is communicatively connected through the transceiver 130 to a CT scanner and controls the scanner to scan the target 20 to produce a CT image containing the target 20, the marker 50, and the target object 60. The processor 110 receives the CT image from the CT scanner.

The processor 110 executes the intraoperative procedure (step S320) to direct the needle 40 toward the insertion point on the surface 21 of the target 20. The ideal insertion point lies at the intersection of the suggested insertion path 15 and the surface 21. The processor 110 can output the suggested insertion path 15 through the transceiver 130 for the operator's reference. FIG. 4 is a flowchart of step S320, which includes steps S321 to S324.

In step S321, the processor 110 receives the CT image and the insertion path 10 through the transceiver 130. For example, the processor 110 receives the CT image from the CT scanner through the transceiver 130. The operator may operate a terminal device to select the insertion path 10, and the processor 110 receives the insertion path 10 from the terminal device through the transceiver 130.

In step S322, the processor 110 outputs the CT image and the insertion path 10 through the transceiver 130. Specifically, the processor 110 is communicatively connected through the transceiver 130 to a display that shows the CT image for the operator's reference. The operator can confirm from the CT image whether to adopt the insertion path 10 and, according to the insertion path 10, attach a marker 50 to the surface 21 of the target 20, as shown in FIG. 2, for example at the intersection of the insertion path 10 and the surface 21.

In step S323, after the marker 50 has been attached to the target 20, the processor 110 controls the CT scanner to scan the target 20 to obtain a CT image containing the target 20, the marker 50, and the target object 60, and receives that CT image from the scanner through the transceiver 130. In one embodiment, the processor 110 outputs the CT image and the insertion path 10 through the transceiver 130 to the display, which shows them for the operator's reference.

In step S324, the processor 110 directs, according to the CT image and the insertion path 10, the needle tip 41 of the needle 40 toward the insertion point on the surface 21 of the target 20, where the insertion point may lie within the marker 50. In one embodiment, the processor 110 outputs prompt information (for example, a prompt image or prompt audio) through the transceiver 130 that instructs the operator to move the robotic arm 30 so that the needle tip 41 approaches the insertion point. In another embodiment, the processor 110 transmits commands through the transceiver 130 to the robotic arm 30 to control the needle 40 so that the needle tip 41 approaches the insertion point.

Specifically, the processor 110 generates the suggested insertion path 15 according to the machine learning model 121, the CT image (containing the target 20, the marker 50, and the target object 60), and the insertion path 10, and directs the needle tip 41 toward the insertion point, which lies on the suggested insertion path 15. The machine learning model 121 may be a reinforcement learning model; in this embodiment it is a deep Q-learning (DQN) model.

Under the Q-learning algorithm, the intelligent agent of the machine learning model 121 moves from one state in the set of states S to another by executing actions from the set of actions A. Executing an action in a particular state yields a reward. The agent of the machine learning model 121 may be implemented by the processor 110.

A state in the state set S of the machine learning model 121 may include the coordinates of the needle tip 41, the coordinates of the marker 50, the coordinates of the target object 60, or a timestamp. States in S may be expressed in a Cartesian coordinate system or as quaternions. An action in the action set A of the machine learning model 121 may include a joint motion of the robotic arm 30: if the arm has one or more joints, the joint motion drives those joints and thereby changes the coordinates of the needle tip 41. Notably, to prevent particular actions from damaging the robotic arm 30, the action set A may exclude joint motions corresponding to singularities of the arm.

The algorithm used by the machine learning model 121 is given by equations (1) and (2), where $s_t$ is the state at time point $t$, $a_t$ is the action at time point $t$, $r_t$ is the reward at time point $t$, and $Q$ is the expected cumulated reward function. Equation (1) is used during the training phase and equation (2) during the testing phase. The reward $r_t$ is used only during training; the testing phase uses the trained expected cumulative reward function $Q$, which in one embodiment may be a deep neural network. After collecting enough samples of the state $s_t$, the action $a_t$, the reward $r_t$, and the next state $s_{t+1}$, the processor 110 forms a training set $TD = \{(s_t, a_t, r_t, s_{t+1}),\ t = 1, \ldots, N\}$ and trains the expected cumulative reward function $Q$ of the machine learning model 121 on the $N$ samples in $TD$ ($N$ being a positive integer) according to equation (1):

$$\min_{Q} \frac{1}{N} \sum_{t=1}^{N} \Big[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big]^2 \tag{1}$$

$$a_t = \arg\max_{a \in A} Q(s_t, a) \tag{2}$$

where $\gamma$ is a discount factor and $A$ is the action set.
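The training rule of equation (1) and the greedy action selection of equation (2) can be illustrated with a small tabular stand-in for the DQN. The actual model acts on needle-tip, marker, and target coordinates with robot-joint actions; here a 1-D toy moves a "tip" toward a target position, and the environment, reward, and hyperparameters are all illustrative assumptions.

```python
import random

def train_toy_q(target=5, n_states=11, episodes=800, alpha=0.5,
                gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning stand-in for the DQN of model 121.
    States are tip positions 0..n_states-1; actions move the tip
    left (a=0) or right (a=1); the reward is the negative distance
    to the target, echoing the 'first distance' reward above."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]       # Q(s, a)

    for _ in range(episodes):
        s = rng.randrange(n_states)
        for _ in range(30):
            # epsilon-greedy exploration during training
            a = rng.randrange(2) if rng.random() < epsilon else q[s].index(max(q[s]))
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = -abs(s2 - target)                    # reward r_t
            # one-sample version of the squared TD error in eq. (1)
            td_target = r + gamma * max(q[s2])
            q[s][a] += alpha * (td_target - q[s][a])
            s = s2
            if s == target:
                break
    return q

def greedy_path(q, start, target, max_steps=20):
    """Test phase: follow eq. (2), always taking the argmax action."""
    s, path = start, [start]
    for _ in range(max_steps):
        if s == target:
            break
        a = q[s].index(max(q[s]))
        s = max(0, min(len(q) - 1, s + (1 if a == 1 else -1)))
        path.append(s)
    return path
```

After training, the greedy policy walks the tip to the target from either side, which is the tabular analogue of the coordinate optimization described below.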

The processor 110 can determine the suggested insertion path 15 according to the states, where the suggested insertion path 15 approximates the insertion path 10 (the operator selects the insertion path 10, and the suggested insertion path 15 approaches it through the expected cumulative reward function $Q$), as shown in FIG. 2. Specifically, the processor 110 can optimize the coordinates of the needle tip 41, the marker 50, or the target object 60 according to equation (2), so that the needle tip 41 approaches the insertion point on the surface 21 of the target 20. For example, the state $s_t$ may be a function of the coordinates of the needle tip 41, the marker 50, or the target object 60 at time point $t$; conversely, those coordinates may be functions of the state $s_t$. The processor 110 selects, according to the state $s_t$, an action $a_t$ that maximizes the trained expected cumulative reward $Q$, and then updates the state $s_t$ according to the action $a_t$ to produce the state $s_{t+1}$. After these steps are repeated, the final state is optimized; that is, the coordinates of the needle tip 41, the marker 50, and the target object 60 are optimized. Once the optimization is complete, the processor 110 determines the suggested insertion path 15 from the optimized coordinates; for example, the optimized coordinates of the needle tip 41, the marker 50, and the target object 60 together may constitute the suggested insertion path 15.

In one embodiment, the processor 110 outputs, through the transceiver 130, reference information related to the states or actions of the machine learning model 121 to the operator, who can operate the robotic arm 30 according to that information to bring the needle tip 41 close to the insertion point. In another embodiment, the processor 110 transmits commands corresponding to the states or actions of the machine learning model 121 through the transceiver 130 to the robotic arm 30, thereby controlling the arm to move the needle 40 so that the needle tip 41 approaches the insertion point.

在一實施例中，機器學習模型121的獎勵r可關聯於針尖41的座標與目標物件60的座標之間的距離。若所述距離越短，則獎勵r的值越高。換句話說，機器學習模型121的獎勵r可用於訓練期望累積獎勵函數Q，其中所述期望累積獎勵函數Q可最小化針尖41與目標物件60之間的距離（或稱為「第一距離」）。In one embodiment, the reward r of the machine learning model 121 may be related to the distance between the coordinates of the needle tip 41 and the coordinates of the target object 60. The shorter the distance, the higher the value of the reward r. In other words, the reward r of the machine learning model 121 can be used to train the expected cumulative reward function Q, wherein the expected cumulative reward function Q can minimize the distance between the needle tip 41 and the target object 60 (also referred to as the "first distance").

在一實施例中，機器學習模型121的獎勵可關聯於特定單位向量與標記50的入針路徑10之間的夾角，其中所述特定單位向量可為針尖41的座標、標記50的座標與目標物件60的座標所形成的單位向量。若所述夾角越小，則獎勵的值越高。換句話說，機器學習模型121的獎勵可用於訓練期望累積獎勵函數Q，其中所述期望累積獎勵函數Q可使針尖41、標記50以及目標物件60所組成的線段與入針路徑10平行。在一實施例中，機器學習模型121的獎勵可關聯於所述特定單位向量與入針路徑10之間的距離（或稱為「第二距離」）。若所述距離越短，則獎勵的值越高。換句話說，機器學習模型121的獎勵可使針尖41、標記50以及目標物件60所組成的線段逼近入針路徑10。In one embodiment, the reward of the machine learning model 121 may be related to the angle between a specific unit vector and the needle insertion path 10 through the marker 50, wherein the specific unit vector may be the unit vector formed by the coordinates of the needle tip 41, the coordinates of the marker 50, and the coordinates of the target object 60. The smaller the angle, the higher the value of the reward. In other words, the reward of the machine learning model 121 can be used to train the expected cumulative reward function Q, wherein the expected cumulative reward function Q can make the line segment formed by the needle tip 41, the marker 50, and the target object 60 parallel to the needle insertion path 10. In one embodiment, the reward of the machine learning model 121 may be related to the distance between the specific unit vector and the needle insertion path 10 (also referred to as the "second distance"). The shorter the distance, the higher the value of the reward. In other words, the reward of the machine learning model 121 can make the line segment formed by the needle tip 41, the marker 50, and the target object 60 approach the needle insertion path 10.

在一實施例中，機器學習模型121的獎勵可關聯於針尖41的座標停止更新的時間。若停止更新的時間越長（例如：針尖41靜止不動），則獎勵的值越低。換句話說，機器學習模型121的獎勵可用於訓練期望累積獎勵函數Q，其中所述期望累積獎勵函數Q可促使機械手臂30積極地移動針尖41。In one embodiment, the reward of the machine learning model 121 may be related to how long the coordinates of the needle tip 41 have stopped updating. The longer the updates have stopped (for example, when the needle tip 41 remains still), the lower the value of the reward. In other words, the reward of the machine learning model 121 can be used to train the expected cumulative reward function Q, wherein the expected cumulative reward function Q can prompt the robotic arm 30 to actively move the needle tip 41.

在一實施例中，機器學習模型121的獎勵可關聯於針尖41的座標與表面21之間的距離（或稱為「第三距離」）。若所述距離小於閾值，則獎勵的值會大幅地降低。換句話說，機器學習模型121的獎勵可用於訓練期望累積獎勵函數Q，其中所述期望累積獎勵函數Q可維持針尖41與表面21之間的距離，以避免針尖41與表面21接觸。因此，在施術者操作機械手臂30將針具40刺入表面21之前，機械手臂30都不會讓針具40與表面21接觸。In one embodiment, the reward of the machine learning model 121 may be related to the distance between the coordinates of the needle tip 41 and the surface 21 (also referred to as the "third distance"). If the distance is smaller than a threshold, the value of the reward is greatly reduced. In other words, the reward of the machine learning model 121 can be used to train the expected cumulative reward function Q, wherein the expected cumulative reward function Q can maintain the distance between the needle tip 41 and the surface 21 to avoid contact between them. Therefore, before the operator operates the robotic arm 30 to pierce the needle 40 into the surface 21, the robotic arm 30 will not allow the needle 40 to contact the surface 21.
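A minimal sketch combining the four reward terms described above: the tip-to-target distance, the alignment with the planned entry direction, a penalty for a tip that has stopped moving, and a sharp penalty for approaching the surface. The weights, the default threshold, and the assumption that the surface is the plane z = surface_z are all illustrative choices, not values from the disclosure.

```python
import math

def reward(tip, target, entry_dir, surface_z, stalled_steps, surface_threshold=1.0):
    """Illustrative reward: tip/target are (x, y, z) coordinates and
    entry_dir is a unit vector along the planned insertion path."""
    # (1) a shorter tip-to-target distance gives a higher reward
    d1 = math.dist(tip, target)
    # (2) better alignment of the tip->target direction with the planned
    #     entry direction gives a higher reward
    v = [t - p for p, t in zip(tip, target)]
    cos_angle = sum(a * b for a, b in zip(v, entry_dir)) / (math.hypot(*v) or 1.0)
    # (3) penalize a tip whose coordinates have stopped updating
    stall_penalty = 0.1 * stalled_steps
    # (4) sharply penalize closing on the skin surface before insertion
    surface_penalty = 100.0 if tip[2] - surface_z < surface_threshold else 0.0
    return -d1 + cos_angle - stall_penalty - surface_penalty
```

A tip that drifts toward the surface or stalls in place is scored much lower than one that advances along the planned direction.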

處理器110可執行針具控制程序(即:步驟S330)以根據目標20的呼吸訊號輸出建議入針時段供施術者參考。圖5根據本揭露的一實施例繪示步驟S330的流程圖，步驟S330可包含步驟S331~S336。The processor 110 can execute the needle control program (i.e., step S330) to output a suggested needle insertion period according to the breathing signal of the target 20 for the operator's reference. FIG. 5 shows a flowchart of step S330 according to an embodiment of the present disclosure, and step S330 may include steps S331 to S336.

在步驟S331中，處理器110可通過收發器130取得目標20的呼吸訊號。具體來說，處理器110可根據歷史資料決定參考呼吸週期，其中所述歷史資料例如是分別對應於多位臨床受測者的多個呼吸訊號。處理器110可計算多個呼吸訊號的平均呼吸週期以產生參考呼吸週期，並可根據參考呼吸週期決定取樣時段。In step S331, the processor 110 can obtain the breathing signal of the target 20 through the transceiver 130. Specifically, the processor 110 can determine a reference breathing cycle according to historical data, wherein the historical data are, for example, a plurality of breathing signals respectively corresponding to a plurality of clinical subjects. The processor 110 can calculate the average breathing cycle of the plurality of breathing signals to generate the reference breathing cycle, and can determine a sampling period according to the reference breathing cycle.

在決定取樣時段後，處理器110可通過收發器130接收目標20的心電圖(electrocardiography,ECG)的測量結果，並且根據取樣時段自測量結果取樣心電圖訊號。處理器110可將心電圖訊號轉換為目標20的呼吸訊號。舉例來說，處理器110可根據R尺寸(R size)方法或RR方法以將心電圖訊號轉換為呼吸訊號。After the sampling period is determined, the processor 110 may receive an electrocardiography (ECG) measurement result of the target 20 through the transceiver 130, and sample an ECG signal from the measurement result according to the sampling period. The processor 110 can convert the ECG signal into the breathing signal of the target 20. For example, the processor 110 can convert the ECG signal into the breathing signal according to the R-size method or the RR method.
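The R-size and RR conversions named above can be sketched as follows, assuming the R peaks have already been detected in the ECG (peak detection itself is omitted, and the input values are synthetic): the R-size method uses beat-to-beat variation in R-wave amplitude as the respiratory surrogate, while the RR method uses the variation in successive R-R intervals.

```python
def r_size_respiration(r_amplitudes):
    """R-size method: breathing modulates R-wave amplitude, so the
    mean-removed amplitude series serves as the respiratory surrogate."""
    mean = sum(r_amplitudes) / len(r_amplitudes)
    return [a - mean for a in r_amplitudes]

def rr_respiration(r_times_ms):
    """RR method: breathing modulates heart rate, so the series of
    successive R-R intervals serves as the respiratory surrogate."""
    return [t1 - t0 for t0, t1 in zip(r_times_ms, r_times_ms[1:])]

print(rr_respiration([0, 800, 1700, 2500]))  # prints [800, 900, 800]
```

Either surrogate series would then be resampled and filtered before use as the breathing signal 600.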

在步驟S332中，處理器110可根據呼吸訊號判斷目標20的長期呼吸狀態是否正常。若目標20的長期呼吸狀態正常，則進入步驟S333。若目標20的呼吸震幅或週期在所屬群體(例如:相同年齡或相同性別)的正常範圍內，則處理器110可將目標20的長期呼吸狀態判斷為正常。In step S332, the processor 110 may determine whether the long-term breathing state of the target 20 is normal according to the breathing signal. If the long-term breathing state of the target 20 is normal, the method proceeds to step S333. If the breathing amplitude or period of the target 20 is within the normal range of the group it belongs to (for example, the same age or the same gender), the processor 110 may determine that the long-term breathing state of the target 20 is normal.

具體來說，處理器110可從呼吸訊號中擷取出呼吸參數，其中呼吸參數可包含呼吸起始時間點、呼吸結束時間點或平均呼吸週期等。圖6根據本揭露的一實施例繪示呼吸訊號600的示意圖。處理器110可從呼吸訊號600取得N個呼吸週期，並且根據N個呼吸週期計算平均呼吸週期，其中N可為任意的正整數。在本實施例中，假設N為2。處理器110可自呼吸訊號600取得呼吸起始時間點61以及呼吸停止時間點62。處理器110可根據呼吸停止時間點62與呼吸起始時間點61的差值計算呼吸週期610。此外，處理器110可自呼吸訊號600取得呼吸起始時間點63以及呼吸停止時間點64。處理器110可根據呼吸停止時間點64與呼吸起始時間點63的差值計算呼吸週期620。處理器110可計算呼吸週期610與呼吸週期620的平均值以產生平均呼吸週期(單位:秒)。Specifically, the processor 110 can extract breathing parameters from the breathing signal, where the breathing parameters can include a breathing start time point, a breathing end time point, an average breathing cycle, and the like. FIG. 6 shows a schematic diagram of a breathing signal 600 according to an embodiment of the present disclosure. The processor 110 can obtain N breathing cycles from the breathing signal 600 and calculate the average breathing cycle according to the N breathing cycles, where N can be any positive integer. In this embodiment, it is assumed that N is 2. The processor 110 can obtain the breathing start time point 61 and the breathing stop time point 62 from the breathing signal 600, and can calculate the breathing cycle 610 according to the difference between the breathing stop time point 62 and the breathing start time point 61. In addition, the processor 110 can obtain the breathing start time point 63 and the breathing stop time point 64 from the breathing signal 600, and can calculate the breathing cycle 620 according to the difference between the breathing stop time point 64 and the breathing start time point 63. The processor 110 can calculate the average of the breathing cycle 610 and the breathing cycle 620 to generate the average breathing cycle (unit: seconds).
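The cycle computation above reduces to subtracting each breath-start time point from the corresponding breath-stop time point and averaging over the N cycles; the two cycles below, with time points in seconds, are illustrative values.

```python
def average_breathing_cycle(start_times, stop_times):
    """Average of the N breathing cycles, where each cycle is the
    difference between a breath-stop and a breath-start time point."""
    cycles = [stop - start for start, stop in zip(start_times, stop_times)]
    return sum(cycles) / len(cycles)

# Two cycles of 3.8 s and 4.0 s average to roughly 3.9 s.
print(round(average_breathing_cycle([0.0, 4.2], [3.8, 8.2]), 3))  # prints 3.9
```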

另一方面，處理器110通過收發器130取得標記50的座標的位移量，並且根據位移量決定時域參數。具體來說，處理器110可根據對應於呼吸訊號600的心電圖訊號判斷標記50的座標的位移量。處理器110可根據標記50的座標的位移量決定時域參數。在一實施例中，處理器110可先對呼吸訊號執行低通濾波程序以產生經濾波的呼吸訊號。處理器110可根據經濾波的呼吸訊號判斷標記50的座標的位移量。On the other hand, the processor 110 obtains the displacement of the coordinates of the marker 50 through the transceiver 130, and determines the time-domain parameters according to the displacement. Specifically, the processor 110 can determine the displacement of the coordinates of the marker 50 according to the ECG signal corresponding to the breathing signal 600, and can then determine the time-domain parameters according to that displacement. In one embodiment, the processor 110 may first perform a low-pass filtering process on the breathing signal to generate a filtered breathing signal, and may determine the displacement of the coordinates of the marker 50 according to the filtered breathing signal.

時域參數可關聯於標記50的子標記。圖7根據本揭露的一實施例繪示標記50的示意圖。標記50可包含一或多個子標記。在本實施例中，標記50可包含子標記51、子標記52、子標記53以及子標記54。The time-domain parameters may be associated with sub-markers of the marker 50. FIG. 7 shows a schematic diagram of the marker 50 according to an embodiment of the present disclosure. The marker 50 may contain one or more sub-markers. In this embodiment, the marker 50 may include a sub-marker 51, a sub-marker 52, a sub-marker 53, and a sub-marker 54.

在一實施例中，時域參數可為時間的函數。時域參數可包含子標記在各個座標軸上的位移量。舉例來說，時域參數可包含子標記51的座標(子標記的座標又稱為「子座標」)在X軸的位移量Δx1、子標記51的座標在Y軸的位移量Δy1、子標記51的座標在Z軸的位移量Δz1、子標記52的座標在X軸的位移量Δx2、子標記52的座標在Y軸的位移量Δy2、子標記52的座標在Z軸的位移量Δz2、子標記53的座標在X軸的位移量Δx3、子標記53的座標在Y軸的位移量Δy3、子標記53的座標在Z軸的位移量Δz3、子標記54的座標在X軸的位移量Δx4、子標記54的座標在Y軸的位移量Δy4或子標記54的座標在Z軸的位移量Δz4等多個時域特徵。In one embodiment, the time-domain parameters may be functions of time. The time-domain parameters may include the displacement of each sub-marker along each coordinate axis. For example, the time-domain parameters may include time-domain features such as the displacement Δx1 of the coordinates of the sub-marker 51 (the coordinates of a sub-marker are also called "sub-coordinates") along the X-axis, the displacements Δy1 and Δz1 of the coordinates of the sub-marker 51 along the Y- and Z-axes, the displacements Δx2, Δy2, and Δz2 of the coordinates of the sub-marker 52 along the X-, Y-, and Z-axes, the displacements Δx3, Δy3, and Δz3 of the coordinates of the sub-marker 53 along the X-, Y-, and Z-axes, and the displacements Δx4, Δy4, and Δz4 of the coordinates of the sub-marker 54 along the X-, Y-, and Z-axes.

在一實施例中，時域參數可包含子標記在各個座標軸上的位移量的總和。舉例來說，時域參數可包含子標記51的座標在X軸、Y軸和Z軸上的位移量的總和S1（S1 = Δx1 + Δy1 + Δz1）。時域參數可包含子標記52的座標在X軸、Y軸和Z軸上的位移量的總和S2（S2 = Δx2 + Δy2 + Δz2）。時域參數可包含子標記53的座標在X軸、Y軸和Z軸上的位移量的總和S3（S3 = Δx3 + Δy3 + Δz3）。時域參數可包含子標記54的座標在X軸、Y軸和Z軸上的位移量的總和S4（S4 = Δx4 + Δy4 + Δz4）。In one embodiment, the time-domain parameters may include the sum of the displacements of a sub-marker along the coordinate axes. For example, the time-domain parameters may include the sum S1 of the displacements of the coordinates of the sub-marker 51 along the X-, Y-, and Z-axes (S1 = Δx1 + Δy1 + Δz1), the sum S2 of the displacements of the coordinates of the sub-marker 52 along the X-, Y-, and Z-axes (S2 = Δx2 + Δy2 + Δz2), the sum S3 of the displacements of the coordinates of the sub-marker 53 along the X-, Y-, and Z-axes (S3 = Δx3 + Δy3 + Δz3), and the sum S4 of the displacements of the coordinates of the sub-marker 54 along the X-, Y-, and Z-axes (S4 = Δx4 + Δy4 + Δz4).

在一實施例中，時域參數可包含多個子標記在不同座標軸上的位移量的總和。舉例來說，時域參數可包含總和S1、總和S2、總和S3以及總和S4等四個參數的總和S（S = S1 + S2 + S3 + S4）。In one embodiment, the time-domain parameters may include the sum of the displacements of multiple sub-markers along different coordinate axes. For example, the time-domain parameters may include the sum S of the four parameters S1, S2, S3, and S4 (S = S1 + S2 + S3 + S4).

另一方面，處理器110可對呼吸訊號執行短時傅立葉轉換(short-time Fourier transform,STFT)以產生頻域參數。舉例來說，處理器110可對呼吸訊號600執行解析度為128的短時傅立葉轉換，藉以產生包含128個頻率特徵的頻域參數。On the other hand, the processor 110 may perform a short-time Fourier transform (STFT) on the breathing signal to generate frequency-domain parameters. For example, the processor 110 may perform a short-time Fourier transform with a resolution of 128 on the breathing signal 600 to generate frequency-domain parameters including 128 frequency features.
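That step can be sketched with a hand-rolled short-time Fourier transform over non-overlapping 128-sample windows; the 32 Hz sampling rate and the 0.25 Hz synthetic sinusoid standing in for a real breathing trace are illustrative assumptions.

```python
import numpy as np

fs = 32                                    # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)               # 60 s of signal
breath = np.sin(2 * np.pi * 0.25 * t)      # ~4 s breathing cycle

n_fft = 128                                # STFT resolution of 128
frames = [breath[i:i + n_fft] for i in range(0, len(breath) - n_fft + 1, n_fft)]
stft = np.stack([np.abs(np.fft.fft(frame)) for frame in frames])

print(stft.shape)  # each frame yields 128 frequency features; prints (15, 128)
```

With fs = 32 Hz and 128 bins, the bin spacing is 0.25 Hz, so the synthetic breathing rate lands exactly in bin 1.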

在取得呼吸參數、時域參數以及頻域參數後，處理器110可基於機器學習模型123而根據呼吸參數、時域參數以及頻域參數判斷目標20的長期呼吸狀態是否正常。具體來說，處理器110可將分別對應於多個時間點的多個呼吸參數、多個時域參數以及多個頻域參數輸入至機器學習模型123，以產生分別對應於多個時間點的多筆輸出資料，其中所述多筆輸出資料的每一者可代表當前呼吸狀態正常或當前呼吸狀態異常，多個當前呼吸狀態資料則累積為長期呼吸狀態。處理器110可根據所述多筆輸出資料判斷目標20的長期呼吸狀態是否正常。機器學習模型123例如是遞歸類神經網路(recurrent neural network,RNN)模型。在一實施例中，機器學習模型123可為監督式機器學習模型。處理器110可收集多個歷史呼吸參數、多個歷史時域參數以及多個歷史頻域參數來訓練機器學習模型123。訓練好的機器學習模型123可根據呼吸參數、時域參數以及頻域參數判斷目標20的呼吸狀態是否正常。After obtaining the breathing parameters, the time-domain parameters, and the frequency-domain parameters, the processor 110 can determine, based on the machine learning model 123, whether the long-term breathing state of the target 20 is normal according to these parameters. Specifically, the processor 110 may input a plurality of breathing parameters, time-domain parameters, and frequency-domain parameters respectively corresponding to a plurality of time points into the machine learning model 123 to generate a plurality of pieces of output data respectively corresponding to the time points, wherein each piece of output data can indicate that the current breathing state is normal or abnormal, and the plurality of current-breathing-state results are accumulated into the long-term breathing state. The processor 110 can determine whether the long-term breathing state of the target 20 is normal according to the pieces of output data. The machine learning model 123 is, for example, a recurrent neural network (RNN) model. In one embodiment, the machine learning model 123 may be a supervised machine learning model. The processor 110 may collect a plurality of historical breathing parameters, historical time-domain parameters, and historical frequency-domain parameters to train the machine learning model 123. The trained machine learning model 123 can determine whether the breathing state of the target 20 is normal according to the breathing parameters, the time-domain parameters, and the frequency-domain parameters.

在步驟S333中，處理器110根據機器學習模型122以及呼吸訊號估計目標20的未來呼吸狀態是否正常。若目標20的未來呼吸狀態正常，則進入步驟S334。In step S333, the processor 110 estimates whether the future breathing state of the target 20 is normal according to the machine learning model 122 and the breathing signal. If the future breathing state of the target 20 is normal, the method proceeds to step S334.

具體來說，處理器110可基於機器學習模型122而根據與呼吸訊號600相關的時域參數或頻率參數(例如:步驟S332所述的時域參數或頻率參數)估計目標20的未來呼吸狀態(例如:估計時間t+1、t+2或t+3的呼吸狀態)。處理器110可將時域參數或頻率參數輸入至機器學習模型122。機器學習模型122可根據輸入資料輸出代表未來呼吸狀態正常或未來呼吸狀態異常的輸出資料。機器學習模型122例如是遞歸類神經網路模型。在一實施例中，機器學習模型122可為監督式機器學習模型。處理器110可收集多個歷史時域參數以及多個歷史頻域參數來訓練機器學習模型122。訓練好的機器學習模型122可根據時域參數以及頻域參數估計目標20的未來呼吸狀態是否正常。Specifically, the processor 110 may estimate the future breathing state of the target 20 (for example, the breathing state at time t+1, t+2, or t+3) based on the machine learning model 122 according to the time-domain parameters or frequency parameters related to the breathing signal 600 (for example, those described in step S332). The processor 110 may input the time-domain parameters or frequency parameters into the machine learning model 122, and the machine learning model 122 may output data representing a normal or abnormal future breathing state according to the input data. The machine learning model 122 is, for example, a recurrent neural network model. In one embodiment, the machine learning model 122 may be a supervised machine learning model. The processor 110 may collect a plurality of historical time-domain parameters and a plurality of historical frequency-domain parameters to train the machine learning model 122. The trained machine learning model 122 can estimate whether the future breathing state of the target 20 is normal according to the time-domain parameters and the frequency-domain parameters.

在步驟S334中，處理器110可根據呼吸訊號判斷目標20的呼吸停滯時段。具體來說，處理器110可根據對應於呼吸訊號600的時域參數(例如:步驟S332所述的時域參數)產生特徵訊號。處理器110可響應於特徵訊號在特定時段期間小於閾值而將所述特定時段設為呼吸停滯時段。在一實施例中，特徵訊號可為標記50的子標記在各個座標軸上的位移量的特徵。In step S334, the processor 110 can determine an apnea period of the target 20 according to the breathing signal. Specifically, the processor 110 can generate a characteristic signal according to the time-domain parameters corresponding to the breathing signal 600 (for example, those described in step S332). The processor 110 may set a specific period as the apnea period in response to the characteristic signal being less than a threshold during that period. In one embodiment, the characteristic signal can be a feature of the displacements of the sub-markers of the marker 50 along the coordinate axes.

圖8根據本揭露的一實施例繪示特徵訊號700的示意圖。假設時段710對應於呼吸週期610，並且時段720對應於呼吸週期620。處理器110可響應於特徵訊號700在時段810或時段820期間小於閾值800而將時段810或時段820設為目標20的呼吸停滯時段。在呼吸停滯時段期間為目標20執行入針可大幅地降低手術風險。FIG. 8 shows a schematic diagram of the characteristic signal 700 according to an embodiment of the present disclosure. Assume that the period 710 corresponds to the breathing cycle 610 and the period 720 corresponds to the breathing cycle 620. The processor 110 may set the period 810 or the period 820 as an apnea period of the target 20 in response to the characteristic signal 700 being less than the threshold 800 during the period 810 or the period 820. Performing needle insertion on the target 20 during an apnea period can greatly reduce the surgical risk.
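The thresholding described above can be sketched as a scan for contiguous runs where the characteristic signal stays below the threshold; the sample values and threshold below are illustrative, with each returned (start, end) index pair standing in for a period such as 810 or 820.

```python
def apnea_periods(signal, threshold):
    """Return (start, end) index pairs of contiguous runs where the
    characteristic signal is below the threshold."""
    periods, start = [], None
    for i, v in enumerate(signal):
        if v < threshold and start is None:
            start = i
        elif v >= threshold and start is not None:
            periods.append((start, i))
            start = None
    if start is not None:                  # run extends to the end
        periods.append((start, len(signal)))
    return periods

print(apnea_periods([5, 4, 1, 1, 6, 5, 0, 1, 7], threshold=2))  # prints [(2, 4), (6, 8)]
```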

在步驟S335中，處理器110可根據呼吸停滯時段決定建議入針時段。舉例來說，處理器110可選擇時段810以及時段820的至少其中之一以作為建議入針時段。In step S335, the processor 110 may determine the suggested needle insertion period according to the apnea period. For example, the processor 110 may select at least one of the period 810 and the period 820 as the suggested needle insertion period.

在一實施例中，處理器110可根據目標20的當前呼吸狀態決定建議入針時段。具體來說，處理器110可通過收發器130取得對應於呼吸停滯時段的呼吸訊號(或稱為「第二呼吸訊號」、「呼吸停滯訊號」)。舉例來說，處理器110可通過收發器130接收對應於呼吸停滯時段的目標20的心電圖訊號，並且將所述心電圖訊號轉換為呼吸停滯訊號。處理器110可根據R尺寸方法或RR方法以將心電圖訊號轉換為呼吸停滯訊號。In one embodiment, the processor 110 may determine the suggested needle insertion period according to the current breathing state of the target 20. Specifically, the processor 110 can obtain a breathing signal corresponding to the apnea period (also referred to as the "second breathing signal" or the "apnea signal") through the transceiver 130. For example, the processor 110 may receive the ECG signal of the target 20 corresponding to the apnea period through the transceiver 130 and convert the ECG signal into the apnea signal. The processor 110 can convert the ECG signal into the apnea signal according to the R-size method or the RR method.

接著，處理器110可根據呼吸停滯訊號產生時域參數以及頻率參數。舉例來說，處理器110可根據與步驟S332相似的方式產生對應於呼吸停滯訊號的時域參數以及頻率參數。處理器110可基於機器學習模型123而根據時域參數以及頻域參數判斷目標20的當前呼吸狀態是否正常。處理器110可將時域參數以及頻域參數輸入至機器學習模型123。機器學習模型123可根據輸入資料輸出代表當前呼吸狀態正常或當前呼吸狀態異常的輸出資料。處理器110可響應於當前呼吸狀態正常而將對應於呼吸停滯訊號的呼吸停滯時段設為建議入針時段。Then, the processor 110 can generate time-domain parameters and frequency parameters according to the apnea signal. For example, the processor 110 may generate the time-domain parameters and frequency parameters corresponding to the apnea signal in a manner similar to step S332. Based on the machine learning model 123, the processor 110 may determine whether the current breathing state of the target 20 is normal according to the time-domain parameters and the frequency-domain parameters. The processor 110 can input the time-domain parameters and the frequency-domain parameters into the machine learning model 123, and the machine learning model 123 can output data representing a normal or abnormal current breathing state according to the input data. The processor 110 may set the apnea period corresponding to the apnea signal as the suggested needle insertion period in response to the current breathing state being normal.

在步驟S336中，處理器110可通過收發器130輸出建議入針時段。In step S336, the processor 110 may output the suggested needle insertion period through the transceiver 130.

在一實施例中，為了避免目標20受到傷害，處理器110可通過收發器130指示機械手臂30在建議入針時段以外的時段鬆開夾具31，藉以使針具40能隨著目標20的呼吸起伏移動。In one embodiment, in order to prevent the target 20 from being injured, the processor 110 can instruct the robotic arm 30 through the transceiver 130 to release the clamp 31 during periods other than the suggested needle insertion period, so that the needle 40 can move with the breathing motion of the target 20.

在一實施例中，處理器110可根據建議入針時段決定針具40的針具移動時間以及針具移動距離。處理器110可通過收發器130輸出針具移動時間以及針具移動距離。舉例來說，處理器110可決定針具移動時間(或針具移動距離)與建議入針時段的長度成正比。In one embodiment, the processor 110 may determine a needle moving time and a needle moving distance of the needle 40 according to the suggested needle insertion period. The processor 110 can output the needle moving time and the needle moving distance through the transceiver 130. For example, the processor 110 may make the needle moving time (or the needle moving distance) proportional to the length of the suggested needle insertion period.

處理器110可執行控制校正程序(即:步驟S340)以確保入針手術順利地執行。圖9根據本揭露的一實施例繪示步驟S340的流程圖，其中步驟S340可包含步驟S341~S343。在步驟S341中，處理器110可取得目標20在入針手術期間的CT影像。CT影像可包含目標20、針具40、標記50以及目標物件60。在步驟S342中，處理器110可根據CT影像判斷針具40的當前路徑是否符合建議入針路徑15。在步驟S343中，處理器110可根據針具40的當前路徑調整機械手臂30。具體來說，若針具40的當前路徑與建議入針路徑15相符，則處理器110可維持機械手臂30的原先配置。若針具40的當前路徑與建議入針路徑15不相符，則處理器110可根據針具40的當前路徑更新建議入針路徑，並且通過收發器130輸出更新的建議入針路徑。The processor 110 may execute a control correction procedure (i.e., step S340) to ensure that the needle insertion operation is performed smoothly. FIG. 9 shows a flowchart of step S340 according to an embodiment of the present disclosure, wherein step S340 may include steps S341 to S343. In step S341, the processor 110 may obtain a CT image of the target 20 during the needle insertion operation. The CT image may include the target 20, the needle 40, the marker 50, and the target object 60. In step S342, the processor 110 may determine whether the current path of the needle 40 conforms to the suggested needle insertion path 15 according to the CT image. In step S343, the processor 110 can adjust the robotic arm 30 according to the current path of the needle 40. Specifically, if the current path of the needle 40 matches the suggested needle insertion path 15, the processor 110 may maintain the original configuration of the robotic arm 30. If the current path of the needle 40 does not match the suggested needle insertion path 15, the processor 110 may update the suggested needle insertion path according to the current path of the needle 40 and output the updated suggested needle insertion path through the transceiver 130.
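The conformance check of step S342 can be sketched geometrically: sample points along the needle's current path are compared against the line through two points of the suggested path, using the point-to-line distance. The tolerance value and the two-point representation of the suggested path are illustrative assumptions.

```python
import math

def path_conforms(needle_points, path_a, path_b, tol=1.0):
    """True if every sampled needle point lies within tol of the line
    through path_a and path_b (two points on the suggested path)."""
    ab = [b - a for a, b in zip(path_a, path_b)]
    ab_len = math.hypot(*ab)
    for p in needle_points:
        ap = [q - a for a, q in zip(path_a, p)]
        # point-to-line distance = |ap x ab| / |ab|
        cross = (ap[1] * ab[2] - ap[2] * ab[1],
                 ap[2] * ab[0] - ap[0] * ab[2],
                 ap[0] * ab[1] - ap[1] * ab[0])
        if math.hypot(*cross) / ab_len > tol:
            return False
    return True
```

A True result corresponds to keeping the arm's configuration; a False result corresponds to updating the suggested path.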

圖10根據本揭露的一實施例繪示一種電腦輔助入針方法的流程圖。電腦輔助入針方法適用於控制針具，並且電腦輔助入針方法可由如圖1所示的電腦輔助入針系統100實施。在步驟S111中，取得第一機器學習模型以及第二機器學習模型。在步驟S112中，取得電腦斷層攝影影像以及入針路徑，並且根據第一機器學習模型、電腦斷層攝影影像以及入針路徑產生建議入針路徑，並指示針具接近目標的表面上的入針點，其中入針點位於建議入針路徑。在步驟S113中，取得目標的呼吸訊號，根據第二機器學習模型以及呼吸訊號估計目標的未來呼吸狀態是否正常。在步驟S114中，響應於判斷未來呼吸狀態正常，根據呼吸訊號輸出建議入針時段。FIG. 10 shows a flowchart of a computer-assisted needle insertion method according to an embodiment of the present disclosure. The computer-assisted needle insertion method is suitable for controlling a needle and can be implemented by the computer-assisted needle insertion system 100 shown in FIG. 1. In step S111, a first machine learning model and a second machine learning model are obtained. In step S112, a computed tomography image and a needle insertion path are obtained, a suggested needle insertion path is generated according to the first machine learning model, the computed tomography image, and the needle insertion path, and the needle is instructed to approach a needle insertion point on the surface of the target, wherein the needle insertion point is located on the suggested needle insertion path. In step S113, a breathing signal of the target is obtained, and whether the future breathing state of the target is normal is estimated according to the second machine learning model and the breathing signal. In step S114, in response to determining that the future breathing state is normal, a suggested needle insertion period is output according to the breathing signal.

綜上所述,本揭露可在入針手術執行前根據深度學習技術固定針具的位置,使針具接近目標表面上的理想入針點。施術者可根據入針點精準地將針具插入目標物件。此外,本揭露可根據機器學習技術判斷目標的呼吸狀態是否正常,藉以避免施術者在目標呼吸狀態異常的情況下執行入針手術,導致入針手術受到干擾。再者,本揭露還可根據目標的呼吸訊號判斷目標的呼吸停滯時段,並且建議施術者在呼吸停滯時段執行入針手術,藉此將目標的呼吸對入針手術的影響降到最低。在非呼吸停滯時段期間,電腦輔助入針系統可控制機械手臂鬆開用以固定針具的夾具,降低針具對使用者身體的損害。To sum up, the present disclosure can fix the position of the needle according to the deep learning technology before the needle insertion operation, so that the needle can approach the ideal needle insertion point on the target surface. The operator can accurately insert the needle into the target object according to the needle insertion point. In addition, the present disclosure can judge whether the breathing state of the target is normal according to the machine learning technology, so as to prevent the operator from performing the needle insertion operation when the breathing state of the target is abnormal, causing the needle insertion operation to be disturbed. Furthermore, the present disclosure can also determine the target's breathing stagnation period according to the breathing signal of the target, and suggest the operator to perform the needle insertion operation during the breathing stagnation period, so as to minimize the influence of the target's breathing on the needle insertion operation. During the period of non-respiratory arrest, the computer-assisted needle insertion system can control the mechanical arm to release the clamp used to fix the needle, reducing the damage of the needle to the user's body.

10:入針路徑 100:電腦輔助入針系統 110:處理器 120:儲存媒體 121、122、123:機器學習模型 130:收發器 15:建議入針路徑 20:目標 21:表面 30:機械手臂 31:夾具 40:針具 41:針尖 50:標記 51、52、53、54:子標記 60:目標物件 600:呼吸訊號 61、63:呼吸起始時間點 610、620:呼吸週期 62、64:呼吸停止時間點 700:特徵訊號 710、720、810、820:時段 800:閾值 S111、S112、S113、S114、S310、S320、S321、S322、S323、S324、S330、S331、S332、S333、S334、S335、S336、S340、S341、S342、S343:步驟10: Needle path 100: Computer Aided Needle Insertion System 110: Processor 120:storage media 121, 122, 123: Machine Learning Models 130: Transceiver 15: Suggested insertion path 20: target 21: surface 30: Mechanical arm 31: Fixture 40: Needle 41: needle tip 50:mark 51, 52, 53, 54: Sub-tags 60: target object 600: breathingsignal 61, 63: Respiration starttime point 610, 620: breathingcycle 62, 64: Respiration stop time point 700:characteristic signal 710, 720, 810, 820: time slot 800: Threshold S111, S112, S113, S114, S310, S320, S321, S322, S323, S324, S330, S331, S332, S333, S334, S335, S336, S340, S341, S342, S343: steps

圖1根據本揭露的一實施例繪示一種電腦輔助入針系統的示意圖。 圖2根據本揭露的一實施例繪示對目標進行入針的示意圖。 圖3根據本揭露的一實施例繪示一種電腦輔助入針方法的流程圖。 圖4根據本揭露的一實施例繪示步驟S320的流程圖。 圖5根據本揭露的一實施例繪示步驟S330的流程圖。 圖6根據本揭露的一實施例繪示呼吸訊號的示意圖。 圖7根據本揭露的一實施例繪示標記的示意圖。 圖8根據本揭露的一實施例繪示特徵訊號的示意圖。 圖9根據本揭露的一實施例繪示步驟S340的流程圖。 圖10根據本揭露的一實施例繪示一種電腦輔助入針方法的流程圖。FIG. 1 is a schematic diagram of a computer-aided needle insertion system according to an embodiment of the present disclosure. FIG. 2 shows a schematic diagram of inserting a needle into a target according to an embodiment of the present disclosure. FIG. 3 shows a flow chart of a computer-assisted needle insertion method according to an embodiment of the present disclosure. FIG. 4 shows a flowchart of step S320 according to an embodiment of the present disclosure. FIG. 5 shows a flowchart of step S330 according to an embodiment of the present disclosure. FIG. 6 shows a schematic diagram of a breathing signal according to an embodiment of the present disclosure. FIG. 7 shows a schematic diagram of a marker according to an embodiment of the present disclosure. FIG. 8 is a schematic diagram illustrating characteristic signals according to an embodiment of the present disclosure. FIG. 9 shows a flowchart of step S340 according to an embodiment of the present disclosure. FIG. 10 shows a flowchart of a computer-assisted needle insertion method according to an embodiment of the present disclosure.

S111、S112、S113、S114:步驟S111, S112, S113, S114: steps

Claims (11)

Translated from Chinese
一種電腦輔助入針系統，適用於控制針具，包括：儲存媒體，儲存第一機器學習模型、第二機器學習模型以及第三機器學習模型，其中所述第一機器學習模型為深度Q-學習模型；以及處理器，耦接所述儲存媒體，其中所述處理器經配置以執行：取得電腦斷層攝影影像以及入針路徑，並且根據所述第一機器學習模型、所述電腦斷層攝影影像以及所述入針路徑產生建議入針路徑，並指示所述針具接近目標的表面上的入針點，其中所述入針點位於所述建議入針路徑，其中所述電腦斷層攝影影像包括所述針具、所述目標的所述表面上的標記以及目標物件，所述Q-學習模型的狀態集合中的狀態包括所述針具的針尖的第一座標、所述標記的第二座標以及所述目標物件的第三座標；取得所述目標的呼吸訊號，根據所述第二機器學習模型以及所述呼吸訊號估計所述目標的未來呼吸狀態是否正常；以及響應於判斷所述未來呼吸狀態正常，根據所述呼吸訊號輸出建議入針時段，包括：根據所述呼吸訊號判斷所述第二座標的位移量，根據所述位移量決定時域參數；根據所述時域參數產生特徵訊號；響應於所述特徵訊號在時段期間小於閾值，將所述時段設為呼吸停滯時段；以及根據所述呼吸停滯時段決定所述建議入針時段，包括：取得對應於所述呼吸停滯時段的第二呼吸訊號；自所述第二呼吸訊號擷取出呼吸參數；根據所述第二呼吸訊號判斷所述第二座標的第二位移量，並且根據所述第二位移量決定第二時域參數；對所述第二呼吸訊號執行短時傅立葉轉換以產生頻域參數；基於所述第三機器學習模型而根據所述呼吸參數、所述第二時域參數以及所述頻域參數判斷所述目標的當前呼吸狀態是否正常；以及響應於所述當前呼吸狀態正常，將所述呼吸停滯時段設為所述建議入針時段。A computer-assisted needle insertion system, suitable for controlling a needle, comprising: a storage medium storing a first machine learning model, a second machine learning model, and a third machine learning model, wherein the first machine learning model is a deep Q-learning model; and a processor coupled to the storage medium, wherein the processor is configured to: obtain a computed tomography image and a needle insertion path, generate a suggested needle insertion path according to the first machine learning model, the computed tomography image, and the needle insertion path, and instruct the needle to approach a needle insertion point on a surface of a target, wherein the needle insertion point is located on the suggested needle insertion path, wherein the computed tomography image includes the needle, a marker on the surface of the target, and a target object, and the states in the state set of the Q-learning model include a first coordinate of a needle tip of the needle, a second coordinate of the marker, and a third coordinate of the target object; obtain a breathing signal of the target, and estimate whether a future breathing state of the target is normal according to the second machine learning model and the breathing signal; and in response to judging the future breathing state
normal, output a suggested needle insertion period according to the breathing signal, including: judging a displacement of the second coordinate according to the breathing signal, and determining a time-domain parameter according to the displacement; generating a characteristic signal according to the time-domain parameter; in response to the characteristic signal being less than a threshold during a time period, setting the time period as an apnea period; and determining the suggested needle insertion period according to the apnea period, including: obtaining a second breathing signal corresponding to the apnea period; extracting a breathing parameter from the second breathing signal; determining a second displacement of the second coordinate according to the second breathing signal, and determining a second time-domain parameter according to the second displacement; performing a short-time Fourier transform on the second breathing signal to generate a frequency-domain parameter; judging, based on the third machine learning model, whether a current breathing state of the target is normal according to the breathing parameter, the second time-domain parameter, and the frequency-domain parameter; and in response to the current breathing state of the target being normal, setting the apnea period as the suggested needle insertion period.如請求項1所述的電腦輔助入針系統，其中所述Q-學習模型的獎勵關聯於下列的至少其中之一：所述第一座標與所述第三座標之間的第一距離；所述第一座標、所述第二座標和所述第三座標所形成的單位向量與所述入針路徑之間的夾角；所述單位向量與所述入針路徑之間的第二距離；所述第一座標停止更新的時間；以及所述第一座標與所述表面之間的第三距離。The computer-assisted needle insertion system according to claim 1, wherein the reward of the Q-learning model is associated with at least one of the following: a first distance between the first coordinate and the third coordinate; an angle between the needle insertion path and a unit vector formed by the first coordinate, the second coordinate, and the third coordinate; a second distance between the unit vector and the needle insertion
path; a time when the first coordinate stops being updated; and a third distance between the first coordinate and the surface.如請求項1所述的電腦輔助入針系統,更包括:收發器,耦接所述處理器,其中所述處理器通過所述收發器通訊連接至機械手臂,其中所述處理器通過所述收發器傳送指令至所述機械手臂以控制所述針具,其中所述Q-學習模型的動作集合中的動作包括:所述機械手臂的關節動作。The computer-aided needle insertion system according to claim 1, further comprising: a transceiver, coupled to the processor, wherein the processor is communicatively connected to the robotic arm through the transceiver, wherein the processor communicates with the robotic arm through the The transceiver transmits instructions to the robotic arm to control the needle, wherein the actions in the action set of the Q-learning model include: joint actions of the robotic arm.如請求項3所述的電腦輔助入針系統,其中所述處理器從所述動作集合中選出對應於所述Q-學習模型的累積獎勵的最大期望值的所述動作,並且根據所述動作更新所述Q-學習模型的所述狀態以訓練所述Q-學習模型。The computer-aided needle insertion system according to claim 3, wherein the processor selects the action corresponding to the maximum expected cumulative reward of the Q-learning model from the action set, and updates the action according to the action The state of the Q-learning model to train the Q-learning model.如請求項4所述的電腦輔助入針系統,其中所述處理器根據更新的所述狀態決定所述建議入針路徑。The computer-aided needle insertion system according to claim 4, wherein the processor determines the suggested needle insertion path according to the updated state.如請求項3所述的電腦輔助入針系統,其中所述機械手臂包括固定所述針具的夾具,其中所述處理器通過所述收發器指示所述機械手臂在所述建議入針時段以外的時段鬆開所述夾具。The computer-aided needle insertion system according to claim 3, wherein the robotic arm includes a clamp for fixing the needle, wherein the processor instructs the robotic arm to be outside the recommended needle insertion time period through the transceiver period of time to loosen the clamp.如請求項1所述的電腦輔助入針系統,其中所述處理器更經配置以執行:根據歷史資料決定參考呼吸週期,並且根據所述參考呼吸週期決定取樣時段;根據所述取樣時段取樣心電圖訊號;以及將所述心電圖訊號轉換為所述呼吸訊號。The computer-assisted needle insertion system according to claim 1, wherein the processor is further configured to execute: determine a reference breathing cycle according to 
historical data, and determine a sampling period according to the reference breathing cycle; sample an electrocardiogram according to the sampling period signal; and converting the electrocardiogram signal into the breathing signal.如請求項1所述的電腦輔助入針系統,其中所述處理器更經配置以執行:自所述呼吸訊號擷取出第二呼吸參數;對所述呼吸訊號執行短時傅立葉轉換以產生第二頻域參數;基於所述第三機器學習模型而根據所述第二呼吸參數、所述時域參數以及所述第二頻域參數判斷所述目標的長期呼吸狀態是否正常;以及響應於所述長期呼吸狀態正常,估計所述目標的所述未來呼吸狀態是否正常。The computer-assisted needle insertion system according to claim 1, wherein the processor is further configured to perform: extracting a second breathing parameter from the breathing signal; performing a short-time Fourier transform on the breathing signal to generate a second frequency domain parameters; based on the third machine learning model, judging whether the long-term respiratory state of the target is normal according to the second breathing parameters, the time domain parameters and the second frequency domain parameters; and responding to the The long-term breathing state is normal, and it is estimated whether the future breathing state of the target is normal.如請求項8所述的電腦輔助入針系統,其中所述標記包括對應於第一子座標的第一子標記以及對應於第二子座標的第二子標記,其中所述時域參數包括下列的至少其中之一:所述第一子座標在X軸的第一位移量;所述第一子座標在Y軸的第二位移量;所述第一子座標在Z軸的第三位移量;所述第一位移量、所述第二位移量以及所述第三位移量的第一總和;所述第二子座標在所述X軸的第四位移量;所述第二子座標在所述Y軸的第五位移量;所述第二子座標在所述Z軸的第六位移量;所述第四位移量、所述第五位移量以及所述第六位移量的第二總和;以及所述第一總和與所述第二總和的總和。The computer-aided needle insertion system according to claim 8, wherein the markers include a first sub-mark corresponding to the first sub-coordinate and a second sub-mark corresponding to the second sub-coordinate, wherein the time-domain parameters include the following At least one of: the first displacement of the first sub-coordinate on the X-axis; the second displacement of the first sub-coordinate on the Y-axis; the third displacement of the first sub-coordinate on the Z-axis ; The first sum of the first displacement, the second displacement and the third displacement; the fourth displacement of the second sub-coordinate on the X axis; 
the second sub-coordinate in The fifth displacement of the Y axis; the sixth displacement of the second sub-coordinate on the Z axis; the second of the fourth displacement, the fifth displacement, and the sixth displacement a sum; and the sum of said first sum and said second sum.如請求項8所述的電腦輔助入針系統,其中所述處理器更經配置以執行:基於所述第二機器學習模型而根據所述時域參數以及所述第二頻域參數估計所述目標的所述未來呼吸狀態是否正常。The computer-aided needle insertion system according to claim 8, wherein the processor is further configured to perform: based on the second machine learning model, according to the time-domain parameters and the firstTwo frequency domain parameters estimate whether the future breathing state of the target is normal.如請求項1所述的電腦輔助入針系統,其中所述處理器根據所述建議入針時段決定針具移動時間以及針具移動距離,並且輸出所述針具移動時間以及所述針具移動距離。The computer-aided needle insertion system according to claim 1, wherein the processor determines the needle movement time and the needle movement distance according to the suggested needle insertion period, and outputs the needle movement time and the needle movement distance.
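The respiratory-arrest step of claim 1 — a characteristic signal derived from marker displacement staying below a threshold for a whole time period — can be sketched as follows. This is a minimal illustration, not the patented implementation: the moving-average characteristic signal, the function name, and the minimum-duration parameter are all assumptions.

```python
import numpy as np

def find_arrest_periods(displacement, fs, threshold, min_duration_s=1.0):
    """Return (start_s, end_s) spans where the characteristic signal stays
    below `threshold` for at least `min_duration_s` seconds."""
    # Characteristic signal: a short moving average of |displacement|
    # (an assumed choice; the claim only requires some time-domain feature).
    window = max(1, int(0.25 * fs))
    kernel = np.ones(window) / window
    feature = np.convolve(np.abs(displacement), kernel, mode="same")

    below = feature < threshold
    periods, start = [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i                       # candidate arrest begins
        elif not flag and start is not None:
            if (i - start) / fs >= min_duration_s:
                periods.append((start / fs, i / fs))
            start = None
    # Close a span that runs to the end of the signal.
    if start is not None and (len(below) - start) / fs >= min_duration_s:
        periods.append((start / fs, len(below) / fs))
    return periods
```

Each returned span is a candidate respiratory arrest period; per the claim, it would then be confirmed by the third machine learning model before being set as the suggested needle insertion period.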
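The Q-learning behaviour recited in claims 3 to 5 — choosing, from the action set, the action with the maximum expected cumulative reward and updating the model's state — is the standard tabular Q-learning rule. A minimal sketch follows; the flat state/action indices are an illustrative encoding, and the geometry-based reward of claim 2 is not modelled here.

```python
import numpy as np

def select_action(q_table, state):
    """Greedy choice: the action whose expected cumulative reward is maximal."""
    return int(np.argmax(q_table[state]))

def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning backup:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = np.max(q_table[next_state])
    q_table[state, action] += alpha * (reward + gamma * best_next
                                       - q_table[state, action])
```

In the patent, the actions correspond to joint motions of the robotic arm and the reward encodes the geometric terms of claim 2; here, states and actions are plain table indices.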
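Claim 7 describes sampling an electrocardiogram signal and converting it into the breathing signal. One known family of techniques for this is ECG-derived respiration (EDR), which tracks the slow, breathing-driven modulation of R-peak amplitudes. The sketch below follows that idea with a deliberately crude peak picker; the function name, threshold, and refractory period are assumptions, not the claimed method.

```python
import numpy as np

def ecg_to_breathing(ecg, fs, min_rr_s=0.4):
    """Derive a surrogate breathing signal from R-peak amplitude modulation."""
    # Crude R-peak picker: local maxima above the 90th percentile,
    # separated by a refractory period (assumed parameters).
    thresh = np.percentile(ecg, 90)
    refractory = int(min_rr_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > thresh and ecg[i] >= ecg[i - 1]
                and ecg[i] >= ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    peaks = np.array(peaks)
    # The R-peak amplitude series, resampled onto a uniform time grid,
    # is the surrogate breathing signal.
    t_uniform = np.arange(len(ecg)) / fs
    return np.interp(t_uniform, peaks / fs, ecg[peaks])
```

In the claim, the ECG sampling period itself is chosen from a reference breathing cycle obtained from historical data; that scheduling step is omitted here.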
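Claims 1 and 8 both generate frequency-domain parameters by a short-time Fourier transform of a breathing signal. Below is a minimal sketch of one plausible such parameter — the dominant breathing frequency per analysis window — using a Hann-windowed FFT; the window and hop lengths are assumed values, not taken from the patent.

```python
import numpy as np

def stft_breathing_features(signal, fs, win_s=8.0, hop_s=2.0):
    """Return the dominant frequency (Hz) of each short-time window."""
    win = int(win_s * fs)
    hop = int(hop_s * fs)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    dominant = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * np.hanning(win)
        mag = np.abs(np.fft.rfft(frame))
        mag[0] = 0.0            # ignore the DC component
        dominant.append(freqs[np.argmax(mag)])
    return np.array(dominant)
```

A classifier such as the claimed third machine learning model could take this per-window frequency track, together with the time-domain displacement parameters, as input features.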
TW110136416A | 2020-12-29 | 2021-09-30 | Computer-assisted needle insertion system and computer-assisted needle insertion method | TWI792592B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
US17/534,418 (US12171460B2) | 2020-12-29 | 2021-11-23 | Computer-assisted needle insertion system and computer-assisted needle insertion method
US18/942,801 (US20250064478A1) | 2020-12-29 | 2024-11-11 | Computer-assisted needle insertion method

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date
US202063131765P | 2020-12-29 | 2020-12-29
US63/131,765 | 2020-12-29

Publications (2)

Publication Number | Publication Date
TW202226031A (en) | 2022-07-01
TWI792592B (en) | 2023-02-11 (granted)

Family

ID=83436741

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
TW110136416A (TWI792592B) | Computer-assisted needle insertion system and computer-assisted needle insertion method | 2020-12-29 | 2021-09-30

Country Status (1)

Country | Link
TW (1) | TWI792592B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20190117317A1 (en)* | 2016-04-12 | 2019-04-25 | Canon U.S.A., Inc. | Organ motion compensation
US20190223958A1 (en)* | 2018-01-23 | 2019-07-25 | Inneroptic Technology, Inc. | Medical image guidance
TW201941218A (en)* | 2018-01-08 | 2019-10-16 | Progenics Pharmaceuticals, Inc. | Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
CN110364238A (en)* | 2018-04-11 | 2019-10-22 | Siemens Healthcare GmbH | Machine learning-based contrast agent management
CN111565632A (en)* | 2017-10-13 | 2020-08-21 | Autem Medical, LLC | Systems and microtubule conductivity for characterizing, diagnosing, and treating a patient's health condition, and methods of using the same


Also Published As

Publication number | Publication date
TW202226031A (en) | 2022-07-01

Similar Documents

Publication | Publication Date | Title
JP4836122B2 (en) Surgery support apparatus, method and program
JP6813592B2 (en) Organ movement compensation
JP6843639B2 (en) Ultrasonic diagnostic device and ultrasonic diagnostic support device
CN104055520B (en)Human organ motion monitoring method and operation guiding system
JP6441051B2 (en) Dynamic filtering of mapping points using acquired images
JP2020520691A (en) Biopsy devices and systems
JP6475324B2 (en) Optical tracking system and coordinate system matching method of optical tracking system
US20110160569A1 (en) System and method for real-time surface and volume mapping of anatomical structures
JP2007083038A5 (en)
JP2019528899A (en) Visualization of image objects related to instruments in extracorporeal images
US20250064478A1 (en) Computer-assisted needle insertion method
CN114073581B (en) Bronchus electromagnetic navigation system
JP2022095582A (en) Probe cavity motion modeling
CN114748141A (en) Puncture needle three-dimensional pose real-time reconstruction method and device based on X-ray image
JP2016534811A (en) Support device for supporting alignment of imaging device with respect to position and shape determination device
CN111403017A (en) Medical assistance device, system, and method for determining a deformation of an object
US20230186471A1 (en) Providing a specification
JP2019063518A (en) Estimation of ablation size and visual representation
JP2007537816A (en) Medical imaging system for mapping the structure of an object
US20240099776A1 (en)Systems and methods for integrating intraoperative image data with minimally invasive medical techniques
TWI792592B (en) Computer-assisted needle insertion system and computer-assisted needle insertion method
US7039226B2 Method and apparatus for modeling momentary conditions of medical objects dependent on at least one time-dependent body function
JP6881945B2 (en) Volume measurement map update
WO2024222402A1 (en) Catheter robot and registration method thereof
JP2023513383A (en) Medical imaging systems, devices, and methods for visualizing deployment of internal therapeutic devices
