CN113378759B - VR intelligent glasses-based oral cavity examination method and system - Google Patents

VR intelligent glasses-based oral cavity examination method and system

Info

Publication number
CN113378759B
CN113378759B
Authority
CN
China
Prior art keywords: doctor, oral, person, payment, oral cavity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110709366.6A
Other languages
Chinese (zh)
Other versions
CN113378759A (en)
Inventor
杨秀巧
文萍
张锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Maternity & Child Healthcare Hospital
Original Assignee
Shenzhen Maternity & Child Healthcare Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Maternity & Child Healthcare Hospital
Priority to CN202110709366.6A
Publication of CN113378759A
Application granted
Publication of CN113378759B
Legal status: Active (Current)
Anticipated expiration

Abstract

When it is detected that the state of the VR intelligent glasses changes from an unworn state to a worn state, an internal miniature camera module arranged on the inner wall of the glasses frame detects whether an eye image of the wearer is acquired within a specified duration; if so, iris feature data are extracted from the eye image. If the iris feature data are verified to match pre-stored iris data of a doctor, a shooting instruction is sent to an external miniature shooting module to trigger it to shoot the oral cavity of the checked person. The real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity is then acquired and output, so that the wearer can perform the oral examination on the checked person through the real-time video image while keeping a head-up line of sight. By implementing the embodiments of the application, doctors are spared from frequently lowering their heads during oral examinations, which helps reduce the incidence of cervical spondylosis.

Description

VR intelligent glasses-based oral cavity examination method and system
Technical Field
The application relates to the technical field of oral cavity examination, in particular to an oral cavity examination method and system based on VR intelligent glasses.
Background
Currently, when a doctor performs an oral examination on an examinee (e.g., a child examinee), it is often necessary to frequently lower his head so that the doctor's line of sight enters the examinee's oral cavity. In practice, it has been found that frequent head lowering by doctors for oral examination of the examinee easily causes cervical spondylosis.
Disclosure of Invention
The embodiments of the application disclose an oral examination method and system based on VR intelligent glasses, which spare doctors from frequently lowering their heads when performing oral examinations on a checked person and help reduce the incidence of cervical spondylosis.
The first aspect of the embodiment of the application discloses an oral cavity inspection method based on VR intelligent glasses, wherein the VR intelligent glasses are in communication connection with an external miniature shooting module through a soft connecting wire, and the method comprises the following steps:
when the VR intelligent glasses detect that their state changes from the unworn state to the worn state, they detect, through an internal miniature camera module arranged on the inner wall of the glasses frame of the VR intelligent glasses, whether an eye image of the wearer is acquired within a first specified duration; if an eye image of the wearer is acquired within the first specified duration, iris feature data are extracted from the eye image of the wearer;
The VR intelligent glasses check whether the iris characteristic data are matched with iris data of a doctor stored in advance, and if so, a shooting instruction is sent to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person;
the VR intelligent glasses acquire and output the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity, so that the wearer performs the oral examination on the checked person through the real-time video image while keeping a head-up line of sight.
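To make the claimed flow concrete, the following is a minimal sketch in Python of the first-aspect flow. All device helpers (wear_state_changed_to_worn, the internal camera capture, iris extraction and similarity, the shoot instruction, the display call) are hypothetical placeholders introduced for illustration and are not part of the patent.

```python
import time

IRIS_MATCH_THRESHOLD = 0.95  # example threshold; the patent only requires similarity above "a certain threshold"

def run_oral_exam_session(glasses, external_camera, stored_doctor_irises,
                          first_specified_duration=5.0):
    """Trigger the external miniature shooting module once the wearer's iris
    is verified against a pre-stored doctor's iris data."""
    if not glasses.wear_state_changed_to_worn():          # unworn -> worn transition
        return None

    # Try to acquire an eye image via the internal miniature camera module
    # within the first specified duration.
    deadline = time.monotonic() + first_specified_duration
    eye_image = None
    while eye_image is None and time.monotonic() < deadline:
        eye_image = glasses.internal_camera.capture_eye_image()
    if eye_image is None:
        return None                                       # no eye image: end the flow

    iris_features = glasses.extract_iris_features(eye_image)

    # Check the iris feature data against every pre-stored doctor's iris data.
    for doctor_id, doctor_iris in stored_doctor_irises.items():
        if glasses.iris_similarity(iris_features, doctor_iris) >= IRIS_MATCH_THRESHOLD:
            external_camera.send_shoot_instruction()      # over the soft connecting wire
            return glasses.display_stream(external_camera)  # real-time video, head-up viewing
    return None
```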
As an optional implementation manner, in the first aspect of the embodiment of the present application, after verifying that the iris feature data matches iris data of a doctor stored in advance, the method further includes:
the VR intelligent glasses identify, based on the matched iris data of the doctor, whether the doctor is an oral medical practitioner, and if so, execute the step of sending a shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the checked person.
As an optional implementation manner, in the first aspect of the embodiment of the present application, the method further includes:
if the VR intelligent glasses identify that the doctor is not an oral medical practitioner, they identify, based on the matched iris data of the doctor, the oral medical practitioner who guides the doctor;
the VR intelligent glasses send an oral examination assistance permission request for the checked person to a first doctor terminal corresponding to the oral medical practitioner who guides the doctor;
the VR intelligent glasses judge whether a response in which the oral medical practitioner who guides the doctor permits the oral examination assistance request, sent by the first doctor terminal, is received within a second specified duration, and if so, execute the step of sending a shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the checked person.
As an optional implementation manner, in the first aspect of the embodiment of the present application, after verifying that the iris feature data matches iris data of a doctor stored in advance, the method further includes:
the VR intelligent glasses determine, based on the matched iris data of the doctor, a second doctor terminal corresponding to the doctor;
After the VR intelligent glasses acquire real-time video images obtained by shooting the oral cavity of the checked person by the external miniature shooting module and output the real-time video images, the method further comprises:
the VR intelligent glasses synchronize, to the second doctor terminal, the real-time video image and the diagnostic voice uttered by the wearer while performing the oral examination on the checked person through the real-time video image with a head-up line of sight, so that after the external miniature shooting module shoots the checked person's oral cavity, the second doctor terminal obtains the personal identity information of the checked person and establishes a mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice;
when the doctor is an oral medical practitioner, the second doctor terminal uploads the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice to a service device;
or, when the doctor is not an oral medical practitioner, the second doctor terminal uploads the mapping relation to the first doctor terminal corresponding to the oral medical practitioner who guides the doctor, so that the guiding oral medical practitioner rechecks the mapping relation on the first doctor terminal, after which the first doctor terminal uploads the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice to the service device.
As an optional implementation manner, in the first aspect of the embodiments of the present application, the checked person is a student undergoing an oral examination organized by a school, the service device is a service device deployed by the school, and the VR intelligent glasses and the external miniature shooting module are both located in the school's infirmary; the method further includes:
the service equipment acquires the personal identity information of the checked person from the mapping relation and determines guardian equipment bound with the personal identity information of the checked person based on the personal identity information of the checked person;
the service device sends an inquiry message including the personal identity information of the checked person to the guardian device bound with that information, wherein the inquiry message asks whether the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination;
the service device judges whether a reply, sent by the guardian device and indicating that the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination, is received within a third specified duration; if such a reply is received, the personal identity information of the checked person, the real-time video image and the diagnostic voice are sent to the guardian device together.
In an optional implementation manner, after the service device determines that it has received, within the third specified duration, the reply sent by the guardian device bound with the personal identity information of the checked person indicating that the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination, the method further includes:
the service device acquires a first personal punch-card location track of the guardian device bound with the personal identity information of the checked person during its last movement; the first personal punch-card location track includes a first specified number of punch-card locations, and any two of these punch-card locations are different from each other;
the service device inserts a second specified number of non-punch-card locations between every two adjacent punch-card locations in the first personal punch-card location track to form a second personal punch-card location track;
the service device sends the second personal punch-card location track to the guardian device bound with the personal identity information of the checked person;
the service device acquires a third personal punch-card location track sent by the guardian device bound with the personal identity information of the checked person; the third personal punch-card location track is composed of a plurality of target locations selected by the guardian from the second personal punch-card location track;
the service device performs the step of sending the personal identity information of the checked person, the real-time video image and the diagnostic voice to the guardian device together only when it verifies that the third personal punch-card location track is identical to the first personal punch-card location track, that the number of target locations included in the third personal punch-card location track is equal to the first specified number, and that the set formed by these target locations is identical to the set formed by the first specified number of punch-card locations;
the step in which the service device inserts a second specified number of non-punch-card locations between every two adjacent punch-card locations in the first personal punch-card location track to form the second personal punch-card location track includes:
for each pair of adjacent punch-card locations in the first personal punch-card location track, the service device determines all locations situated between the two, and randomly selects a second specified number of them as non-punch-card locations to be displayed between the two adjacent punch-card locations, thereby forming the second personal punch-card location track.
The second aspect of the embodiment of the application discloses an oral cavity checking system based on VR intelligent glasses, which comprises VR intelligent glasses and an external miniature shooting module; wherein, VR intelligence glasses keep communication connection with outside miniature shooting module through soft connecting wire, VR intelligence glasses include:
The detection unit is used for detecting whether eye images of a wearer are acquired within a first appointed duration or not through an internal miniature camera module arranged on the inner wall of a glasses frame of the VR intelligent glasses when detecting that the state of the VR intelligent glasses is changed from a non-worn state to a worn state;
An extracting unit configured to extract iris feature data from an eye image of a wearer when the detecting unit detects that the eye image of the wearer is acquired within the first specified period of time;
the checking unit is used for checking whether the iris characteristic data are matched with iris data of a certain doctor stored in advance;
the sending unit is configured to send a shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the checked person when the checking result of the checking unit is a match;
and the acquisition unit is configured to acquire and output the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity, so that the wearer performs the oral examination on the checked person through the real-time video image while keeping a head-up line of sight.
The third aspect of the embodiment of the application discloses an oral cavity checking system based on VR intelligent glasses, which comprises VR intelligent glasses and an external miniature shooting module; wherein, VR intelligence glasses keep communication connection with outside miniature shooting module through soft connecting wire, VR intelligence glasses include:
A memory storing executable program code;
A processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the steps of:
Detecting whether eye images of a wearer are acquired within a first appointed duration or not through an internal miniature camera module arranged on the inner wall of a glasses frame of the VR intelligent glasses when detecting that the state of the VR intelligent glasses is changed from an unworn state to a worn state; if the eye images of the wearer are acquired within the first appointed duration, extracting iris characteristic data from the eye images of the wearer;
Checking whether the iris characteristic data is matched with iris data of a doctor stored in advance, and if so, sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person;
and acquiring and outputting the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity, so that the wearer performs the oral examination on the checked person through the real-time video image while keeping a head-up line of sight.
In a third aspect of the embodiment of the present application, after verifying that the iris feature data matches with iris data of a doctor stored in advance, the processor further performs the steps of:
identifying, based on the matched iris data of the doctor, whether the doctor is an oral medical practitioner, and if so, executing the step of sending a shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the checked person.
As an alternative implementation manner, in the third aspect of the embodiment of the present application, the processor further performs the following steps:
if it is identified that the doctor is not an oral medical practitioner, identifying, based on the matched iris data of the doctor, the oral medical practitioner who guides the doctor;
sending an oral examination assistance permission request for the checked person to a first doctor terminal corresponding to the oral medical practitioner who guides the doctor;
judging whether a response in which the oral medical practitioner who guides the doctor permits the oral examination assistance request, sent by the first doctor terminal, is received within a second specified duration, and if so, executing the step of sending a shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the checked person.
In a third aspect of the present embodiment, after verifying that the iris feature data matches with iris data of a doctor stored in advance, the processor further performs the following steps:
determining, based on the matched iris data of the doctor, a second doctor terminal corresponding to the doctor;
After the processor acquires and outputs a real-time video image obtained by shooting the oral cavity of the checked person by the external miniature shooting module, the processor also executes the following steps:
synchronizing, to the second doctor terminal, the real-time video image and the diagnostic voice uttered by the wearer while performing the oral examination on the checked person through the real-time video image with a head-up line of sight, so that after the external miniature shooting module shoots the checked person's oral cavity, the second doctor terminal obtains the personal identity information of the checked person and establishes a mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice;
when the doctor is an oral medical practitioner, the second doctor terminal uploads the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice to a service device;
or, when the doctor is not an oral medical practitioner, the second doctor terminal uploads the mapping relation to the first doctor terminal corresponding to the oral medical practitioner who guides the doctor, so that the guiding oral medical practitioner rechecks the mapping relation on the first doctor terminal, after which the first doctor terminal uploads the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice to the service device.
As an optional implementation manner, in the third aspect of the embodiments of the present application, the checked person is a student undergoing an oral examination organized by a school, the service device is a service device deployed by the school, and the VR intelligent glasses and the external miniature shooting module are both located in the school's infirmary; then:
the service equipment acquires the personal identity information of the checked person from the mapping relation and determines guardian equipment bound with the personal identity information of the checked person based on the personal identity information of the checked person;
the service device sends an inquiry message including the personal identity information of the checked person to the guardian device bound with that information, wherein the inquiry message asks whether the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination;
the service device judges whether a reply, sent by the guardian device and indicating that the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination, is received within a third specified duration; if such a reply is received, the personal identity information of the checked person, the real-time video image and the diagnostic voice are sent to the guardian device together.
As an optional implementation manner, in the third aspect of the embodiments of the present application, after the service device determines that it has received, within the third specified duration, the reply sent by the guardian device bound with the personal identity information of the checked person indicating that the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination, the service device further acquires a first personal punch-card location track previously reported by that guardian device and recorded during its last movement; the first personal punch-card location track includes a first specified number of punch-card locations, and any two of these punch-card locations are different from each other;
the service device inserts a second specified number of non-punch-card locations between every two adjacent punch-card locations in the first personal punch-card location track to form a second personal punch-card location track;
the service device sends the second personal punch-card location track to the guardian device bound with the personal identity information of the checked person;
the service device acquires a third personal punch-card location track sent by the guardian device bound with the personal identity information of the checked person; the third personal punch-card location track is composed of a plurality of target locations selected by the guardian from the second personal punch-card location track;
the service device performs the step of sending the personal identity information of the checked person, the real-time video image and the diagnostic voice to the guardian device together only when it verifies that the third personal punch-card location track is identical to the first personal punch-card location track, that the number of target locations included in the third personal punch-card location track is equal to the first specified number, and that the set formed by these target locations is identical to the set formed by the first specified number of punch-card locations;
the step in which the service device inserts a second specified number of non-punch-card locations between every two adjacent punch-card locations in the first personal punch-card location track to form the second personal punch-card location track includes:
for each pair of adjacent punch-card locations in the first personal punch-card location track, the service device determines all locations situated between the two, and randomly selects a second specified number of them as non-punch-card locations to be displayed between the two adjacent punch-card locations, thereby forming the second personal punch-card location track.
A fourth aspect of the embodiments of the present application discloses a computer readable storage medium storing a computer program, where the computer program when executed causes a computer to execute the steps of the VR intelligent glasses-based oral cavity inspection method disclosed in the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
In the embodiments of the application, when the VR intelligent glasses detect that their state changes from the unworn state to the worn state, if the internal miniature camera module arranged on the inner wall of the glasses frame acquires an eye image of the wearer within the first specified duration and the iris feature data extracted from that eye image are verified to match pre-stored iris data of a doctor, the external miniature shooting module that is communicatively connected with the VR intelligent glasses through the soft connecting wire (the external miniature shooting module may be held by the examiner or by the wearer of the VR intelligent glasses) is triggered to shoot the oral cavity of the checked person; the VR intelligent glasses can then acquire the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity and project it onto the lenses of the VR intelligent glasses, so that the wearer of the VR intelligent glasses performs the oral examination on the checked person through the real-time video image while keeping a head-up line of sight. This spares the doctor from frequently lowering the head to examine the checked person's oral cavity and helps reduce the incidence of cervical spondylosis.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a first embodiment of an oral inspection method based on VR smart glasses according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a second embodiment of an oral inspection method based on VR smart glasses according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a first person punch-out location trajectory as disclosed in an embodiment of the present application;
FIG. 4 is a schematic diagram of a second person punch-out location trajectory as disclosed in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a first embodiment of an oral inspection system based on VR smart glasses as disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a second embodiment of an oral inspection system based on VR smart glasses as disclosed in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
It should be noted that the terms "comprises" and "comprising," along with any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the application disclose an oral examination method and system based on VR intelligent glasses, which spare doctors from frequently lowering their heads when performing oral examinations on a checked person and help reduce the incidence of cervical spondylosis. It should be understood that, in the embodiments of the present application, an oral examination includes examination of the oral soft tissues, the hard tissues and the teeth. The soft-tissue examination covers whether the morphology, color and texture of the soft tissues are abnormal. The hard-tissue examination covers whether their morphology and structure are abnormal. The tooth examination includes caries and defects, abnormalities in tooth number and shape, oral hygiene condition, periodontal tissue condition, occlusion, and auxiliary examinations; the auxiliary examinations include radiographic examination (X-ray films), cold and heat testing of the teeth, percussion, and examination of tooth mobility. The following detailed description is made with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a first embodiment of an oral cavity inspection method based on VR smart glasses according to an embodiment of the present application. In the oral cavity inspection method based on VR intelligent glasses described in fig. 1, VR intelligent glasses are in communication connection with an external miniature shooting module through a soft connecting wire. As shown in fig. 1, the oral examination method based on VR smart glasses may include the following steps:
101. When it is detected that the state of the VR intelligent glasses changes from the unworn state to the worn state, whether an eye image of the wearer is acquired within a first specified duration is detected through an internal miniature camera module arranged on the inner wall of the glasses frame of the VR intelligent glasses; if not, the flow ends; if yes, steps 102-103 are performed.
In the embodiment of the application, when the VR intelligent glasses detect, through a built-in wear detection assembly (also called a wear detection sensor), that their state changes from the unworn state to the worn state, they can detect whether an eye image of the wearer is acquired within a first specified duration through the internal miniature camera module arranged on the inner wall of the glasses frame. The first specified duration may be set according to actual needs and is not limited in the embodiment of the present application.
As an optional implementation manner, a fingerprint identification area may be provided on the outer side of one side (such as the right side) of the VR intelligent glasses, with indicator lamps distributed around the fingerprint identification area. When the VR intelligent glasses detect, through the built-in wear detection assembly (also called a wear detection sensor), that their state changes from the unworn state to the worn state, they may light the indicator lamps distributed around the fingerprint identification area to prompt the wearer to press the fingerprint identification area with a finger. When the wearer presses the fingerprint identification area, the VR intelligent glasses identify, through the fingerprint identification area, whether the pressed fingerprint matches any legal fingerprint pre-configured in the VR intelligent glasses. If so, the VR intelligent glasses acquire their current position through a built-in positioning module and judge whether the current position lies within the legal designated use area matched with that legal fingerprint. If so, they start the internal miniature camera module arranged on the inner wall of the glasses frame, so that the started internal miniature camera module detects whether an eye image of the wearer is acquired within the first specified duration. This effectively reduces the power consumption of the VR intelligent glasses and prolongs their battery life.
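A minimal sketch of this optional fingerprint-plus-location gate follows. The fingerprint match (byte-template equality) and the geofence test (rectangular region) are simplified stand-ins chosen for illustration, since the patent specifies neither.

```python
from dataclasses import dataclass

@dataclass
class GeoRegion:
    """Rectangular stand-in for a legal designated use area."""
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return self.lat_min <= lat <= self.lat_max and self.lon_min <= lon <= self.lon_max

def should_start_internal_camera(pressed_template: bytes,
                                 legal_templates: dict,
                                 legal_regions: dict,
                                 current_lat: float,
                                 current_lon: float) -> bool:
    """Start the internal miniature camera module only when the pressed fingerprint
    matches a pre-configured legal fingerprint AND the glasses' current position
    lies inside that fingerprint's legal designated use area."""
    for owner, template in legal_templates.items():
        if pressed_template == template:        # stand-in for a real fingerprint matcher
            region = legal_regions.get(owner)
            return region is not None and region.contains(current_lat, current_lon)
    return False                                # unknown fingerprint: keep the camera off
```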
102. The VR smart glasses extract iris feature data from an eye image of the wearer.
For example, the VR smart glasses may extract iris feature data from the wearer's eye image using iris feature extraction methods including, but not limited to, wavelet packet analysis.
103. The VR intelligent glasses check whether the iris characteristic data are matched with iris data of a doctor stored in advance, and if not, the process is ended; if so, go to step 104-step 105.
It will be appreciated that the matching of the iris feature data with the pre-stored iris data of a certain doctor includes not only that the iris feature data is identical to the pre-stored iris data of a certain doctor, but also that the similarity between the iris feature data and the pre-stored iris data of a certain doctor is higher than a certain threshold (e.g. 95%).
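One possible reading of this matching rule is sketched below, under the assumption that the iris feature data is a fixed-length binary iris code; the patent does not prescribe a similarity measure, so the bit-agreement metric here is illustrative only.

```python
def iris_similarity(code_a: bytes, code_b: bytes) -> float:
    """Fraction of identical bits between two equal-length iris codes."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must have the same length")
    total_bits = len(code_a) * 8
    differing_bits = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return 1.0 - differing_bits / total_bits

def matches_stored_doctor(feature_code: bytes, stored_code: bytes,
                          threshold: float = 0.95) -> bool:
    # "Matching" covers both exact equality and similarity above the threshold (e.g. 95%).
    return feature_code == stored_code or iris_similarity(feature_code, stored_code) > threshold
```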
104. The VR intelligent glasses send shooting instructions to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person.
In the embodiment of the application, the VR intelligent glasses can send a shooting instruction to the external miniature shooting module through the soft connecting wire to trigger it to shoot the oral cavity of the checked person. Specifically, the VR intelligent glasses may obtain pre-stored personal shooting parameters (such as resolution) matched with the iris data of the doctor, and send, to the external miniature shooting module via the soft connecting wire, a shooting instruction including the personal shooting parameters, so as to trigger the external miniature shooting module to shoot the oral cavity of the checked person according to those parameters.
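A sketch of building such a shooting instruction follows; the JSON wire format and the parameter names are assumptions made for illustration, as the patent only states that the instruction carries the personal shooting parameters matched with the doctor's iris data.

```python
import json

def build_shoot_instruction(doctor_id: str, personal_params: dict) -> bytes:
    """Compose the shooting instruction with the doctor's personal shooting parameters."""
    params = personal_params.get(doctor_id, {"resolution": "1920x1080"})  # hypothetical default
    instruction = {"cmd": "shoot_oral_cavity", "params": params}
    return json.dumps(instruction).encode("utf-8")  # payload to send over the soft connecting wire

# Example: per-doctor parameters keyed by the doctor identity matched from the iris data.
PERSONAL_PARAMS = {"doctor_001": {"resolution": "3840x2160", "fps": 30}}
payload = build_shoot_instruction("doctor_001", PERSONAL_PARAMS)
```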
105. The VR intelligent glasses acquire and output the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity, so that the wearer performs the oral examination on the checked person through the real-time video image while keeping a head-up line of sight.
After the VR intelligent glasses acquire the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity, they can project the real-time video image into the wearer's eyes through the optical lenses, so that the wearer can perform the oral examination on the checked person through the real-time video image while keeping a head-up line of sight.
It can be seen that, by implementing the oral examination method based on VR intelligent glasses described in fig. 1, the wearer of the VR intelligent glasses can examine the oral cavity of the checked person by watching the real-time video image with a head-up line of sight, which spares the doctor from frequently lowering the head and helps reduce the occurrence of cervical spondylosis.
Referring to fig. 2, fig. 2 is a flowchart illustrating a second embodiment of an oral cavity inspection method based on VR smart glasses according to an embodiment of the present application. In the oral cavity inspection method based on VR intelligent glasses described in fig. 2, VR intelligent glasses are in communication connection with an external miniature shooting module through a soft connecting wire. As shown in fig. 2, the oral examination method based on VR smart glasses may include the following steps:
201. When it is detected that the state of the VR intelligent glasses changes from the unworn state to the worn state, whether an eye image of the wearer is acquired within a first specified duration is detected through an internal miniature camera module arranged on the inner wall of the glasses frame of the VR intelligent glasses; if not, the flow ends; if yes, steps 202-203 are performed.
202. The VR smart glasses extract iris feature data from an eye image of the wearer.
203. The VR intelligent glasses check whether the iris characteristic data are matched with iris data of a doctor stored in advance, and if not, the process is ended; if so, step 204 is performed.
204. The VR intelligent glasses identify, based on the matched iris data of the doctor, whether the doctor is an oral medical practitioner; if not, steps 205-207 are performed; if so, steps 208-209 are performed.
In the embodiment of the present application, if the VR intelligent glasses identify, based on the matched iris data of the doctor, that the doctor is not an oral medical practitioner, the doctor (i.e., the wearer) may be a doctor guided by an oral medical practitioner, and steps 205-207 are performed accordingly.
205. The VR intelligent glasses identify, based on the matched iris data of the doctor, the oral medical practitioner who guides the doctor.
In the embodiment of the application, the VR intelligent glasses can determine the doctor's identity from the matched iris data of the doctor, and then determine the oral medical practitioner who guides the doctor from that identity.
206. The VR intelligent glasses send an oral examination assistance permission request for the checked person to the first doctor terminal corresponding to the oral medical practitioner who guides the doctor.
For example, the first doctor terminal corresponding to the oral medical practitioner who guides the doctor may be an electronic device used by that practitioner, such as a mobile phone, a computer or smart glasses, which is not limited in the embodiments of the present application.
207. The VR intelligent glasses judge whether a response in which the oral medical practitioner who guides the doctor permits the oral examination assistance request for the checked person, sent by the first doctor terminal, is received within a second specified duration; if not, the flow ends; if so, steps 208-209 are performed.
208. The VR intelligent glasses send shooting instructions to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person.
209. The VR intelligent glasses acquire and output the real-time video image obtained by the external miniature shooting module shooting the checked person's oral cavity, so that the wearer performs the oral examination on the checked person through the real-time video image while keeping a head-up line of sight.
As an alternative embodiment, in the oral examination method based on VR smart glasses as described in fig. 2, after verifying that the iris feature data matches iris data of a doctor stored in advance, the method further includes the steps of:
the VR intelligent glasses determine, based on the matched iris data of the doctor, a second doctor terminal corresponding to the doctor; the second doctor terminal corresponding to the doctor may be an electronic device used by the doctor, such as a mobile phone, a computer or smart glasses, which is not limited in the embodiments of the present application;
After the VR intelligent glasses acquire real-time video images obtained by shooting the oral cavity of the checked person by the external miniature shooting module and output the real-time video images, the method further comprises:
the VR intelligent glasses synchronize, to the second doctor terminal, the real-time video image and the diagnostic voice uttered by the wearer while performing the oral examination on the checked person through the real-time video image with a head-up line of sight, so that after the external miniature shooting module shoots the checked person's oral cavity, the second doctor terminal obtains the personal identity information of the checked person and establishes a mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice;
when the doctor is an oral medical practitioner, the second doctor terminal uploads the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice to a service device;
or, when the doctor is not an oral medical practitioner, the second doctor terminal uploads the mapping relation to the first doctor terminal corresponding to the oral medical practitioner who guides the doctor, so that the guiding oral medical practitioner rechecks the mapping relation on the first doctor terminal, after which the first doctor terminal uploads the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnostic voice to the service device.
In the embodiment of the application, the second doctor terminal corresponding to the doctor can identify whether the doctor is an oral medical practitioner according to the iris data of the doctor.
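The mapping relation and its two upload paths can be sketched as follows; the record fields and the upload callbacks are hypothetical stand-ins for the terminals' actual interfaces, which the patent does not define.

```python
from dataclasses import dataclass

@dataclass
class OralExamRecord:
    """Mapping relation among the checked person's identity information,
    the real-time video image and the diagnostic voice."""
    identity_info: dict     # e.g. {"name": ..., "grade": ..., "class": ..., "number": ...}
    video_path: str
    voice_path: str

def route_record(record: OralExamRecord,
                 doctor_is_oral_practitioner: bool,
                 upload_to_service_device,
                 upload_to_first_doctor_terminal) -> None:
    if doctor_is_oral_practitioner:
        # The second doctor terminal uploads the record directly to the service device.
        upload_to_service_device(record)
    else:
        # The record first goes to the guiding oral medical practitioner's terminal
        # for recheck; that terminal later forwards it to the service device.
        upload_to_first_doctor_terminal(record)
```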
As yet another optional implementation manner, in the oral examination method based on VR intelligent glasses described in fig. 2, the checked person is a student undergoing an oral examination organized by a school, the service device is a service device deployed by the school, and the VR intelligent glasses and the external miniature shooting module are both located in the school's infirmary. The oral examination method based on VR intelligent glasses described in fig. 2 may further include the following steps:
Step A, the service equipment acquires the personal identity information of the checked person from the mapping relation among the personal identity information of the checked person, the real-time video image and the diagnosis voice, and determines guardian equipment bound with the personal identity information of the checked person based on the personal identity information of the checked person; wherein, the personal identity information of the checked person can include the name, grade, class and number of the student.
Step B, the service device sends an inquiry message including the personal identity information of the checked person to the guardian device bound with that information, wherein the inquiry message asks whether the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination.
Step C, the service device judges whether a reply, sent by the guardian device and indicating that the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination, is received within a third specified duration; if so, the personal identity information of the checked person, the real-time video image and the diagnostic voice are sent to the guardian device together.
The third specified duration may be set according to actual needs, which is not limited in the embodiment of the present application.
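Steps A-C amount to a lookup, an inquiry and a timed wait for a reply; the sketch below assumes a hypothetical messaging layer (lookup_guardian_device, send, wait_reply) and an example timeout value, since the patent specifies neither the transport nor the duration.

```python
THIRD_SPECIFIED_DURATION = 300.0  # seconds; example value, the patent leaves this to actual needs

def notify_guardian(service, record):
    """Steps A-C: find the bound guardian device, ask whether it wants the exam
    data, and forward the data only if an affirmative reply arrives in time."""
    identity = record.identity_info
    guardian = service.lookup_guardian_device(identity)          # device bound to the identity info
    service.send(guardian, {"type": "inquiry",
                            "identity": identity,
                            "question": "view oral-exam video and diagnostic voice?"})
    reply = service.wait_reply(guardian, timeout=THIRD_SPECIFIED_DURATION)
    if reply is not None and reply.get("wants_to_view"):
        service.send(guardian, {"type": "exam_data",
                                "identity": identity,
                                "video": record.video_path,
                                "voice": record.voice_path})
```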
As still another optional implementation manner, in the oral examination method based on VR intelligent glasses described in fig. 2, after the service device determines in step C that it has received, within the third specified duration, the reply sent by the guardian device bound with the personal identity information of the checked person indicating that the guardian wants to view the real-time video image and diagnostic voice of the checked person's oral examination, the method may further include the following steps:
Step C11, the service device acquires a first personal punch-card location track of the guardian device bound with the personal identity information of the checked person during its last movement; the first personal punch-card location track includes a first specified number of punch-card locations, and any two of these punch-card locations are different from each other.
Step C12, the service device inserts a second specified number of non-punch-card locations between every two adjacent punch-card locations in the first personal punch-card location track to form a second personal punch-card location track.
As an optional implementation manner, in step C12, inserting a second specified number of non-punch-card locations between every two adjacent punch-card locations of the first personal punch-card location track to form the second personal punch-card location track may include:
for each pair of adjacent punch-card locations in the first personal punch-card location track, the service device determines all locations situated between the two, and randomly selects a second specified number of them as non-punch-card locations to be displayed between the two adjacent punch-card locations, thereby forming the second personal punch-card location track.
Step C13, the service device sends the second personal punch-card location track to the guardian device bound with the personal identity information of the checked person.
Step C14, the service device acquires a third personal punch-card location track sent by the guardian device bound with the personal identity information of the checked person; the third personal punch-card location track is composed of a third specified number of target locations selected by the guardian from the second personal punch-card location track.
Step C15, only when the service device verifies that the third personal punch-card location track is identical to the first personal punch-card location track, that the third specified number is equal to the first specified number, and that the set formed by the third specified number of target locations is identical to the set formed by the first specified number of punch-card locations, does it perform the step of sending the personal identity information of the checked person, the real-time video image and the diagnostic voice to the guardian device together.
By implementing steps C11 to C15, only the guardian bound with the personal identity information of the checked person is allowed to view the real-time video image and diagnostic voice of the checked person's oral examination, which prevents this privacy-sensitive information from being leaked to non-guardians.
As an optional implementation manner, the guardian device bound with the personal identity information of the inspected person in the step C11 may be a mobile phone, a tablet or a PC of the guardian of the inspected person, or a vehicle (such as a new energy automobile) of the guardian of the inspected person.
When the guardian device bound with the personal identity information of the checked person in step C11 is a vehicle (such as a new-energy automobile) of the checked person's guardian, a location punch-card button may further be arranged on the steering wheel of the guardian's vehicle. During each movement (i.e., driving) process, every time the location punch-card button is detected to be pressed, the vehicle records its instantaneous location as a punch-card location; when the number of punch-card locations recorded during that movement reaches the first specified number, the vehicle generates a first personal punch-card location track composed of the recorded first specified number of punch-card locations, any two of which are different from each other.
Further, the guardian's vehicle may also record a generation time point for each stored first personal punch-card location track. The service device may then obtain, from the vehicle's on-board computer and according to these time points, the first personal punch-card location track of the guardian's vehicle during its last movement (for example, the last drive in which the guardian escorted the checked person to school, or the last drive in which the guardian picked the checked person up from school). Alternatively, the service device may obtain that track, according to the generation time point of each reported track, from the first personal punch-card location tracks previously reported by the guardian's vehicle. The first personal punch-card location track of the last movement includes a first specified number of punch-card locations, any two of which are different from each other.
In the embodiment of the application, a positioning module may be built into the vehicle of the checked person's guardian; the positioning module may include an outdoor positioning module (such as a BeiDou, GPS or base-station positioning module) and an indoor positioning module (such as an indoor Wi-Fi positioning module or an indoor ultra-wideband UWB positioning module). During each movement, whenever the guardian finds that the vehicle has reached a location of interest, the guardian can briefly press the location punch-card button on the steering wheel with a finger (such as a thumb); the vehicle then detects one press of the button, locates its instantaneous position through the outdoor and indoor positioning modules, and records that position as a punch-card location.
In the embodiment of the present application, the first specified number may be set according to actual needs, and the embodiment of the present application is not specifically limited.
For example, suppose the first specified number is 5 and the 5 punch-card locations recorded by the guardian's vehicle during a certain movement are "Toyota 4S store", "Haili stainless steel pipe hardware store", "Beihai Xinkaiyuan property", "lubricating oil specialty store" and "Xingyuan Hotel". The vehicle determines that the number of punch-card locations recorded during that movement has reached the first specified number, generates the first personal punch-card location track shown in fig. 3, composed of these 5 punch-card locations, and stores it in the on-board computer of the guardian's vehicle.
In the embodiment of the application, every location contained in the first personal punch-card location track is presented in the form of a location name (such as "Toyota 4S store").
For example, for the two adjacent punch-card locations "Toyota 4S store" and "Haili stainless steel pipe hardware store" in the first personal punch-card location track shown in fig. 3, the service device may determine all locations situated between them and randomly select 2 of them, "Yuefeng automobile leather upholstery" and "Weijitang", as non-punch-card locations to be displayed between the two. Likewise, for the adjacent punch-card locations "Haili stainless steel pipe hardware store" and "Beihai Xinkaiyuan property", it may randomly select "Mammy postpartum recovery center" and "North Bay insurance"; for "Beihai Xinkaiyuan property" and "lubricating oil specialty store", it may randomly select "Yuqingquan shop" and "Liuwanda Baicao hall"; and for "lubricating oil specialty store" and "Xingyuan Hotel", it may randomly select "tobacco and wine consignment shop" and "Laozhong hotpot city". The second personal punch-card location track shown in fig. 4 is thereby formed.
It can be understood that the third personal punch-card location track is considered identical to the first personal punch-card location track when the service device verifies that the two tracks are the same, that the third specified number is equal to the first specified number, and that the set formed by the third specified number of target locations is identical to the set formed by the first specified number of punch-card locations. For example, the service device performs the step of sending the personal identity information of the checked person, the real-time video image and the diagnostic voice to the guardian device together only if it verifies that the third personal punch-card location track consists exactly of the 5 punch-card locations "Toyota 4S store", "Haili stainless steel pipe hardware store", "Beihai Xinkaiyuan property", "lubricating oil specialty store" and "Xingyuan Hotel" and includes no other locations.
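The challenge-response of steps C11-C15 can be sketched as follows, with tracks represented as plain lists of place names; the candidate-place pool and the insertion count of 2 are illustrative values matching the example above, not fixed by the patent.

```python
import random

def build_second_track(first_track, candidate_places, second_specified_number=2):
    """Step C12: insert randomly chosen non-punch-card places between every two
    adjacent punch-card places of the first personal punch-card location track."""
    second_track = [first_track[0]]
    for prev_place, next_place in zip(first_track, first_track[1:]):
        pool = candidate_places.get((prev_place, next_place), [])
        second_track += random.sample(pool, min(second_specified_number, len(pool)))
        second_track.append(next_place)
    return second_track

def verify_guardian_selection(third_track, first_track):
    """Step C15: release the exam data only when the guardian's selection reproduces
    the original track exactly (same order, same count, same set of places)."""
    return (third_track == first_track
            and len(third_track) == len(first_track)
            and set(third_track) == set(first_track))
```

The payment-location track check in the next optional implementation follows the same challenge-response pattern, with payment locations taking the place of punch-card locations.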
As an optional implementation, in the foregoing step 207, after the VR smart glasses determine that, within the second specified duration, they have received the response with which the oral practitioner guiding the certain doctor accepts, through the first doctor terminal, the examinee oral-examination assistance permission request, and before the VR smart glasses perform steps 208 to 209, the VR smart glasses may further perform the following steps:
The VR intelligent glasses acquire first payment location tracks which are issued to the VR intelligent glasses in advance by first doctor terminals corresponding to the oral medical practitioners guiding the doctor; the first payment place track comprises a first appointed number of payment places, and any two payment places in the first appointed number of payment places are different from each other;
The VR intelligent glasses insert a second appointed number of non-payment places between every two adjacent payment places in the first payment place track to form a second payment place track;
the VR intelligent glasses send the second payment location track to a first doctor terminal corresponding to the oral practitioner guiding the doctor;
The VR intelligent glasses acquire a third payment location track sent by a first doctor terminal corresponding to the oral practitioner guiding the doctor; wherein the third payment location trajectory consists of a selected third specified number of locations in the second payment location trajectory;
The VR smart glasses perform steps 208-209 only when the VR smart glasses verify that the third payment location track is the same as the first payment location track, and the third specified number is equal to the first specified number and the set of the third specified number of locations included in the third payment location track is the same as the set of the first specified number of payment locations.
In fact, implementing the above steps improves the accuracy with which the VR smart glasses decide to send the shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the person under inspection, which is favourable for reducing the power consumption of the VR smart glasses and prolonging their battery endurance time.
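For illustration, here is a minimal, hypothetical Python sketch of the glasses-side round trip described in the steps above; insert_decoy_places, send_to_terminal and receive_from_terminal are assumed helpers standing in for the place-insertion and terminal-communication details, which the embodiment does not specify at code level.

```python
def payment_track_check(first_track: list[str],
                        insert_decoy_places,       # builds the second payment-place track
                        send_to_terminal,          # sends it to the first doctor terminal
                        receive_from_terminal) -> bool:
    # Insert non-payment places between adjacent payment places to form the second track.
    second_track = insert_decoy_places(first_track)
    send_to_terminal(second_track)
    third_track = receive_from_terminal()          # places selected back on the doctor terminal
    # The shooting instruction is sent only when the selection reproduces the first track exactly.
    return len(third_track) == len(first_track) and set(third_track) == set(first_track)
```

Only when this check returns a positive result would steps 208 to 209 be carried out, which is what keeps the external miniature shooting module from being triggered spuriously.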
Therefore, by implementing the VR-smart-glasses-based oral examination method described in fig. 2, the wearer of the VR smart glasses can perform the oral examination on the person under inspection by watching the real-time video image with the head up and the line of sight level, so that the doctor is spared from frequently lowering the head to examine the examinee's oral cavity, and the occurrence of cervical spondylosis can be reduced.
In addition, implementing the VR-smart-glasses-based oral examination method described in fig. 2 allows only the guardian bound to the personal identity information of the examinee to view the real-time video image and diagnostic voice recorded during the oral examination, preventing this private information from being disclosed to anyone other than the guardian.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a first embodiment of an oral examination system based on VR smart glasses according to an embodiment of the present application. The oral examination system based on VR smart glasses depicted in fig. 5 includes VR smart glasses 500 and an external miniature shooting module 501; the VR smart glasses 500 maintain a communication connection with the external miniature shooting module 501 through a flexible connecting wire 503. The VR smart glasses 500 include:
A detection unit 5001, configured to detect, when detecting that a state of the VR intelligent glasses changes from a non-worn state to a worn state, whether an eye image of a wearer is acquired within a first specified duration through an internal miniature camera module disposed on an inner wall of a frame of the VR intelligent glasses;
An extracting unit 5002 for extracting iris feature data from an eye image of a wearer when the detecting unit 5001 detects that an eye image of the wearer is acquired within the first specified period of time;
a verification unit 5003, configured to verify whether the iris feature data matches iris data of a doctor stored in advance;
A sending unit 5004, configured to send a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection when the verification result of the verification unit 5003 is a match;
and an acquisition unit 5005, configured to acquire and output a real-time video image obtained by the external miniature shooting module shooting the oral cavity of the person under inspection, so that the wearer performs the oral examination on the person under inspection through the real-time video image in a head-up, eye-level posture.
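Purely as an illustrative sketch (the embodiment defines functional units, not code), the cooperation of units 5001 to 5005 could be organised as below; the camera objects, the feature-extraction callable and the iris-matching callable are assumptions injected from outside.

```python
class VRSmartGlasses500:
    """Sketch of the unit structure: detection, extraction, verification, sending, acquisition."""

    def __init__(self, inner_camera, outer_camera, extract_iris_features, match_doctor_iris):
        self.inner_camera = inner_camera            # internal miniature camera module on the frame's inner wall
        self.outer_camera = outer_camera            # external miniature shooting module 501
        self.extract_iris_features = extract_iris_features
        self.match_doctor_iris = match_doctor_iris  # compares against pre-stored doctor iris data

    def on_worn(self, first_specified_duration_s: float):
        # Detection unit 5001: try to capture an eye image within the first specified duration.
        eye_image = self.inner_camera.capture(timeout=first_specified_duration_s)
        if eye_image is None:
            return None
        # Extraction unit 5002 and verification unit 5003.
        features = self.extract_iris_features(eye_image)
        if not self.match_doctor_iris(features):
            return None
        # Sending unit 5004: trigger the external module to shoot the examinee's oral cavity.
        self.outer_camera.send_shooting_instruction()
        # Acquisition unit 5005: return the real-time video stream for head-up viewing.
        return self.outer_camera.real_time_stream()
```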
In the embodiment of the present application, the VR intelligent glasses 500 may further perform other steps and functions in the previous method embodiments, which are not described herein.
Therefore, the VR-smart-glasses-based oral examination system described in FIG. 5 enables the wearer of the VR smart glasses to examine the oral cavity of the person under inspection by watching the real-time video image with the head up and the line of sight level, so that the doctor is spared from frequently lowering the head to perform the oral examination, and the occurrence of cervical spondylosis can be reduced.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a second embodiment of an oral examination system based on VR smart glasses according to an embodiment of the present application. The oral examination system based on VR smart glasses depicted in fig. 6 includes VR smart glasses 600 and an external miniature shooting module 601; the VR smart glasses 600 maintain a communication connection with the external miniature shooting module 601 through a flexible connecting wire 603. The VR smart glasses 600 include:
a memory 6001 in which executable program code is stored;
A processor 6002 coupled to the memory 6001;
the processor 6002 calls the executable program code stored in the memory 6001 and performs the following steps:
Detecting whether eye images of a wearer are acquired within a first appointed duration or not through an internal miniature camera module arranged on the inner wall of a glasses frame of the VR intelligent glasses when detecting that the state of the VR intelligent glasses is changed from an unworn state to a worn state; if the eye images of the wearer are acquired within the first appointed duration, extracting iris characteristic data from the eye images of the wearer;
Checking whether the iris characteristic data is matched with iris data of a doctor stored in advance, and if so, sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person;
And acquiring and outputting a real-time video image obtained by the external miniature shooting module shooting the oral cavity of the person under inspection, so that the wearer performs the oral examination on the person under inspection through the real-time video image in a head-up, eye-level posture.
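The embodiments do not specify how the match against the pre-stored doctor iris data is computed; a common choice, shown here only as an assumed sketch, is a normalised Hamming distance between binary iris codes with a fixed decision threshold (0.32 is an illustrative value, not one taken from the text).

```python
import numpy as np

def match_doctor(iris_code: np.ndarray, enrolled: dict[str, np.ndarray],
                 threshold: float = 0.32) -> str | None:
    """Return the ID of the enrolled doctor whose iris code is closest, or None if no match."""
    best_id, best_dist = None, 1.0
    for doctor_id, code in enrolled.items():
        # Fraction of disagreeing bits between the captured and the enrolled iris code.
        dist = np.count_nonzero(iris_code != code) / iris_code.size
        if dist < best_dist:
            best_id, best_dist = doctor_id, dist
    return best_id if best_dist <= threshold else None
```

A successful match yields a doctor identity, which the later steps use to decide whether the wearer is an oral practitioner and to locate the corresponding second doctor terminal.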
As an alternative embodiment, after verifying that the iris feature data matches the iris data of a certain doctor stored in advance, the processor 6002 further performs the steps of:
and identifying whether the doctor belongs to an oral medical practitioner according to the iris data matching of the doctor, and if so, executing the step of sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person.
As an alternative embodiment, the processor 6002 further performs the steps of:
if the doctor is identified not to belong to the oral practitioner, identifying the oral practitioner guiding the doctor according to the iris data matching of the doctor;
Transmitting an oral examination assistance permission request of an examinee to a first doctor terminal corresponding to an oral medical practitioner guiding the doctor;
Judging whether a response with which the oral practitioner guiding the certain doctor accepts the examinee oral-examination assistance permission request is received from the first doctor terminal within a second specified duration, and if so, executing the step of sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection.
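A minimal sketch, with assumed helper callables, of the branch just described: identify the supervising oral practitioner from the matched iris data, ask the first doctor terminal for assistance permission, and shoot only if a positive response arrives within the second specified duration.

```python
import time

def assist_permission_then_shoot(doctor_id: str, is_oral_practitioner, supervisor_of,
                                 send_assist_request, poll_response, shoot,
                                 second_specified_duration_s: float) -> bool:
    if is_oral_practitioner(doctor_id):
        shoot()                                          # practitioner path: shoot directly
        return True
    supervisor_id = supervisor_of(doctor_id)             # oral practitioner guiding the doctor
    send_assist_request(supervisor_id)                   # request sent to the first doctor terminal
    deadline = time.monotonic() + second_specified_duration_s
    while time.monotonic() < deadline:
        if poll_response(supervisor_id):                  # permission granted
            shoot()
            return True
        time.sleep(0.5)
    return False                                          # no response in time: do not shoot
```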
As an alternative embodiment, after verifying that the iris feature data matches the iris data of a certain doctor stored in advance, the processor 6002 further performs the steps of:
determining a second doctor terminal corresponding to the doctor according to the iris data matching of the doctor;
after acquiring and outputting a real-time video image obtained by the external miniature photographing module photographing the oral cavity of the examinee, the processor 6002 further performs the following steps:
synchronously storing the real-time video image and the diagnostic voice uttered by the wearer while performing the oral examination on the person under inspection through the real-time video image in a head-up posture, so that, after the external miniature shooting module has shot the oral cavity of the person under inspection, the second doctor terminal obtains the personal identity information of the person under inspection and establishes a mapping relation among the personal identity information, the real-time video image and the diagnostic voice;
When the doctor is an oral practitioner, the second doctor terminal uploads the mapping relation among the personal identity information of the person under inspection, the real-time video image and the diagnostic voice to a service device;

Or, when the doctor is not an oral practitioner, the second doctor terminal uploads the mapping relation to the first doctor terminal corresponding to the oral practitioner guiding the doctor, so that the oral practitioner guiding the doctor reviews the mapping relation among the personal identity information of the person under inspection, the real-time video image and the diagnostic voice on the first doctor terminal, after which the first doctor terminal uploads the mapping relation to the service device.
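As an assumed sketch (the dictionary layout and the upload callables are illustrative, not part of the disclosure), the second doctor terminal could bundle and route the mapping relation like this:

```python
def upload_mapping(examinee_identity: str, real_time_video: str, diagnostic_voice: str,
                   wearer_is_oral_practitioner: bool,
                   upload_to_service_device, upload_to_first_doctor_terminal) -> None:
    mapping = {
        "examinee_identity": examinee_identity,
        "real_time_video": real_time_video,       # e.g. a stored file path or object key
        "diagnostic_voice": diagnostic_voice,
    }
    if wearer_is_oral_practitioner:
        upload_to_service_device(mapping)              # uploaded directly to the service device
    else:
        upload_to_first_doctor_terminal(mapping)       # reviewed by the supervising practitioner first
```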
As an optional implementation, where the person under inspection is a student for whom a certain school organizes the oral examination, the service device is a service device set up by the school, and the VR smart glasses and the external miniature shooting module are both located in the school infirmary, then:
the service equipment acquires the personal identity information of the checked person from the mapping relation and determines guardian equipment bound with the personal identity information of the checked person based on the personal identity information of the checked person;
The service device sends an inquiry message comprising the personal identity information of the person under inspection to the guardian device bound to that personal identity information, wherein the inquiry message asks whether the guardian needs to view the real-time video image and diagnostic voice of the examinee's oral examination;
And the service device judges whether, within a third specified duration, it receives a reply sent by the guardian device indicating that the guardian needs to view the real-time video image and the diagnostic voice of the examinee's oral examination; if such a reply is received, the service device sends the personal identity information of the person under inspection, the real-time video image and the diagnostic voice to the guardian device together.
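For illustration only, a compact sketch of the service-device flow above, with the device lookup and messaging calls left as assumptions:

```python
def notify_guardian(mapping: dict, guardian_device_of, send_inquiry, wait_for_reply,
                    send_material, third_specified_duration_s: float) -> bool:
    guardian = guardian_device_of(mapping["examinee_identity"])   # device bound to the examinee's identity
    send_inquiry(guardian, mapping["examinee_identity"])          # "view the video and diagnostic voice?"
    if wait_for_reply(guardian, timeout=third_specified_duration_s):
        send_material(guardian, mapping)                          # identity + video + diagnostic voice together
        return True
    return False
```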
As an optional implementation, after the service device determines that, within the third specified duration, it has received the reply sent by the guardian device bound to the personal identity information of the person under inspection indicating that the guardian needs to view the real-time video image and the diagnostic voice of the oral examination, the following steps are further executed:
The service equipment acquires a first personal card punching locus track of guardian equipment bound with personal identity information of the checked person in the last moving process; the first personal card punching locus comprises a first appointed number of card punching loci, and any two card punching loci in the first appointed number of card punching loci are different from each other;
The service equipment inserts a second appointed number of non-punching points between every two adjacent punching points in the first personal punching point track to form a second personal punching point track;
the service equipment sends the second personal card-punching locus track to guardian equipment bound with personal identity information of the checked person;
The service equipment acquires a third personal card-punching locus track sent by guardian equipment bound with personal identity information of the checked person; wherein the third person punch-card location track is composed of a third designated number of target locations selected from the second person punch-card location track;
The service device performs the step of transmitting the personal identity information of the person under inspection, the real-time video image and the diagnostic voice to the guardian device together only when it verifies that the third personal punch-card place track is identical to the first personal punch-card place track, i.e. that the third specified number is equal to the first specified number and the set of the third specified number of target places contained in the third personal punch-card place track is identical to the set of the first specified number of punch-card places;
The service device inserts a second designated number of non-punch-out sites between every two adjacent punch-out sites in the first personal punch-out site track to form a second personal punch-out site track, comprising:
For each pair of adjacent punch-card places in the first personal punch-card place track, the service device determines all places located between the two, randomly selects a second specified number of them as non-punch-card places to be displayed between the pair, and thereby forms the second personal punch-card place track.
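A minimal sketch of the insertion rule in the preceding paragraph; places_between is a hypothetical lookup returning every place lying between two adjacent punch-card places on the recorded route, and the second specified number is passed as k.

```python
import random

def insert_non_punch_places(first_track: list[str], places_between, k: int) -> list[str]:
    second_track = [first_track[0]]
    for prev, nxt in zip(first_track, first_track[1:]):
        candidates = places_between(prev, nxt)              # all places located between the adjacent pair
        second_track.extend(random.sample(candidates, k))   # k randomly chosen non-punch-card places
        second_track.append(nxt)
    return second_track
```

With the FIG. 3 track and k = 2, this produces a track of the shape shown in FIG. 4.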
As an alternative embodiment, after the processor 6002 determines that the response with which the oral practitioner guiding the certain doctor accepts the examinee oral-examination assistance permission request has been received from the first doctor terminal within the second specified duration, and before sending a shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the person under inspection, the following steps may be further performed:
Acquiring a first payment place track issued to the VR smart glasses in advance by the first doctor terminal corresponding to the oral practitioner guiding the certain doctor; the first payment place track comprises a first specified number of payment places, and any two payment places in the first specified number of payment places are different from each other;
inserting a second specified number of non-payment sites between every two adjacent payment sites in the first payment site trajectory to form a second payment site trajectory;
transmitting the second payment location track to a first doctor terminal corresponding to the oral practitioner guiding the doctor;
acquiring a third payment location track transmitted by a first doctor terminal corresponding to the oral practitioner guiding the doctor; wherein the third payment location trajectory consists of a selected third specified number of locations in the second payment location trajectory;
When it is verified that the third payment location track is the same as the first payment location track, the third specified number is equal to the first specified number, and the set of the third specified number of locations included in the third payment location track is the same as the set of the first specified number of payment locations, the VR intelligent glasses send a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the checked person.
In fact, implementing the above steps improves the accuracy with which the VR smart glasses decide to send the shooting instruction to the external miniature shooting module to trigger it to shoot the oral cavity of the person under inspection, which is favourable for reducing the power consumption of the VR smart glasses and prolonging their battery endurance time.
Therefore, the VR-smart-glasses-based oral examination system described in FIG. 6 enables the wearer of the VR smart glasses to examine the oral cavity of the person under inspection by watching the real-time video image with the head up and the line of sight level, so that the doctor is spared from frequently lowering the head to perform the oral examination, and the occurrence of cervical spondylosis can be reduced.
In addition, implementing the VR-smart-glasses-based oral examination system described in fig. 6 allows only the guardian bound to the personal identity information of the examinee to view the real-time video image and diagnostic voice recorded during the oral examination, preventing this private information from being disclosed to anyone other than the guardian.
An embodiment of the invention discloses a computer-readable storage medium storing a computer program, wherein the computer program, when run, causes a computer to perform some or all of the steps of the methods in the above method embodiments.
Embodiments of the invention disclose a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform part or all of the steps of the method as in the method embodiments above.
The embodiment of the invention discloses an application release platform which is used for releasing a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of the method in the embodiment of the method.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware. The program may be stored in a computer-readable storage medium, including a read-only memory (ROM), a random access memory (RAM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a one-time programmable read-only memory (OTPROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disk memory, magnetic disk memory, tape memory, or any other medium that can be used for carrying or storing data.
The oral examination method and system based on VR smart glasses disclosed in the embodiments of the invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the invention, and the description of the embodiments is intended only to help understand the method and its core idea. Meanwhile, those skilled in the art may make changes to the specific implementation and scope of application according to the idea of the invention. In summary, the content of this description should not be construed as limiting the invention.

Claims (5)

The VR smart glasses check whether the iris feature data match the iris data of a certain doctor stored in advance; if so, the VR smart glasses identify whether the doctor is an oral practitioner based on the matched iris data of the doctor, and if so, the VR smart glasses send a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection; if it is identified that the doctor is not an oral practitioner, the oral practitioner guiding the doctor is identified based on the matched iris data of the doctor; an oral-examination assistance permission request of the examinee is transmitted to a first doctor terminal corresponding to the oral practitioner guiding the doctor; it is judged whether a response with which the oral practitioner guiding the doctor accepts the examinee oral-examination assistance permission request is received from the first doctor terminal within a second specified duration, and if so, the step of sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection is executed;
After the VR smart glasses judge that the response with which the oral practitioner guiding the doctor accepts the examinee oral-examination assistance permission request has been received from the first doctor terminal within the second specified duration, a first payment place track issued to the VR smart glasses in advance by the first doctor terminal corresponding to the oral practitioner guiding the doctor is acquired; the first payment place track comprises a first specified number of payment places, and any two payment places in the first specified number of payment places are different from each other; a second specified number of non-payment places are inserted between every two adjacent payment places in the first payment place track to form a second payment place track; the second payment place track is transmitted to the first doctor terminal corresponding to the oral practitioner guiding the doctor; a third payment place track transmitted by the first doctor terminal corresponding to the oral practitioner guiding the doctor is acquired, wherein the third payment place track consists of a third specified number of places selected from the second payment place track; and when it is verified that the third payment place track is identical to the first payment place track, namely that the third specified number is equal to the first specified number and the set of the third specified number of places contained in the third payment place track is identical to the set of the first specified number of payment places, the step of sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection is executed.
Checking whether the iris feature data match the iris data of a certain doctor stored in advance; if so, identifying whether the doctor is an oral practitioner based on the matched iris data of the doctor, and if so, sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection; if it is identified that the doctor is not an oral practitioner, identifying the oral practitioner guiding the doctor based on the matched iris data of the doctor; transmitting an oral-examination assistance permission request of the examinee to a first doctor terminal corresponding to the oral practitioner guiding the doctor; judging whether a response with which the oral practitioner guiding the doctor accepts the examinee oral-examination assistance permission request is received from the first doctor terminal within a second specified duration, and if so, executing the step of sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection; acquiring and outputting a real-time video image obtained by the external miniature shooting module shooting the oral cavity of the person under inspection, so that the wearer performs the oral examination on the person under inspection through the real-time video image in a head-up, eye-level posture;
After the VR smart glasses judge that the response with which the oral practitioner guiding the doctor accepts the examinee oral-examination assistance permission request has been received from the first doctor terminal within the second specified duration, acquiring a first payment place track issued to the VR smart glasses in advance by the first doctor terminal corresponding to the oral practitioner guiding the doctor; the first payment place track comprises a first specified number of payment places, and any two payment places in the first specified number of payment places are different from each other; inserting a second specified number of non-payment places between every two adjacent payment places in the first payment place track to form a second payment place track; transmitting the second payment place track to the first doctor terminal corresponding to the oral practitioner guiding the doctor; acquiring a third payment place track transmitted by the first doctor terminal corresponding to the oral practitioner guiding the doctor, wherein the third payment place track consists of a third specified number of places selected from the second payment place track; and when it is verified that the third payment place track is identical to the first payment place track, namely that the third specified number is equal to the first specified number and the set of the third specified number of places contained in the third payment place track is identical to the set of the first specified number of payment places, executing the step of sending a shooting instruction to the external miniature shooting module to trigger the external miniature shooting module to shoot the oral cavity of the person under inspection.