Disclosure of Invention
Based on the above problems existing in the prior art, an object of the embodiment of the present invention is to provide a method and a system for pushing a multi-modal VR advertisement based on face recognition.
In order to achieve the above object, the technical solution adopted by the present invention is a multi-modal VR advertisement push method based on face recognition, comprising the following steps:
S1, acquiring multi-modal data of a user in a virtual reality environment, and preprocessing the multi-modal data;
S2, extracting features from the preprocessed multi-modal data by adopting a deep learning network model;
S3, performing dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector;
S4, constructing and training a multi-modal VR advertisement prediction model, inputting the fused multi-modal feature vector into the multi-modal VR advertisement prediction model, and outputting a user interest tag and a behavior prediction result;
S5, constructing an advertisement database, and selecting corresponding advertisement content to be embedded into a VR scene for pushing based on the current interest tag and historical behavior data of the user;
S6, acquiring interaction data of the user on the pushed advertisement, and dynamically adjusting the weights of the advertisement matching algorithm according to feedback from the user's interaction data.
Further, the multi-modal data comprises image data, voice data and haptic data. The image data acquisition comprises adopting at least two high-resolution RGB cameras, respectively arranged on two sides of the front of the VR headset, for capturing the user's facial expressions, eye movements and head posture in real time; the haptic data acquisition comprises configuring a piezoelectric haptic sensor on the VR handle for detecting the user's grip force and operation gestures.
Further, the preprocessing of the multi-modal data includes compression processing of the image data, noise reduction processing of the voice data, and normalization processing of the haptic data.
Further, the extracting features from the preprocessed multi-modal data by adopting a deep learning network model includes:
Step S21, performing feature extraction on the image data by adopting a deep learning convolutional neural network, wherein the convolutional neural network adopts ResNet-50;
Step S22, performing feature extraction on the voice data by adopting a deep learning recurrent neural network, wherein the recurrent neural network adopts a long short-term memory (LSTM) network;
Step S23, performing feature extraction on the haptic data by adopting a one-dimensional convolutional neural network.
Further, the performing dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector includes:
Step S31, determining the current scene of the user, and dynamically updating the weight coefficients by an exponential smoothing method;
The exponential smoothing method updates the weight coefficients according to:

w_m^t = α · w_m^(t−1) + (1 − α) · w_m^pre

wherein w_m^t denotes the weight coefficient of the m-th modality (e.g. visual, auditory, tactile) at time t, w_m^(t−1) denotes the weight coefficient of the m-th modality at time t−1, α is the smoothing coefficient, and w_m^pre is the scene preset weight of the m-th modality;
Step S32, mapping the visual, auditory and tactile feature vectors to a unified dimension through fully connected layers, and then screening key features through an attention mechanism to generate the fused multi-modal feature vector.
Further, the constructing and training a multi-modal VR advertisement prediction model, inputting the fused multi-modal feature vector into the multi-modal VR advertisement prediction model, and outputting a user interest tag and a behavior prediction result includes:
Step S41, the multi-modal VR advertisement prediction model adopts a hybrid model architecture combining a support vector machine and a deep neural network;
Step S42, training the multi-modal VR advertisement prediction model by using a large amount of labeled multi-modal data;
Step S43, taking the fused multi-modal feature vector, comprising visual features, auditory features and haptic features, as the input of the model, integrating the multi-modal feature vector through a fully connected layer to ensure that each feature can influence the model's prediction result, and outputting the user interest tag and the behavior prediction result;
Step S44, adopting an online learning mechanism, whereby the multi-modal VR advertisement prediction model can receive new user data in real time and dynamically update its parameters to adapt to changes in user preference.
Further, the acquiring interaction data of the user on the pushed advertisement, and dynamically adjusting the weights of the advertisement matching algorithm according to feedback from the user's interaction data, includes:
Step S61, acquiring interaction data of the user on the pushed advertisement, wherein the interaction data comprises click-through rate, dwell time, haptic feedback intensity and voice evaluation;
Step S62, performing in-depth analysis on the collected interaction data and extracting key features;
Step S63, adopting a reinforcement learning framework, and dynamically adjusting the weights of the advertisement matching algorithm according to user feedback.
An embodiment of the present invention further provides a multi-modal VR advertisement push system based on face recognition, applied to the above multi-modal VR advertisement push method based on face recognition, the system comprising:
a data acquisition module, configured to acquire multi-modal data of a user in a virtual reality environment and preprocess the multi-modal data;
a feature extraction module, configured to extract features from the preprocessed multi-modal data by adopting a deep learning network model;
a weighted fusion module, configured to perform dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector;
a modeling training module, configured to construct and train a multi-modal VR advertisement prediction model;
a behavior prediction module, configured to input the fused multi-modal feature vector into the multi-modal VR advertisement prediction model and output a user interest tag and a behavior prediction result;
an advertisement push module, configured to construct an advertisement database, and select corresponding advertisement content to be embedded into the VR scene for pushing based on the current interest tag and historical behavior data of the user;
and an algorithm optimization module, configured to acquire interaction data of the user on the pushed advertisement and dynamically adjust the weights of the advertisement matching algorithm according to feedback from the user's interaction data.
The embodiment of the invention further provides a network-side server, comprising:
at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the above multi-modal VR advertisement push method based on face recognition.
The embodiment of the invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the above multi-modal VR advertisement push method based on face recognition.
The multi-modal VR advertisement push method based on face recognition has the following beneficial effects. Multi-modal data of a user in a virtual reality environment is collected and preprocessed; a deep learning network model is adopted to extract features from the preprocessed multi-modal data; dynamic weighted fusion is performed on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector; a multi-modal VR advertisement prediction model is constructed and trained, the fused multi-modal feature vector is input into the model, and a user interest tag and a behavior prediction result are output; an advertisement database is constructed, and corresponding advertisement content is selected and embedded into the VR scene for pushing based on the user's current interest tag and historical behavior data; and interaction data of the user on the pushed advertisement is acquired, and the weights of the advertisement matching algorithm are dynamically adjusted according to feedback from the interaction data. By collecting and preprocessing multi-modal data, performing dynamic weighted fusion after accurate feature extraction by the deep learning model, outputting user interest and behavior predictions in real time through the prediction model and an online learning mechanism, pushing personalized advertisements by means of a multi-dimensional tag advertisement library and a dynamic matching algorithm, and finally performing closed-loop optimization according to user interaction feedback, the method realizes more accurate and personalized advertisement push, improves the advertisement conversion rate, enhances the user experience, and provides advertisers with a more efficient advertisement delivery platform.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
First embodiment:
The first embodiment of the invention provides a multi-modal VR advertisement push method based on face recognition. The method comprises: collecting multi-modal data of a user in a virtual reality environment and preprocessing the multi-modal data; extracting features from the preprocessed multi-modal data by adopting a deep learning network model; performing dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector; constructing and training a multi-modal VR advertisement prediction model, inputting the fused multi-modal feature vector into the multi-modal VR advertisement prediction model, and outputting a user interest tag and a behavior prediction result; constructing an advertisement database, and selecting corresponding advertisement content to be embedded into the VR scene for pushing based on the user's current interest tag and historical behavior data; and acquiring interaction data of the user on the pushed advertisement, and dynamically adjusting the weights of the advertisement matching algorithm according to feedback from the user's interaction data. By collecting and preprocessing multi-modal data, performing dynamic weighted fusion after accurate feature extraction by the deep learning model, outputting user interest and behavior predictions in real time through the prediction model and an online learning mechanism, pushing personalized advertisements by means of a multi-dimensional tag advertisement library and a dynamic matching algorithm, and finally performing closed-loop optimization according to user interaction feedback, the method realizes more accurate and personalized advertisement push, improves the advertisement conversion rate, enhances the user experience, and provides advertisers with a more efficient advertisement delivery platform.
The implementation details of the multi-modal VR advertisement push method based on face recognition in the present embodiment are specifically described below. The following details are provided for ease of understanding only and are not necessary for implementing the present embodiment; a specific flow of the present embodiment is shown in fig. 1.
Step S1, acquiring multi-modal data of a user in a virtual reality environment, and preprocessing the multi-modal data.
Specifically, the multi-modal data includes image data, voice data, and haptic data.
The image data acquisition comprises adopting at least two high-resolution RGB cameras, respectively arranged on two sides of the front of the VR headset, for capturing the user's facial expressions, eye movements and head posture in real time.
The voice data acquisition comprises configuring four high-sensitivity microphones distributed along the edge of the VR headset for capturing the user's voice instructions and ambient sounds.
The haptic data acquisition comprises configuring a piezoelectric haptic sensor on the VR handle for detecting the user's grip force and operation gestures.
Preprocessing the multi-modal data includes compressing the image data, denoising the voice data, and normalizing the haptic data.
Specifically, because the collected image data has a large data volume, directly processing it would incur high computational cost and transmission delay. The H.264 coding standard is therefore adopted for inter-frame compression of the video stream, which significantly reduces the required transmission bandwidth while preserving video quality. During compression, an image denoising algorithm is applied; taking the non-local means filtering algorithm as an example, motion blur and noise are effectively eliminated by analyzing similar regions within the image, while the detail and edge information of the image are preserved. This provides clearer image data for subsequent feature extraction and ensures that the system can accurately capture the user's facial expressions and head posture.
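As an illustrative sketch (not part of the original disclosure), the non-local means denoising step could be realized with OpenCV; the filter parameters below are assumptions chosen for readability.

```python
# Hypothetical sketch of the non-local means denoising step, assuming each
# decoded H.264 frame arrives as an 8-bit BGR NumPy array.
import cv2
import numpy as np

def denoise_frame(frame: np.ndarray) -> np.ndarray:
    # h=10 / hColor=10 set the filter strength; 7 and 21 are the usual
    # template and search window sizes. All values are illustrative.
    return cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)
```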
Spectral subtraction is adopted to eliminate environmental noise. By analyzing the spectral characteristics of the voice signal, spectral subtraction effectively removes background noise, improves the accuracy of speech recognition, and provides high-quality data for subsequent voice feature extraction.
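A minimal spectral-subtraction sketch follows, assuming the first few STFT frames are speech-free and can serve as the noise estimate; the frame count, FFT size and spectral floor are assumptions, not values from the disclosure.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x: np.ndarray, fs: int = 16000,
                      noise_frames: int = 10) -> np.ndarray:
    """Suppress stationary background noise in a 1-D speech signal."""
    _, _, Z = stft(x, fs=fs, nperseg=512)
    mag, phase = np.abs(Z), np.angle(Z)
    noise = mag[:, :noise_frames].mean(axis=1, keepdims=True)  # noise spectrum
    clean = np.maximum(mag - noise, 0.05 * mag)                # spectral floor
    _, y = istft(clean * np.exp(1j * phase), fs=fs, nperseg=512)
    return y
```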
Because the sensitivity of the haptic sensors may differ between VR handles, normalization of the haptic data eliminates sensitivity differences between sensors and produces normalized haptic intensity vectors. By mapping haptic signal values into a uniform range, normalization ensures the comparability of data collected by different sensors and provides standardized data support for subsequent haptic feature extraction and behavior analysis.
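The normalization itself can be as simple as a per-sensor min-max mapping; the sketch below assumes the calibration bounds of each sensor are known.

```python
import numpy as np

def normalize_haptic(signal: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Map raw haptic readings into the unified range [0, 1]."""
    return np.clip((signal - lo) / (hi - lo), 0.0, 1.0)
```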
Step S2, extracting features from the preprocessed multi-modal data by adopting a deep learning network model.
Specifically, the steps of extracting features from the preprocessed multi-modal data by adopting the deep learning network model include:
Step S21, performing feature extraction on the image data by adopting a deep learning convolutional neural network, wherein the convolutional neural network adopts ResNet-50.
Specifically, key facial expression points of the user, such as the curvature of the mouth corners and the eye contours, are extracted; these key points can reflect the user's emotional state and focus of attention. As an example, rising mouth corners may indicate that the user is in a pleasant state, while a change in the eye contour may indicate the user's degree of concentration. In addition, the user's head movement trajectory features are extracted through the convolutional neural network; the head movement trajectory can reflect the user's visual focus and behavioral habits. For example, frequently turning the head may indicate curiosity about the surrounding environment, while a prolonged gaze in a certain direction may indicate interest in that direction.
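For illustration only, image features could be obtained from a pretrained ResNet-50 by replacing its classification head, as in the following sketch; torchvision default weights and preprocessing are assumptions, since the disclosure does not specify them.

```python
import torch
from torch import nn
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = nn.Identity()          # keep the 2048-d pooled features
backbone.eval()
preprocess = weights.transforms()    # resize, crop, normalize

@torch.no_grad()
def image_features(frame_tensor: torch.Tensor) -> torch.Tensor:
    """frame_tensor: (3, H, W) image tensor -> (2048,) feature vector."""
    return backbone(preprocess(frame_tensor).unsqueeze(0)).squeeze(0)
```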
Step S22, performing feature extraction on the voice data by adopting a deep learning recurrent neural network, wherein the recurrent neural network adopts a long short-term memory network.
Specifically, the long short-term memory network is used to extract time-series emotion features, such as intonation and speech rate, from the voice signal; these features can reflect the user's emotional state. For example, a rise in intonation may indicate that the user is excited, while an increase in speech rate may indicate that the user is tense.
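A minimal LSTM encoder sketch is shown below; the per-frame acoustic features (e.g. 40-dimensional MFCCs) and the hidden size are assumptions, as the disclosure does not fix them.

```python
import torch
from torch import nn

class SpeechEncoder(nn.Module):
    def __init__(self, n_mfcc: int = 40, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_mfcc, hidden_size=hidden,
                            batch_first=True)

    def forward(self, mfcc: torch.Tensor) -> torch.Tensor:
        """mfcc: (batch, frames, n_mfcc) -> (batch, hidden) utterance vector."""
        _, (h_n, _) = self.lstm(mfcc)   # h_n: (1, batch, hidden)
        return h_n[-1]                  # final hidden state of the last layer
```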
Step S23, performing feature extraction on the haptic data by adopting a one-dimensional convolutional neural network.
Time-series features in the haptic data, such as click frequency and sliding direction, are automatically learned by the one-dimensional convolutional neural network; these features can reflect the user's operation habits and preferences.
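As a sketch of this step, a small one-dimensional CNN can pool over the time axis of the haptic signal; the channel counts and kernel sizes below are illustrative assumptions.

```python
import torch
from torch import nn

class HapticEncoder(nn.Module):
    def __init__(self, in_channels: int = 3, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, out_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # pool over the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, channels, timesteps) -> (batch, out_dim)."""
        return self.net(x).squeeze(-1)
```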
Step S3, performing dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector.
Specifically, the steps of performing dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector include:
Step S31, determining the current scene of the user, and dynamically updating the weight coefficients by an exponential smoothing method.
Specifically, the current interaction scene is determined through head posture and gaze point detection in the image data. As an example, when the user issues a voice instruction, a voice interaction scene is determined; at this time, the auditory modality weight is 50%, the visual weight 40%, and the haptic weight 10%. When the gaze point in the image data is fixed on a virtual object (dwell > 1 s), a visual focus scene is determined; the visual weight is 60%, the auditory weight 25%, and the haptic weight 15%. When a slide/pinch gesture is detected in the haptic data, a gesture operation scene is determined; at this time, the haptic weight is 40%, the visual weight 35%, and the auditory weight 25%.
The exponential smoothing method updates the weight coefficients according to:

w_m^t = α · w_m^(t−1) + (1 − α) · w_m^pre

wherein w_m^t denotes the weight coefficient of the m-th modality (e.g. visual, auditory, tactile) at time t, w_m^(t−1) denotes the weight coefficient of the m-th modality at time t−1, α is the smoothing coefficient, and w_m^pre is the scene preset weight of the m-th modality.
By the exponential smoothing method, the historical weight w_m^(t−1) and the scene preset weight w_m^pre are combined to dynamically update the current modality weight w_m^t, so that the weight distribution better fits both the real-time scene and the historical pattern.
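A direct implementation of this update, using the scene preset weights listed in step S31, might look as follows; the smoothing coefficient value is an assumption, and a final renormalization is added so the weights always sum to one.

```python
# Scene preset weights taken from step S31 above.
SCENE_PRESETS = {
    "voice":   {"auditory": 0.50, "visual": 0.40, "haptic": 0.10},
    "focus":   {"auditory": 0.25, "visual": 0.60, "haptic": 0.15},
    "gesture": {"auditory": 0.25, "visual": 0.35, "haptic": 0.40},
}

def update_weights(prev: dict, scene: str, alpha: float = 0.7) -> dict:
    """w_m^t = alpha * w_m^(t-1) + (1 - alpha) * w_m^pre, then renormalize."""
    pre = SCENE_PRESETS[scene]
    w = {m: alpha * prev[m] + (1 - alpha) * pre[m] for m in prev}
    total = sum(w.values())
    return {m: v / total for m, v in w.items()}
```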
Step S32, mapping the visual, auditory and tactile feature vectors to a unified dimension through fully connected layers, and then screening key features through an attention mechanism to generate the fused multi-modal feature vector.
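The following PyTorch sketch illustrates one way to realize step S32: per-modality linear projections to a shared dimension, followed by a softmax attention over the three modalities. The feature dimensions and the scoring layer are assumptions.

```python
import torch
from torch import nn

class MultiModalFusion(nn.Module):
    def __init__(self, dims=(2048, 128, 64), d_model: int = 256):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in dims])
        self.score = nn.Linear(d_model, 1)   # attention score per modality

    def forward(self, visual, auditory, haptic) -> torch.Tensor:
        feats = [p(x) for p, x in zip(self.proj, (visual, auditory, haptic))]
        stacked = torch.stack(feats, dim=1)               # (batch, 3, d_model)
        attn = torch.softmax(self.score(stacked), dim=1)  # (batch, 3, 1)
        return (attn * stacked).sum(dim=1)                # fused (batch, d_model)
```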
Step S4, constructing and training a multi-modal VR advertisement prediction model, inputting the fused multi-modal feature vector into the multi-modal VR advertisement prediction model, and outputting a user interest tag and a behavior prediction result.
Specifically, the steps of constructing and training the multi-modal VR advertisement prediction model, inputting the fused multi-modal feature vector into the model, and outputting the user interest tag and the behavior prediction result include:
Step S41, the multi-modal VR advertisement prediction model adopts a hybrid model architecture combining a Support Vector Machine (SVM) and a Deep Neural Network (DNN).
The SVM part mainly processes linearly separable features and can effectively classify user interests, while the DNN part captures complex nonlinear relations and predicts user behavior.
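A minimal sketch of such a hybrid is given below: a linear-kernel SVM (scikit-learn) for interest classification alongside a small PyTorch MLP for behavior prediction. The feature and class dimensions are assumptions, and the SVM must be fitted on labeled data before use.

```python
import torch
from torch import nn
from sklearn.svm import SVC

interest_svm = SVC(kernel="linear", probability=True)  # interest classification

behavior_dnn = nn.Sequential(                          # behavior prediction
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 5),                                  # 5 behavior classes, assumed
)

def predict(fused: torch.Tensor):
    """fused: (batch, 256) fused multi-modal features; SVM fitted beforehand."""
    interests = interest_svm.predict(fused.detach().numpy())
    behaviors = behavior_dnn(fused).argmax(dim=-1)
    return interests, behaviors
```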
Step S42, training the multi-modal VR advertisement prediction model by using a large amount of labeled multi-modal data.
Specifically, the training data includes the user's behavior features in different scenes together with the corresponding interest tags and behavior outcomes. Model parameters are adjusted by an optimization algorithm (such as the Adam optimizer) so that the model can accurately predict the user's interests and behaviors.
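A compact training-loop sketch with the Adam optimizer follows; the data loader yielding (fused feature vector, behavior label) pairs and all hyperparameters are assumptions.

```python
import torch
from torch import nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for fused_vec, behavior_label in loader:  # labeled multi-modal data
            opt.zero_grad()
            loss = loss_fn(model(fused_vec), behavior_label)
            loss.backward()
            opt.step()
```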
Step S43, taking the fused multi-modal feature vector, comprising visual features, auditory features and haptic features, as the input of the model, integrating the multi-modal feature vector through a fully connected layer to ensure that each feature can influence the model's prediction result, and outputting the user interest tag and the behavior prediction result.
Step S44, adopting an online learning mechanism, whereby the multi-modal VR advertisement prediction model can receive new user data in real time and dynamically update its parameters to adapt to changes in user preference.
Specifically, when the user operates or interacts in the VR environment, multi-modal data is collected in real time, preprocessed, and subjected to feature extraction, and the result is input into the multi-modal VR advertisement prediction model. The model adopts an online learning mechanism, receiving new user data in real time and dynamically updating its parameters, which ensures that the model always accurately reflects the user's latest behaviors and preferences, thereby improving prediction accuracy and the relevance of the pushed advertisements.
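The online mechanism can be sketched as one small gradient step per fresh interaction sample; using a persistent optimizer with a low learning rate (both assumptions) lets the model track recent preferences without retraining from scratch. In practice the optimizer would be created once, e.g. torch.optim.Adam(model.parameters(), lr=1e-4), and reused across updates.

```python
import torch
from torch import nn

loss_fn = nn.CrossEntropyLoss()

def online_update(model: nn.Module, opt: torch.optim.Optimizer,
                  fused_vec: torch.Tensor, label: int) -> None:
    """One incremental parameter update on a single new interaction sample."""
    opt.zero_grad()
    loss = loss_fn(model(fused_vec.unsqueeze(0)), torch.tensor([label]))
    loss.backward()
    opt.step()
```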
Step S5, constructing an advertisement database, and selecting corresponding advertisement content to be embedded into the VR scene for pushing based on the current interest tag and historical behavior data of the user.
Specifically, the advertisement database stores advertisement content in a multi-dimensionally tagged manner. Each advertisement entry corresponds to user behaviors and user interest tags, facilitating rapid matching of the model to appropriate advertisement content. Based on the user's current interest tag and historical behavior data, a dynamic matching algorithm is adopted to select the most suitable advertisement content; the algorithm computes a matching score between each advertisement and the user by comprehensively considering the user's interest preferences and behavior patterns as well as the relevance and priority weight of the advertisement content. An advertisement push queue is generated according to the matching scores, and the few advertisements with the highest scores are selected for pushing, ensuring that the advertisement content received by the user is best aligned with the user's interests and preferences.
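A toy version of the matching score and push queue could look like this; the three factors and their weights are assumptions standing in for the "interest preference, behavior pattern, relevance and priority weight" named above.

```python
import heapq

def match_score(ad: dict, user: dict, w=(0.5, 0.3, 0.2)) -> float:
    """Weighted score: interest-tag overlap, behavior fit, ad priority."""
    return (w[0] * len(set(ad["tags"]) & set(user["interest_tags"]))
            + w[1] * ad["behavior_fit"]   # fit to historical behavior, in [0, 1]
            + w[2] * ad["priority"])      # advertiser priority weight

def push_queue(ads: list, user: dict, k: int = 3) -> list:
    """Return the k highest-scoring ads as the push queue."""
    return heapq.nlargest(k, ads, key=lambda ad: match_score(ad, user))
```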
Furthermore, the advertisement content is seamlessly embedded into the VR scene, enhancing the user's sense of immersion; by blending naturally with the virtual environment, the advertisement content improves advertisement acceptance and the user's willingness to interact.
Step S6, acquiring interaction data of the user on the pushed advertisement, and dynamically adjusting the weights of the advertisement matching algorithm according to feedback from the user's interaction data.
Specifically, the steps of acquiring the interaction data of the user on the pushed advertisement and dynamically adjusting the weights of the advertisement matching algorithm according to the feedback include:
Step S61, acquiring interaction data of the user on the pushed advertisement, wherein the interaction data comprises click-through rate, dwell time, haptic feedback intensity and voice evaluation.
The click-through rate is the ratio of the number of times the user clicks the advertisement to the number of times the advertisement is displayed, reflecting the user's direct interest in the advertisement. The dwell time is the time the user stays on the advertisement, reflecting the user's attention and potential interest. The haptic feedback intensity is the strength and frequency of the user's haptic operations on the advertisement content through the VR handle, indicating the user's engagement with and preference for the advertisement. The voice evaluation is the user's spoken evaluation of the advertisement; the user's emotional response is analyzed by an emotion analysis model to judge the user's degree of preference for the advertisement.
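Purely as an illustration of how these four signals might be aggregated into a single feedback value for the matching algorithm (the disclosure does not specify a formula), consider the following sketch; the weighting and scaling are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    clicks: int
    impressions: int
    dwell_seconds: float
    haptic_intensity: float   # normalized to [0, 1]
    voice_sentiment: float    # emotion-model score in [-1, 1]

    def ctr(self) -> float:
        return self.clicks / max(self.impressions, 1)

    def reward(self) -> float:
        """Scalar feedback in [0, 1] combining the four interaction signals."""
        return (0.4 * self.ctr()
                + 0.3 * min(self.dwell_seconds / 30, 1.0)
                + 0.15 * self.haptic_intensity
                + 0.15 * (self.voice_sentiment + 1) / 2)
```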
Step S62, performing in-depth analysis on the collected interaction data and extracting key features.
Specifically, the user's preference for specific advertisement types is identified by analyzing the user's click behavior patterns, and the user's concentration of interest in different advertisement content is understood by analyzing the distribution of dwell time; the analysis results provide data support for optimizing the advertisement matching algorithm.
Step S63, adopting a reinforcement learning framework, and dynamically adjusting the weights of the advertisement matching algorithm according to user feedback.
Specifically, the advertisement matching algorithm is dynamically adjusted within a reinforcement learning framework. During reinforcement learning, the user's interaction data is treated as an environmental feedback signal, and the weights of the advertisement matching algorithm are adjusted according to whether the feedback signal is positive or negative.
As an example, the weights of the advertisement matching algorithm are dynamically adjusted according to the user's click-through rate and dwell time so as to raise the priority of advertisements with high click-through rates, ensuring that the most suitable advertisement content is selected for pushing in different scenes, thereby improving the advertisement conversion rate and user satisfaction.
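A deliberately simplified, bandit-style stand-in for this update is sketched below: a reward such as the one sketched after step S61 nudges the matching-algorithm weights multiplicatively and renormalizes them. The baseline, learning rate and update rule are all assumptions, not the disclosure's method.

```python
import math

def adjust_weights(weights: dict, reward: float, baseline: float = 0.5,
                   eta: float = 0.1) -> dict:
    """Feedback above the baseline sharpens the current weight distribution;
    feedback below it flattens the distribution toward uniformity."""
    adv = reward - baseline                         # advantage of this push
    w = {k: v * math.exp(eta * adv * v) for k, v in weights.items()}
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}     # renormalize to sum to 1
```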
In the multi-modal VR advertisement push method based on face recognition described above, multi-modal data of a user in a virtual reality environment is collected and preprocessed; features are extracted from the preprocessed multi-modal data by a deep learning network model; dynamic weighted fusion is performed on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector; a multi-modal VR advertisement prediction model is constructed and trained, the fused multi-modal feature vector is input into the model, and a user interest tag and a behavior prediction result are output; an advertisement database is constructed, and corresponding advertisement content is selected and embedded into the VR scene for pushing based on the user's current interest tag and historical behavior data; and interaction data of the user on the pushed advertisement is acquired, and the weights of the advertisement matching algorithm are dynamically adjusted according to feedback from the interaction data. By collecting and preprocessing multi-modal data, performing dynamic weighted fusion after accurate feature extraction by the deep learning model, outputting user interest and behavior predictions in real time through the prediction model and an online learning mechanism, pushing personalized advertisements by means of the multi-dimensional tag advertisement library and the dynamic matching algorithm, and finally performing closed-loop optimization according to user interaction feedback, the method realizes more accurate and personalized advertisement push, improves the advertisement conversion rate, enhances the user experience, and provides advertisers with a more efficient advertisement delivery platform.
Second embodiment:
As shown in fig. 2, a second embodiment of the present invention provides a multi-modal VR advertisement push system based on face recognition, which includes a data acquisition module 201, a feature extraction module 202, a weighted fusion module 203, a modeling training module 204, a behavior prediction module 205, an advertisement push module 206, and an algorithm optimization module 207.
Specifically, the data acquisition module 201 is configured to acquire multi-modal data of a user in a virtual reality environment and preprocess the multi-modal data; the feature extraction module 202 is configured to extract features from the preprocessed multi-modal data by adopting a deep learning network model; the weighted fusion module 203 is configured to perform dynamic weighted fusion on the extracted image features, voice features and haptic features to generate a unified multi-modal feature vector; the modeling training module 204 is configured to construct and train a multi-modal VR advertisement prediction model; the behavior prediction module 205 is configured to input the fused multi-modal feature vector into the multi-modal VR advertisement prediction model and output a user interest tag and a behavior prediction result; the advertisement push module 206 is configured to construct an advertisement database and select corresponding advertisement content to be embedded into the VR scene for pushing based on the user's current interest tag and historical behavior data; and the algorithm optimization module 207 is configured to acquire interaction data of the user on the pushed advertisement and dynamically adjust the weights of the advertisement matching algorithm according to feedback from the user's interaction data.
It should be noted that this embodiment is a system embodiment corresponding to the first embodiment and can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not described again here. Correspondingly, the related technical details mentioned in this embodiment can also be applied to the first embodiment.
It should be noted that each module in this embodiment is a logic module. In practical applications, a logic unit may be one physical unit, a part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units less closely related to solving the technical problem presented by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
The third embodiment of the present invention relates to a network-side server, as shown in fig. 3, including at least one processor 302, and a memory 301 communicatively connected to the at least one processor 302, wherein the memory 301 stores instructions executable by the at least one processor 302, and the instructions are executed by the at least one processor 302 to enable the at least one processor 302 to perform the above multi-modal VR advertisement push method based on face recognition.
The memory 301 and the processor 302 are connected by a bus, which may comprise any number of interconnected buses and bridges linking together the various circuits of the one or more processors 302 and the memory 301. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or a plurality of elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 302 is transmitted over a wireless medium via an antenna; the antenna also receives data and forwards it to the processor 302.
The processor 302 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 301 may be used to store data used by the processor 302 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the multi-modal VR advertisement push method based on face recognition in the first embodiment.
That is, it will be understood by those skilled in the art that all or part of the steps in implementing the methods of the embodiments described above may be implemented by a program stored in a storage medium, the program comprising several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the application. The storage medium includes a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely an embodiment of the present application. Specific structures and characteristics that are common knowledge in the art are not described in detail herein; a person of ordinary skill in the art is aware of the common general knowledge in the technical field as of the application date or priority date, can learn all the prior art in the field, and has the ability to apply conventional experimental means, and may, in the light of the present application, complete and implement the present embodiment in combination with his or her own abilities. Some typical known structures or known methods should not be an obstacle to a person of ordinary skill in the art implementing the present application. It should be noted that modifications and improvements can be made by those skilled in the art without departing from the structure of the present application; these should also be regarded as falling within the protection scope of the present application and do not affect the effect of the implementation of the present application or the utility of the patent. The protection scope of the present application is subject to the content of the claims, and the descriptions of the specific embodiments in the specification may be used to interpret the content of the claims.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.