Disclosure of Invention
The invention addresses the problem of how to analyze medical data so as to improve teaching effectiveness.
In order to solve the above problems, the present invention provides a method for comprehensively analyzing data, including:
acquiring historical medical data of each data node, and preprocessing the historical medical data to obtain first standard data;
recursively constructing the first standard data according to a preset construction method to obtain stored data and a stored data index;
for each data node, constructing at least one data warehouse according to the stored data and the stored data index, and taking all the data warehouses as distributed data warehouses;
acquiring actual data, and preprocessing the actual data to obtain second standard data;
performing data mining on all the stored data in the distributed data warehouses to obtain mining data;
taking the mining data matched with the second standard data as matching data through a preset method;
and determining a distinguishing point from the actual data according to the matching data.
Optionally, the acquiring of the historical medical data of each data node and the preprocessing of the historical medical data to obtain the first standard data includes:
carrying out data cleaning and standardization processing on the historical medical data, and converting the historical medical data into first data vectors;
and taking the set of the first data vectors as the first standard data.
Optionally, the recursively constructing the first standard data according to a preset construction method to obtain the stored data and the stored data index includes:
randomly selecting a dimension and a segmentation point from the first standard data;
dividing the first data vectors into two primary groups according to the dimension and the segmentation point to form a first node, wherein the primary groups are direct child nodes of the first node;
randomly selecting a new dimension and a new segmentation point in each primary group under the first node, and further segmenting each primary group into two secondary groups;
and continuing to select new dimensions and new segmentation points and segment the resulting groups until a termination condition is met, thereby completing the recursive construction; taking the first standard data after the recursive construction as the stored data, and taking the structure formed by all groups and all nodes obtained by the recursive construction as the stored data index, wherein the termination condition comprises that each n-level group contains only one first data vector, or that a preset number of segmentations has been reached.
Optionally, the data mining of all the stored data in the distributed data warehouse includes:
Retrieving, from the stored data index, an approximate nearest neighbor to the second standard data in the distributed data warehouse, wherein if there are a plurality of the data warehouses and a similarity of the type of the historical medical data in the plurality of the data warehouses to the type of the second standard data exceeds a first similarity threshold, then searching in parallel for an approximate nearest neighbor to the second standard data in the plurality of the data warehouses;
And taking the approximate nearest neighbor as the mining data.
Optionally, the taking of the mining data matched with the second standard data as the matching data through a preset method includes:
Constructing a co-occurrence matrix by a co-occurrence analysis method, wherein the co-occurrence matrix is used for recording the co-occurrence times of the mining data and the second standard data;
determining a first co-occurrence frequency and a first co-occurrence similarity of the mining data and the second standard data in the co-occurrence matrix;
and taking the mining data with the first co-occurrence frequency higher than a frequency threshold and the first co-occurrence similarity higher than a second similarity threshold as the matching data.
Optionally, after the co-occurrence matrix is constructed by co-occurrence analysis, the method further includes:
determining a second co-occurrence frequency and a second co-occurrence similarity of all the stored data and the second standard data in the co-occurrence matrix;
taking the stored data that is not present in the matching data and for which the second co-occurrence frequency is higher than the frequency threshold and the second co-occurrence similarity is higher than the second similarity threshold as optimization data;
and optimizing the structure of the co-occurrence matrix based on the second co-occurrence frequency and the second co-occurrence similarity corresponding to the optimization data.
Optionally, the determining of the distinguishing point from the actual data according to the matching data includes:
analyzing the matching data to obtain multi-dimensional data contained in the matching data, wherein the multi-dimensional data comprises user codes, doctor codes, evaluation results, and target completion degrees;
obtaining a factor similarity among the dimensions of the multi-dimensional data through a Pearson correlation coefficient matrix;
taking the multi-dimensional data with the factor similarity larger than a third similarity threshold as target data;
determining factors and characteristic values of the target data by a minimum residual method, and retaining the factors with characteristic values larger than 1 as target factors;
rotating the target factors to obtain a factor load matrix;
determining the multi-dimensional data lower than a first preset score in the second standard data as low-score data according to the factor load matrix;
extracting the multi-dimensional data higher than a second preset score from the matching data as high-score data;
and optimizing the low-score data according to the high-score data until the score of the low-score data is higher than a third preset score, wherein the first preset score is smaller than or equal to the third preset score, and the third preset score is smaller than or equal to the second preset score.
In a second aspect, the present invention also provides a comprehensive data analysis system, including:
a first acquisition module, configured to acquire the historical medical data of each data node and preprocess the historical medical data to obtain the first standard data;
a first construction module, configured to recursively construct the first standard data according to the preset construction method to obtain the stored data and the stored data index;
a second construction module, configured to construct, for each of the data nodes, at least one data warehouse according to the stored data and the stored data index, and to take all the data warehouses as distributed data warehouses;
a second acquisition module, configured to acquire the actual data and preprocess the actual data to obtain the second standard data;
a mining module, configured to perform data mining on all the stored data in the distributed data warehouses to obtain the mining data;
a matching module, configured to take the mining data matched with the second standard data as the matching data through the preset method;
and an optimization module, configured to determine a distinguishing point from the actual data according to the matching data.
In a third aspect, the present invention also provides an electronic device, including a memory and a processor;
the memory is used for storing a computer program;
The processor is configured to implement the comprehensive data analysis method described above when executing the computer program.
In a fourth aspect, the present invention also provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the comprehensive data analysis method described above.
Compared with the prior art, the present invention takes both the storage and the retrieval of the actual data into account. The historical medical data is stored through a distributed storage system and preprocessed to obtain the first standard data, which serves as the basis for optimizing the medical data. For high-volume distributed medical data storage scenarios, the first standard data is recursively constructed to obtain the stored data and the stored data index, and a plurality of data warehouses is constructed according to the number of data nodes, so that the advantages of distributed storage can be exploited effectively and retrieval efficiency is improved. The data in the data warehouses is mined and matched with the second standard data to obtain the matching data, and the actual data is optimized according to the matching data. Even when the medical data is distributed across different storage locations, the data index allows the matching data corresponding to the second standard data to be found quickly, the distinguishing points between the historical medical data and the actual data are identified from the matching data, and existing data is mined and analyzed to improve the teaching quality of inexperienced users.
Detailed Description
In order that the above objects, features, and advantages of the invention may be readily understood, a more particular description of the invention is given below with reference to specific embodiments illustrated in the appended drawings. It should be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete. The drawings and embodiments of the invention are for illustration purposes only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; "another embodiment" means "at least one additional embodiment"; "some embodiments" means "at least some embodiments"; and "optionally" means "in an alternative embodiment". Related definitions of other terms will be given in the description below. It should be noted that the concepts of "first", "second", etc. mentioned in this disclosure are only used to distinguish between different devices, modules, or units, and are not intended to limit the order of, or interdependence between, the functions performed by these devices, modules, or units.
It should be noted that references to "a" and "an" in this disclosure are intended to be illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
An embodiment of the present invention provides a method for comprehensively analyzing data, including:
Step S100, acquiring historical medical data of each data node, and preprocessing the historical medical data to obtain first standard data.
In an embodiment, the data nodes represent components responsible for storing actual data; for example, when historical medical data is stored separately in a plurality of hospitals or departments, each hospital or department serves as a data node. The historical medical data is stored through a storage medium and comprises past teaching data that can be recorded, such as audio data, video data, and text data, and specifically data such as diagnostic videos, diagnostic audio, teaching courseware, teaching lectures, and teaching material contents. The teaching task and the diagnosis history are used as storage indexes for the historical medical data. For example, all public data such as the diagnostic videos, diagnostic audio, teaching courseware, teaching lectures, and teaching material contents of January 1, 2000 is taken as one piece of historical medical data. The historical medical data is used as a database for teaching; cases matched with the current actual data are obtained through case analysis and used for analyzing the distinguishing points between the historical medical data and the current actual data, thereby providing more case guidance for teachers and students.
The historical medical data is preprocessed by data screening, de-duplication, noise reduction, and the like to obtain the first standard data. For example, unrecognizable teaching courseware and teaching lecture data are deleted; when audio data and video data contain repeated content, the repeated audio data is deleted; noisy audio data is denoised; and blurred image or video data is denoised. The first standard data thus obtained is the standardized data stored in the data warehouse.
Step S200, recursively constructing the first standard data according to a preset construction method to obtain stored data and a stored data index.
In one embodiment, because the historical medical data has a wide range of sources and may originate from various units or other types of nodes, some nodes may not find it convenient to disclose their historical medical data, so the data must be stored, and a data index constructed for retrieval, under each node. For historical medical data stored in a distributed manner, constructing a suitable data index can greatly improve retrieval efficiency.
Step S300, for each data node, constructing at least one data warehouse according to the stored data and the stored data index, and taking all the data warehouses as distributed data warehouses.
In one embodiment, some data nodes hold larger volumes of data, and the stored data needs to be stored separately to improve retrieval efficiency; therefore, at least one data warehouse is constructed for each data node, and all the data warehouses are taken as distributed data warehouses.
Step S400, obtaining actual data, and preprocessing the actual data to obtain second standard data.
Actual data is acquired and standardized to obtain second standard data whose format is consistent with that of the stored data. The second standard data is used to assist in comparing the differences between the actual data and the historical medical data, and the actual data is optimized according to these differences, thereby realizing a teaching optimization method based on data analysis.
Step S500, performing data mining on all the stored data in the distributed data warehouse to obtain mining data.
The stored data in the distributed data warehouses is analyzed and mined to determine the characteristics of the stored data. After the actual data is obtained, the data matching the actual data can be quickly retrieved according to the data index and the characteristics of the stored data.
Step S600, taking the mining data matched with the second standard data as matching data through a preset method.
Step S700, determining a distinguishing point from the actual data according to the matching data.
After the matching data matched with the second standard data is determined from the distributed data warehouses, the deficiencies of the actual data are optimized according to the advantages of the matching data, forming an effective optimization method and improving teaching quality. In the traditional teacher-student teaching mode, teaching effects and teaching examples depend on the accumulated experience of teachers. Through the data processing method in the embodiment of the invention, historical medical data similar to the current case can be mined from the database and determined as matching data, and the distinguishing points and similarities between the historical medical data and the actual data can be extracted for teaching, so that the cases are richer and the teaching quality can be effectively improved.
Optionally, as shown in fig. 2, the acquiring of the historical medical data of each data node and the preprocessing of the historical medical data to obtain the first standard data includes:
Step S110, data cleaning and normalization processing are carried out on the historical medical data, and the historical medical data are converted into first data vectors.
And step S120, taking the set of the first data vectors as the first standard data.
In one embodiment, published historical medical data is first collected from sources such as information systems and online platforms, including but not limited to basic information (e.g., age, personal background). The data exists in structured (e.g., database tables) and unstructured (e.g., text, image material) forms. The data is then cleaned. De-duplication: duplicate records are identified and removed to ensure the uniqueness of each observation point. Missing value processing: missing data is handled with an appropriate strategy, such as filling numerical data with the mean or median, or reasonably inferring categorical data from context. Outlier detection and processing: outliers are identified through statistical analysis (e.g., box plot analysis) and may be removed, corrected, or smoothed using more sophisticated statistical methods (e.g., winsorization). Consistency checks: data fields are made consistent in format, e.g., date formats are normalized, text case is unified, and extraneous symbols are removed.
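The cleaning steps above can be sketched in a few lines of Python. The record fields and values below are hypothetical, and the strategies chosen (median fill, simple clamping as a stand-in for winsorization) are only one reasonable option, not the specific implementation of the invention:

```python
from statistics import median

# Hypothetical raw records; field names and values are illustrative only.
records = [
    {"age": 34, "score": 0.92, "date": "2000-01-01"},
    {"age": 34, "score": 0.92, "date": "2000-01-01"},    # exact duplicate
    {"age": None, "score": 0.55, "date": "2000/01/01"},  # missing age, odd date format
    {"age": 41, "score": 9.99, "date": "2000-01-02"},    # implausible score
]

def clean(records):
    # 1. De-duplication: keep only the first occurrence of each record.
    seen, unique = set(), []
    for r in records:
        key = tuple(sorted(r.items()))
        if key not in seen:
            seen.add(key)
            unique.append(r)
    # 2. Missing-value processing: fill missing ages with the median age.
    ages = [r["age"] for r in unique if r["age"] is not None]
    fill = median(ages)
    for r in unique:
        if r["age"] is None:
            r["age"] = fill
    # 3. Outlier handling: clamp scores into a plausible [0, 1] range.
    for r in unique:
        r["score"] = min(max(r["score"], 0.0), 1.0)
    # 4. Consistency check: normalize date separators to one format.
    for r in unique:
        r["date"] = r["date"].replace("/", "-")
    return unique

cleaned = clean(records)
```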
After the data is cleaned, it is standardized to eliminate the influence of differing dimensions, making different features comparable and facilitating subsequent analysis. Common standardization methods are:
Min-max normalization maps all feature values into the [0,1] interval. Z-score normalization transforms the data toward a standard normal distribution, i.e., each feature value minus its mean, divided by its standard deviation. For the historical medical data mentioned above, a suitable normalization method is selected according to the nature of each feature, and each feature is processed independently.
After normalization, each data record can be converted into a fixed-length data vector in which each element represents the normalized value of a feature. For example, if the teaching data contains three features, each data record is transformed into a data vector such as [0.65, 0.92, 0.78].
The data vectors of all individuals are collected to form a set, namely the "first standard data". This set is the basis of subsequent data analysis; it ensures that all data is processed on a unified scale and in a unified format, which facilitates algorithmic processing and reduces deviations caused by inconsistent data.
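As a sketch, the two normalization methods and the resulting set of first data vectors might look as follows; the three-column feature matrix is invented for illustration:

```python
import numpy as np

# Illustrative feature matrix: rows are records, columns are three
# hypothetical features (values invented for the example).
raw = np.array([
    [20.0, 55.0, 0.30],
    [35.0, 80.0, 0.90],
    [50.0, 60.0, 0.60],
])

def min_max(x):
    # Map each feature (column) into the [0, 1] interval.
    lo, hi = x.min(axis=0), x.max(axis=0)
    return (x - lo) / (hi - lo)

def z_score(x):
    # Subtract each feature's mean and divide by its standard deviation.
    return (x - x.mean(axis=0)) / x.std(axis=0)

# The set of first data vectors, one fixed-length vector per record.
first_standard_data = min_max(raw)
```

Either method may be chosen per feature; the key point is that every feature is processed independently on its own column.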
Optionally, the recursively constructing the first standard data according to a preset construction method to obtain the stored data and the stored data index includes:
randomly selecting a dimension and a segmentation point from the first standard data;
dividing the first data vectors into two primary groups according to the dimension and the segmentation point to form a first node, wherein the primary groups are direct child nodes of the first node;
randomly selecting a new dimension and a new segmentation point in each primary group under the first node, and further segmenting each primary group into two secondary groups;
and continuing to select new dimensions and new segmentation points and segment the resulting groups until a termination condition is met, thereby completing the recursive construction; taking the first standard data after the recursive construction as the stored data, and taking the structure formed by all groups and all nodes obtained by the recursive construction as the stored data index, wherein the termination condition comprises that each n-level group contains only one first data vector, or that a preset number of segmentations has been reached.
Preset parameters of the recursively constructed decision tree are determined, including but not limited to the maximum recursion depth (the preset number of segmentations) and whether pruning is performed. These parameters are set according to the characteristics of the historical medical data and the analysis targets so as to avoid over-fitting or under-fitting. A feature dimension (attribute) and a particular value in that dimension are randomly selected from the first standard data as the segmentation point. The segmentation point would typically be chosen as the value that maximizes the purity gain after splitting the dataset, but here it is chosen at random for simplicity of illustration. The data vectors in the first standard data are divided into two subsets, i.e., two primary groups (where n=1), according to the selected dimension and segmentation point. These two subsets become the direct child nodes of the current node (i.e., the first node). For each primary group under the first node, the above procedure is repeated: a new dimension is randomly selected from the remaining unused features, a segmentation point is selected in this dimension, and the primary group is further split into two secondary groups (where n=2). The recursion continues by repeating the process of selecting dimensions and segmentation points within the current subsets until a termination condition is reached. The termination condition includes each group containing only one data vector, indicating that no further meaningful segmentation is possible because there is no more data to distinguish, or the preset number of segmentations being reached, which is a mechanism for preventing excessive subdivision that keeps the depth of the tree within a reasonable range and improves generalization. Once any subgroup satisfies a termination condition, it is not subdivided further and becomes a leaf node.
The leaf nodes represent the final decision regions, in which the contained data vectors have similar characteristics. Throughout the recursive construction, each node (including internal nodes and leaf nodes) stores its segmentation dimension, segmentation point, and pointers (or indexes) to its child nodes. In addition, each leaf node stores the indexes, or the data itself, of all the first data vectors it contains, for subsequent query and interpretation. The resulting tree can be used for tasks such as classification and regression, helping to analyze patterns and rules in the historical medical data.
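The recursive construction described above can be sketched as follows. This is an illustrative random binary partitioning index, not the patented implementation itself; the parameter names (`max_depth` as the preset number of segmentations, the fixed random seed) are assumptions:

```python
import random
import numpy as np

def build_index(vectors, indices=None, depth=0, max_depth=8, rng=None):
    """Recursively build a binary index over first data vectors.

    Each internal node records its randomly chosen dimension and
    segmentation point; each leaf stores the indices of the vectors
    it contains, mirroring the stored data index described above.
    """
    rng = rng or random.Random(0)
    if indices is None:
        indices = list(range(len(vectors)))
    # Termination: one vector left, or preset number of segmentations reached.
    if len(indices) <= 1 or depth >= max_depth:
        return {"leaf": True, "indices": indices}
    dim = rng.randrange(vectors.shape[1])           # random dimension
    values = [vectors[i][dim] for i in indices]
    split = rng.uniform(min(values), max(values))   # random segmentation point
    left = [i for i in indices if vectors[i][dim] < split]
    right = [i for i in indices if vectors[i][dim] >= split]
    if not left or not right:                       # degenerate split: stop here
        return {"leaf": True, "indices": indices}
    return {
        "leaf": False, "dim": dim, "split": split,
        "left": build_index(vectors, left, depth + 1, max_depth, rng),
        "right": build_index(vectors, right, depth + 1, max_depth, rng),
    }

# Four illustrative first data vectors.
data = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.5], [0.7, 0.7]])
index = build_index(data)
```

Every vector ends up in exactly one leaf, so the tree can later be walked from the root to locate candidates near a query vector.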
Optionally, the recursively constructing the first standard data according to the preset construction method to obtain the stored data and the stored data index includes:
for each data node, performing the recursive construction on the first standard data according to the preset construction method to obtain the stored data index corresponding to that data node.
In one embodiment, the data nodes represent each entity in the distributed data system that stores data, which is scattered across different servers or devices of the system. This distributed storage improves the accessibility of the data and the overall fault tolerance of the system.
In a distributed storage system, there are multiple data nodes, each with its own stored data index.
Optionally, the data mining of all the stored data in the distributed data warehouse includes:
Retrieving, from the stored data index, an approximate nearest neighbor to the second standard data in the distributed data warehouse, wherein if there are a plurality of the data warehouses and a similarity of the type of the historical medical data in the plurality of the data warehouses to the type of the second standard data exceeds a first similarity threshold, then searching in parallel for an approximate nearest neighbor to the second standard data in the plurality of the data warehouses;
And taking the approximate nearest neighbor as the mining data.
Since the stored data has multiple dimensions (images, text vectors), for the high-dimensional data that needs to be retrieved, an index is built on each node using a distributed approximate nearest neighbor algorithm. This requires that the data be distributed to different nodes, each of which is responsible for index building and query processing for a portion of the data, improving efficiency and scalability. In an embodiment of the present invention, exceeding the first similarity threshold indicates that the types of the historical medical data and the second standard data are highly similar; for example, the semantics of the historical medical data and the second standard data are computed, and when the similarity of the types represented by their semantics exceeds the first similarity threshold, the approximate nearest neighbors of the second standard data in the plurality of data warehouses are searched in parallel. In other embodiments, the similarity may instead be a similarity between the data types of the historical medical data and the second standard data. The first similarity threshold may be set according to the data type or the semantic type.
The constructed stored data index is used to quickly find the approximate nearest neighbor of the current data among the historical medical data, i.e., the approximate nearest neighbor of the second standard data in the stored data. This step may be parallelized when similar stored data exists on multiple storage nodes, each of which is responsible for processing a portion of the query request. For example, if the data warehouse of a first unit and the data warehouse of a second unit both contain stored data similar to the second standard data, the nodes corresponding to the first unit and the second unit are searched in parallel, which speeds up retrieval. The approximate nearest neighbor is taken as the mining data.
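A minimal sketch of the parallel lookup, using brute-force nearest-neighbour search as a stand-in for a true approximate-nearest-neighbour index; the warehouse names, type vectors, and thresholds are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Hypothetical warehouses: each has a type vector (for the first similarity
# threshold check) and its stored data vectors.
warehouses = {
    "unit_a": {"type_vec": np.array([1.0, 0.0]),
               "data": np.array([[0.1, 0.2], [0.9, 0.8]])},
    "unit_b": {"type_vec": np.array([0.9, 0.1]),
               "data": np.array([[0.5, 0.5], [0.6, 0.1]])},
    "unit_c": {"type_vec": np.array([0.0, 1.0]),   # dissimilar type: skipped
               "data": np.array([[0.3, 0.3]])},
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query, data):
    # Brute-force stand-in for an approximate-nearest-neighbour index lookup.
    dists = np.linalg.norm(data - query, axis=1)
    i = int(np.argmin(dists))
    return data[i], float(dists[i])

def mine(query, query_type, threshold=0.8):
    # Search in parallel only the warehouses whose data type is similar
    # enough to the type of the second standard data.
    eligible = {k: w for k, w in warehouses.items()
                if cosine(w["type_vec"], query_type) > threshold}
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda w: nearest(query, w["data"]),
                                eligible.values()))
    # The overall closest neighbour is taken as the mining data.
    return min(results, key=lambda r: r[1])[0]

mining_data = mine(np.array([0.15, 0.15]), np.array([1.0, 0.0]))
```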
Optionally, as shown in fig. 3, the taking of the mining data matched with the second standard data as the matching data through a preset method includes:
Step S610, constructing a co-occurrence matrix through a co-occurrence analysis method, wherein the co-occurrence matrix is used for recording the co-occurrence times of the mining data and the second standard data.
Step S620, determining a first co-occurrence frequency and a first co-occurrence similarity of the mining data and the second standard data in the co-occurrence matrix.
Step S630, taking the mining data with the first co-occurrence frequency higher than a frequency threshold and the first co-occurrence similarity higher than a second similarity threshold as the matching data.
A co-occurrence matrix is constructed using co-occurrence analysis, with rows and columns representing different dimensions in the dataset and each element in the matrix representing the co-occurrence frequency or count of the corresponding items. For example, if item A and item B occur together N times in the same context, then the value at position (A, B) in the matrix is N. When the mining data being processed is large in scale, sparse matrix storage may be employed to reduce space complexity.
The relation between the mining data and the second standard data is quantified on the basis of the co-occurrence matrix. The first co-occurrence frequency is read directly from the matrix, i.e., the frequency at which the mining data and the second standard data co-occur. The first co-occurrence similarity may require further computation, for example measuring the strength of association through Jaccard similarity coefficients, cosine similarity, or Pearson correlation coefficients. This step aims at a deeper understanding of the association patterns between the different data items. Two key thresholds are set: a frequency threshold and a second similarity threshold. Each mining data item in the co-occurrence matrix is evaluated, and only if its co-occurrence frequency with the second standard data exceeds the frequency threshold and its co-occurrence similarity exceeds the second similarity threshold is the mining data marked as matching data. This ensures that the screened matching data not only co-occurs frequently but also has high correlation in terms of semantics or behavioral patterns.
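The two-threshold screening just described might be sketched as follows, using cosine similarity as the first co-occurrence similarity; the item names, matrix counts, and thresholds are illustrative only:

```python
import numpy as np

# Illustrative co-occurrence matrix: each row is a mining-data item, each
# column a second-standard-data item; counts are invented for the example.
items = ["case_a", "case_b", "case_c"]
cooc = np.array([
    [8, 1, 0],
    [1, 0, 1],
    [6, 5, 4],
])
# Co-occurrence profile of the second standard data being matched against.
query = np.array([7, 4, 3])

def match(cooc, query, freq_threshold=5, sim_threshold=0.8):
    matches = []
    for name, row in zip(items, cooc):
        freq = row.sum()                      # first co-occurrence frequency
        sim = float(row @ query /             # cosine as co-occurrence similarity
                    (np.linalg.norm(row) * np.linalg.norm(query)))
        # Keep only items that clear BOTH thresholds.
        if freq > freq_threshold and sim > sim_threshold:
            matches.append(name)
    return matches

matching_data = match(cooc, query)
```

Here `case_b` co-occurs too rarely and is dropped even though it exists in the matrix, which is exactly the behaviour the optimization-data step below is designed to compensate for.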
Optionally, after the co-occurrence matrix is constructed by co-occurrence analysis, the method further includes:
Determining a second co-occurrence frequency and a second co-occurrence similarity of all the stored data and the second standard data in the co-occurrence matrix.
Taking the stored data that is not present in the matching data and for which the second co-occurrence frequency is higher than the frequency threshold and the second co-occurrence similarity is higher than the second similarity threshold as optimization data.
Optimizing the structure of the co-occurrence matrix based on the second co-occurrence frequency and the second co-occurrence similarity corresponding to the optimization data.
The relation between all the stored data items and the second standard data is examined again on the basis of the co-occurrence matrix. The second co-occurrence frequency refers to the number of times the stored data co-occurs with the second standard data; it is used to retrieve similar data that was missed by the mining data match. Because of the screening settings for the mining data, some stored data may have been filtered out too strictly and overlooked. This missed stored data is defined as optimization data and may contain important, potentially relevant information. The co-occurrence similarity is a measure of the strength of the relationship between a pair of items computed from the co-occurrence matrix; when the association between certain stored data and the second standard data is high, their co-occurrence frequency and co-occurrence similarity are correspondingly high. Optionally, the co-occurrence similarity may be calculated using existing similarity measures, such as pointwise mutual information or cosine similarity.
The process is designed as an iterative loop, the performance of the matrix is re-evaluated after each optimization, and the optimization strategy is adjusted according to feedback until a satisfactory analysis result is achieved.
Through the steps, the efficiency and the accuracy of subsequent data analysis can be improved by optimizing the co-occurrence matrix structure, and a solid foundation is laid for deep insight into complex connection behind data.
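As one concrete choice for the co-occurrence similarity mentioned above, pointwise mutual information (PMI) can be computed directly from the co-occurrence matrix; the counts below are illustrative, not taken from the source:

```python
import numpy as np

# Co-occurrence counts between stored-data items (rows) and
# second-standard-data items (columns); values invented for the example.
cooc = np.array([
    [10, 0, 2],
    [ 3, 6, 1],
    [ 0, 1, 8],
], dtype=float)

def pmi(cooc):
    # Pointwise mutual information over the co-occurrence matrix:
    # pmi(i, j) = log( p(i, j) / (p(i) * p(j)) ), defined as 0 where
    # the raw count is 0.
    total = cooc.sum()
    p_ij = cooc / total
    p_i = p_ij.sum(axis=1, keepdims=True)   # row marginals
    p_j = p_ij.sum(axis=0, keepdims=True)   # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        scores = np.log(p_ij / (p_i * p_j))
    scores[cooc == 0] = 0.0
    return scores

scores = pmi(cooc)
```

Positive PMI indicates that a pair co-occurs more often than chance would predict, which is what makes a stored-data item a candidate for the optimization data.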
Optionally, the determining of the distinguishing point from the actual data according to the matching data includes:
analyzing the matching data to obtain the multidimensional data contained in it, wherein the multidimensional data comprises user codes, doctor codes, evaluation results and target completion degrees;
obtaining the factor similarity between the dimensions of the multidimensional data through a Pearson correlation coefficient matrix;
taking the multidimensional data whose factor similarity is greater than a third similarity threshold as target data;
determining the factors and eigenvalues of the target data by the minimum residual method, and retaining the factors whose eigenvalues are greater than 1 as target factors;
rotating the target factors to obtain a factor loading matrix;
determining the multidimensional data in the second standard data whose score is lower than a first preset score as low-score data according to the factor loading matrix;
extracting the multidimensional data whose score is higher than a second preset score from the matching data as high-score data;
and optimizing the low-score data according to the high-score data until the score of the low-score data is higher than a third preset score, wherein the first preset score is less than or equal to the third preset score, and the third preset score is less than or equal to the second preset score.
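By way of illustration only, the correlation-screening step above (computing pairwise Pearson coefficients and keeping dimensions whose similarity with at least one other dimension exceeds the third similarity threshold) can be sketched in Python; the function names, dimension names, and sample values are hypothetical:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length dimensions."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    # A constant dimension has zero variance; treat it as uncorrelated.
    return cov / (sx * sy) if sx and sy else 0.0

def screen_target_dimensions(dims, threshold):
    """Keep the dimensions whose absolute correlation with at least one
    other dimension exceeds the (third) similarity threshold."""
    names = list(dims)
    kept = set()
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(pearson(dims[a], dims[b])) > threshold:
                kept.update((a, b))
    return kept

# Hypothetical dimensions of the matching data.
dims = {"evaluation": [1, 2, 3, 4],
        "completion": [2, 4, 6, 8],
        "noise":      [7, 1, 5, 2]}
target = screen_target_dimensions(dims, 0.8)
```

Dimensions left out of `target` have little correlation with the others and, as the embodiment below notes, contribute little to the optimization.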
The evaluation result comprises an objective evaluation of the diagnosis process, filled in manually, and the target completion degree comprises a prognosis evaluation of the diagnosed person.
In one embodiment, the correlations between the dimensions of the matching data are analyzed by factor analysis, and the multidimensional data in the actual data is then optimized according to the correlation pattern. The matching data is first analyzed: the factor similarity of its multidimensional data is calculated from the Pearson correlation coefficient matrix, and multidimensional data whose similarity is greater than the third similarity threshold is taken as target data, which indicates that it has a certain correlation with the data of other dimensions; multidimensional data that has no correlation with other dimensions contributes little to the optimization. The factors and eigenvalues of the target data are then determined by the minimum residual method, and the factors with eigenvalues greater than 1 are retained as target factors, which limits the number of factors.
To improve the interpretability of the factors, the target factors are rotated. Rotation methods include orthogonal rotation (e.g., varimax) and oblique rotation (e.g., oblimin). The purpose of rotation is to give each target datum a high loading on as few factors as possible, making the factors easier to interpret. The scores of the multidimensional data of the second standard data are then evaluated according to the factor loading matrix, the multidimensional data in the matching data is graded to obtain low-score data and high-score data respectively, and the low-score data is optimized through the high-score data, so that the actual data is optimized according to the historical medical data. The first, second and third preset scores are preset thresholds compared with the absolute value of the factor loading (i.e., the score) of the second standard data in the factor loading matrix. The first preset score is used to screen the low-score data in the second standard data, the second preset score is used to screen the high-score data in the matching data, and the third preset score serves as the minimum score standard for the second standard data after optimization. For example, when the absolute value of the factor loading of certain second standard data is smaller than the first preset score, that data is low-score data and needs to be optimized with the high-score data in the matching data, so as to re-mine second standard data with a higher matching degree.
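As a purely illustrative sketch of the grading step just described, the following Python snippet splits dimensions into low-score and high-score sets by comparing absolute factor loadings against the first and second preset scores; the function name, dimension names, and loading values are hypothetical:

```python
def classify_by_loading(loadings, first_score, second_score):
    """Split dimensions into low-score and high-score sets by comparing the
    absolute factor loading with the preset score thresholds.
    `loadings` maps a dimension name to its (signed) factor loading."""
    # The disclosure requires first_score <= third_score <= second_score,
    # so in particular first_score <= second_score.
    assert first_score <= second_score
    low = {d for d, l in loadings.items() if abs(l) < first_score}
    high = {d for d, l in loadings.items() if abs(l) > second_score}
    return low, high

# Hypothetical loadings from a rotated factor loading matrix.
loadings = {"user_code": 0.1, "evaluation": -0.9, "completion": 0.5}
low, high = classify_by_loading(loadings, first_score=0.3, second_score=0.8)
```

Here `low` would be the data to optimize, and `high` the matching data used to optimize it until the re-scored loading exceeds the third preset score.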
An embodiment of the present invention provides a comprehensive data analysis system, including:
The first acquisition module is used for acquiring the historical medical data of each data node, preprocessing the historical medical data and acquiring first standard data;
The first construction module is used for recursively constructing the first standard data according to a preset construction method to obtain storage data and a storage data index;
a second construction module, configured to construct, for each of the data nodes, at least one data warehouse according to the stored data and the stored data index, and use all the data warehouses as distributed data warehouses;
The second acquisition module is used for acquiring actual data, preprocessing the actual data and acquiring second standard data;
The mining module is used for carrying out data mining on all the stored data in the distributed data warehouse to obtain mining data;
the matching module is used for taking the mining data matched with the second standard data as matching data through a preset method;
and the optimizing module is used for determining a distinguishing point from the actual data according to the matching data.
Optionally, the comprehensive data analysis system further comprises an image input device, an audio input device and a display device.
An electronic device according to still another embodiment of the present invention includes a memory for storing a computer program and a processor that implements the comprehensive data analysis method described above when executing the computer program.
A further embodiment of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the comprehensive data analysis method described above.
An electronic device, which may serve as a server or a client of the present invention, will now be described as an example of a hardware device to which aspects of the present invention can be applied. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the invention described and/or claimed herein.
The electronic device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device may also be stored. The computing unit, ROM and RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored on a computer-readable storage medium, which, when executed, may comprise the steps of the method embodiments described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. In the present application, the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present application. In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units.
Although the invention is disclosed above, the scope of the invention is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention, and these changes and modifications will fall within the scope of the invention.