
Neural-network based surrogate model construction methods and applications thereof

Info

Publication number
US8065244B2
Authority
US
United States
Prior art keywords
ensemble
ensembles
data
global
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/048,045
Other versions
US20080228680A1 (en)
Inventor
Dingding Chen
Allan Zhong
Syed Hamid
Stanley Stephenson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Halliburton Energy Services Inc
Original Assignee
Halliburton Energy Services Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Halliburton Energy Services Inc
Priority to US12/048,045
Assigned to HALLIBURTON ENERGY SERVICES, INC. Assignors: STEPHENSON, STANLEY; HAMID, SYED; CHEN, DINGDING; ZHONG, ALLAN
Publication of US20080228680A1
Application granted
Publication of US8065244B2
Status: Active
Adjusted expiration

Abstract

Various neural-network based surrogate model construction methods are disclosed herein, along with various applications of such models. Designed for use when only a sparse amount of data is available (a “sparse data condition”), some embodiments of the disclosed systems and methods: create a pool of neural networks trained on a first portion of a sparse data set; generate for each of various multi-objective functions a set of neural network ensembles that minimize the multi-objective function; select a local ensemble from each set of ensembles based on data not included in said first portion of said sparse data set; and combine a subset of the local ensembles to form a global ensemble. This approach enables usage of larger candidate pools, multi-stage validation, and a comprehensive performance measure that provides more robust predictions in the voids of parameter space.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
The present application claims priority to U.S. Pat. App. 60/894,834, entitled “Neural-Network Based Surrogate Model Construction Methods and Applications Thereof” filed Mar. 14, 2007 by inventors Dingding Chen, Allan Zhong, Syed Hamid, and Stanley Stephenson, which is hereby incorporated herein by reference.
BACKGROUND
The following references are helpful to understand the present disclosure and are hereby incorporated herein by reference:
    • [1] Y. S. Ong, P. B. Nair, and A. J. Keane, "Evolutionary optimization of computationally expensive problems via surrogate modeling," AIAA Journal, vol. 41, no. 4, 2003, pp. 687-696.
    • [2] K. Hamza and K. Saitou, "Vehicle crashworthiness design via a surrogate model ensemble and a co-evolutionary genetic algorithm," Proc. of ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Long Beach, Calif., September 2005.
    • [3] S. Obayashi, D. Sasaki, Y. Takeguchi, and N. Hirose, "Multiobjective evolutionary computation for supersonic wing-shape optimization," IEEE Transactions on Evolutionary Computation, vol. 4, no. 2, 2000, pp. 182-187.
    • [4] Z. Zhou, Y. S. Ong, M. H. Nguyen, and D. Lim, "A study on polynomial regression and Gaussian process global surrogate model in hierarchical surrogate-assisted evolutionary algorithm," Proc. of IEEE Congress on Evolutionary Computation, Edinburgh, United Kingdom, September 2005.
    • [5] S. Dutta, D. Misra, R. Ganguli, B. Samanta, and S. Bandopadhyay, "A hybrid ensemble model of kriging and neural networks for ore grade estimation," International Journal of Surface Mining, Reclamation and Environment, vol. 20, no. 1, 2006, pp. 33-45.
    • [6] J. M. Twomey and A. E. Smith, "Committee networks by resampling," in Intelligent Engineering Systems through Artificial Neural Networks, C. H. Dagli, M. Akay, C. L. P. Chen, B. R. Fernandez, and J. Ghosh, Eds. ASME Press, 1995, vol. 5, pp. 153-158.
    • [7] A. Krogh and J. Vedelsby, "Neural network ensembles, cross validation, and active learning," in Advances in Neural Information Processing Systems 7, Cambridge, Mass.: MIT Press, 1995, pp. 231-238.
    • [8] G. Brown, J. Wyatt, R. Harris, and X. Yao, "Diversity creation methods: A survey and categorization," Journal of Information Fusion, vol. 6, no. 1, January 2005, pp. 5-20.
    • [9] Y. Liu and X. Yao, "Ensemble learning via negative correlation," Neural Networks, vol. 12, 1999, pp. 1399-1404.
    • [10] M. Islam and X. Yao, "A constructive algorithm for training cooperative neural network ensembles," IEEE Transactions on Neural Networks, vol. 14, no. 4, 2003, pp. 820-834.
    • [11] G. P. Coelho and F. J. Von Zuben, "The influence of the pool of candidates on the performance of selection and combination techniques in ensembles," in Proc. of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 2006, pp. 10588-10595.
    • [12] J. Torres-Sospedra, M. Fernandez-Redondo, and C. Hernandez-Espinosa, "A research on combination methods for ensembles of multilayer feedforward," Proc. of International Joint Conference on Neural Networks, 2005, pp. 1125-1130.
    • [13] D. Chen, J. A. Quirein, H. D. Smith, S. Hamid, and J. Grable, "Neural network ensemble selection using a multi-objective genetic algorithm in processing pulsed neutron data," Society of Petrophysicists and Well Log Analysts (SPWLA) 45th Annual Logging Symposium, Jun. 6-9, 2004, Noordwijk, The Netherlands.
    • [14] D. Chen, J. A. Quirein, H. Smith, S. Hamid, J. Grable, and S. Reed, "Variable input neural network ensembles in generating synthetic well logs," Proc. of International Joint Conference on Neural Networks, Vancouver, BC, Canada, 2006, pp. 2273-2280.
    • [15] Y. Jin, T. Okabe, and B. Sendhoff, "Neural network regularization and ensembling using multi-objective evolutionary algorithms," in Proc. Congress on Evolutionary Computation, Portland, Oreg., 2004, pp. 1-8.
    • [16] H. A. Abbass, "Pareto neuro-evolution: Constructing ensemble of neural networks using multi-objective optimization," in Proc. Congress on Evolutionary Computation, Canberra, Australia, 2003, pp. 2074-2080.
    • [17] A. Chandra and X. Yao, "DIVACE: Diverse and accurate ensemble learning algorithm," in The Fifth International Conference on Intelligent Data Engineering and Automated Learning, Exeter, UK, 2004, pp. 619-625.
    • [18] P. Castillo, M. Arenas, J. Merelo, V. Rivas, and G. Romero, "Multiobjective optimization of ensembles of multilayer perceptrons for pattern classification," in Parallel Problem Solving from Nature IX, Reykjavik, Iceland, 2006, pp. 453-462.
    • [19] Y. Jin, M. Olhofer, and B. Sendhoff, "A framework for evolutionary optimization with approximate fitness functions," IEEE Transactions on Evolutionary Computation, vol. 6, no. 5, 2002, pp. 481-494.
    • [20] B. S. Yang, Y. Yeun, and W. Ruy, "Managing approximation models in multiobjective optimization," Structural and Multidisciplinary Optimization, vol. 24, no. 2, 2002, pp. 141-156.
    • [21] R. Maclin and J. W. Shavlik, "Combining the predictions of multiple classifiers: Using competitive learning to initialize neural networks," in Proc. of the 14th International Joint Conference on Artificial Intelligence, Montreal, Canada, 1995, pp. 524-530.
    • [22] P. Sollich and A. Krogh, "Learning with ensembles: How over-fitting can be useful," in Advances in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds. Cambridge, Mass.: MIT Press, 1996, pp. 190-196.
    • [23] R. S. Renner, "Combining constructive neural networks for ensemble classification," in Proc. of the International Joint Conference on Intelligent Systems, Atlantic City, N.J., 2000, pp. 887-891.
Usage of high-fidelity simulation tools such as Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD), for example, has become standard practice in engineering today. However, the high computational cost of running such simulation tools is often prohibitive, preventing engineers from conducting enough simulations to discern an optimal design. To address this issue and facilitate product optimization, engineers have in some cases developed surrogate models that are computationally efficient, robust, and can be used for preliminary analysis before unleashing the high-fidelity simulation tools on selected designs. The surrogate models can be incorporated into a search engine to locate potentially feasible designs and to identify design problem areas [1-3].
Several surrogate modeling techniques (neural networks, polynomial regression, Gaussian process, etc.) are available today. The most suitable surrogate model technique will vary based on the specific problem and the engineer's experience [4-5], and the performance of the various techniques can be expected to vary significantly when only a limited amount of design data is available from which to develop the surrogate model. In neural network modeling, for example, an over-trained neural network developed under sparse data conditions will memorize the training data and fail to generalize well on unseen data. However, an under-trained neural network whose development is terminated by conventional early-stopping will perform poorly even on the given training examples. Traditionally, the prediction error of a neural network generated from sparse data has been estimated using resampling-based cross-validation (leave-one-out) and bootstrap methods [6]. When only a single neural network is employed, the estimated prediction error is usually quite high.
Compared to single neural networks, neural network ensembles offer a more robust surrogate model by combining multiple predictions from diverse member networks. Many studies in this area are related to incorporative training (ambiguity decomposition [7-8], negative correlation learning [9-10]) and selection/combination methods [11-12], but less attention has been paid to surrogate model development from sparse data.
BRIEF DESCRIPTION OF THE DRAWINGS
A better understanding of the various disclosed embodiments can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
FIG. 1 shows an illustrative surrogate model development environment;
FIG. 2 shows an illustrative expandable screen tool suitable for sand control in a well;
FIG. 3 shows some of the parameters that define the expandable pipe design space;
FIG. 4 is a flowchart of an illustrative tool construction method using a neural network based surrogate model;
FIG. 5 shows an illustrative division of a data set into subsets;
FIG. 6 shows an illustrative neural network architecture;
FIG. 7 shows an illustrative ensemble architecture;
FIG. 8 shows an illustrative determination of local ensembles;
FIGS. 9A-9B illustrate the model predictions of two local ensembles;
FIG. 10 shows an illustrative global ensemble architecture;
FIGS. 11A-11C show an illustrative global ensemble's tensile load predictions as a function of axial spacing and holes per circular row;
FIGS. 12A-12B show an illustrative global ensemble's plastic strain and tensile load predictions as a function of the hole dimensions;
FIGS. 13A-13B show plastic strain predictions as a function of axial spacing and holes per circular row for two different global ensembles; and
FIG. 14 shows an illustrative global ensemble's predictions of plastic strain vs tensile load for selected parameter values.
While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.
DETAILED DESCRIPTION
Various neural-network based surrogate model construction methods are disclosed herein, along with various applications of such surrogate models. Designed for use when only a sparse amount of data is available (a “sparse data condition”), some embodiments of the disclosed systems and methods: create a pool of neural networks trained on a portion of a sparse data set; generate for each of various multi-objective functions a set of neural network ensembles that minimize the multi-objective function; select a local ensemble from each set of ensembles based on data not included in the training process; and combine a subset of the local ensembles to form a global ensemble. This approach enables usage of larger candidate pools, multi-stage validation, and a comprehensive performance measure that provides more robust predictions in the voids of parameter space.
The set of neural network ensembles may be generated using evolutionary selection based on a multi-objective function that assigns different weighting factors to balance ensemble fidelity, complexity, and ambiguity. We call this scheme fidelity, complexity, and ambiguity based ensemble selection (FCAES). This approach is different from other reported approaches dealing with ensemble-related multi-objective learning, such as minimizing the regularized fitness function [15], minimizing prediction errors on both pure training data and noise-added data [16], optimizing ensemble prediction accuracy and member network diversity [17], and optimizing the type I (false positive) and type II (false negative) errors for pattern classification [18]. Our sparse-data ensemble construction approach is better characterized as an evolutionary selection/combination scheme than as a learning/training approach like those of the other references. The combination of local ensembles is beneficial because the local ensembles provide multiple solutions with similar results over known data samples, but significantly different extrapolations over large voids outside of the available data. The global ensemble is generally capable of providing not only an improved prediction over the available validation data, but also a better generalization throughout the data voids. Compared with other methods in the literature for managing approximation models and improving model fidelity in the trust region through evolutionary optimization [19-20], our approach has the potential to extend the region of robust prediction within a low-complexity framework.
Turning now to the figures, FIG. 1 shows an illustrative surrogate model development environment. In the illustrative environment, an engineer is given responsibility for developing or improving the performance of a tool for use in a completion system 102. The engineer's tools include a computer 104 and software (represented by removable storage media 106), which the engineer controls via one or more input devices 108 and output devices 110. The software is stored in the computer's internal memory for execution by one or more processors. The software configures the processor to accept commands and data from the engineer, to process the data in accordance with one or more of the methods disclosed below, and to responsively provide predictions for the performance of the tool being developed or improved.
Upon receiving the predictions via output device 110, the engineer can select certain tool parameters for further analysis and/or implementation. In alternative system embodiments, the predictions are subject to further processing and/or verification before being provided in a perceptible form to a user. For example, the predictions of a surrogate model can be displayed graphically to the user, but they might alternatively be systematically searched internally by the computer to identify one or more optimal regions for verification by a high-fidelity simulation model. When verified, the optimal solution could be displayed to a user, or alternatively a subsequent process could use the solution to determine a useful and tangible effect. As just one example, the optimal solution may indicate a particular tool configuration that the computer then generates using a rapid prototyping machine (sometimes referred to as a '3D printer'). As another example, the computer may use the predictions of a surrogate model to generate perceptible signals for controlling or communicating with an external system.
An expandable pipe is an example of a tool for use in a borehole after it has been drilled. The expandable pipe is a part of a subterranean well screen that is useful for sand control in oil and gas production. Typical well screens include a tubular base pipe with a series of rows of holes perforated through a sidewall of the base pipe, and a filtering media disposed externally on the base pipe. Drilling techniques and equipment exist today to expand a screen with a fixed cone methodology in the well to place the screen in intimate contact with the borehole wall. Modern well completion systems often install expandable screens to reduce the mobility of the sand within the formation, minimize the occurrence of borehole collapse, facilitate production control, and provide zonal isolation with increased internal space for the tool string.
FIG. 2 illustrates the operation of an expandable pipe. An expandable pipe 202 is inserted into a borehole 204 and expanded to the borehole diameter 206 via motion of a hydraulically powered expander 208 to produce a perforated screen 210 seated against the wall of the borehole. Such products are available commercially under the brand names PoroFlex, VersaFlex, and SSSV.
One crucial risk in expandable screen design is expansion failure of the base pipe, which may result from improper geometric parameter and material selection or other manufacturing factors. In contrast, an optimal design allows a high expansion rate while maintaining the tensile and collapse strength of the perforated base pipe. Conventional engineering approaches rely primarily on high-fidelity finite element analysis (FEA) applied to selected hole patterns on the base pipe. However, since the computational cost of FEA modeling is high, the final design might be sub-optimal due to the limited number of simulations. Thus, expandable screen design will be used as a case study and discussed as an aid to understanding the surrogate model design methods disclosed herein.
FIG. 3 shows geometric parameters for expandable base pipe. The hole is generally elliptical, with a and b the major and minor axes respectively. (Note, however, that the model does not require the "major" axis a to be greater than the "minor" axis b. Alternative terminology might be "longitudinal" and "circumferential" axes a and b.) The parameter s is the axial spacing between the circular rows, and HPC is the number of holes per circular row. The initial pipe thickness and diameter are fixed constants. The selected range of geometric parameters is 0.25-0.60 inches for a and b, 1-4 inches for s, and 3-18 holes for HPC.
FIG. 4 is a flowchart of an illustrative tool design method. Beginning in block 402, a model is constructed to predict the tool's performance. For the present case study, the existing FEA model serves this role. The FEA model takes the four parameters in the design space as input variables, and performs a simulation to measure the resulting plastic strain and tensile load at the given expansion rate.
In block 404, the engineer determines whether this model is too complex, e.g., whether an excessive amount of time will be required to fully explore the solution space to identify an optimal solution. If the model is not too complex, a computer simulates the tool's performance in block 406 for different parameter values until an optimal solution is identified in block 408. The computer displays the optimal solution to the engineer in block 410 for use in implementing the tool. In the present case study, the optimal solution will be one or more values of parameters a, b, s, and HPC that provide a maximum tensile load while minimizing plastic strain, thereby indicating the perforated screen configuration having a minimum chance of expansion failure.
Depending on the complexity of the model, the size of the parameter search space, and the step sizes for each parameter, the engineer may determine in block 404 that a full exploration of the solution space with the high-fidelity model is infeasible. In that event, a surrogate model construction process 412 is performed to identify a much smaller subspace for usage in blocks 406 and 408. In some embodiments, the subspace consists of the parameter values identified as optimal by the surrogate model, plus one step size in each direction to verify that the solution is at least a locally optimum value.
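The verification subspace described above can be sketched as follows: the surrogate-selected optimum plus one step size in each direction for the four parameters a, b, s, and HPC. All numeric values here are illustrative placeholders chosen within the stated design ranges, not results from the case study.

```python
import itertools

# Hypothetical surrogate-selected optimum and per-parameter step sizes.
best = {"a": 0.45, "b": 0.40, "s": 2.0, "HPC": 12}
step = {"a": 0.05, "b": 0.05, "s": 0.5, "HPC": 1}

# For each parameter, take (best - step, best, best + step), then form the
# Cartesian product: a small grid for the high-fidelity FEA model to check.
grid = [(best[p] - step[p], best[p], best[p] + step[p]) for p in best]
subspace = [dict(zip(best, combo)) for combo in itertools.product(*grid)]
print(len(subspace))  # 3**4 = 81 candidate points
```

Verifying all 81 points with FEA is far cheaper than exploring the full design space, which is the point of handing blocks 406 and 408 this reduced subspace.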
Process 412 begins with the engineer obtaining a sparse data set from the high-fidelity tool model. In the illustrative case study, results from a total of 62 FEA simulations were obtained for use in developing a surrogate model. These 62 data points were partitioned in block 416 into two disjoint data sets. About 10-25% of the data points are excluded from the primary data set and used to form the secondary data set. In the present case study, 52 data points are put into the primary data set, and 10 data points are put into the secondary data set. The primary data set is then used to form multiple training sets, using a "leave-H-out" approach, meaning that a different selection of H data points is left out of each training set. In the present case, eight training sets were used, each having 46 data points (H = 6).
FIG. 5 shows a data set tree illustrating this process of subdividing the data set. Set 502 includes 62 points, each representing a six-element vector (with four elements for inputs a, b, s, and HPC, and two elements for the plastic strain and tensile load outputs). Secondary data set 504 contains 10 data points that have been removed from set 502 to form primary data set 506. Eight different training data sets 508-512 are then obtained by omitting a different random selection of six data points from the primary data set.
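The partitioning just described can be sketched in a few lines. The function name, seed, and random selection strategy are illustrative assumptions; the patent does not specify how the excluded points are chosen.

```python
import random

def partition(samples, n_secondary=10, n_training_sets=8, leave_out=6, seed=0):
    """Split samples into primary/secondary sets, then build leave-H-out
    training sets from the primary set (H = leave_out)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    secondary = shuffled[:n_secondary]          # held out for ensemble selection
    primary = shuffled[n_secondary:]            # used for network training
    training_sets = []
    for _ in range(n_training_sets):
        held_out = set(rng.sample(range(len(primary)), leave_out))
        training_sets.append([p for i, p in enumerate(primary)
                              if i not in held_out])
    return primary, secondary, training_sets

data = list(range(62))  # stand-ins for the six-element sample vectors
primary, secondary, train_sets = partition(data)
print(len(primary), len(secondary), [len(t) for t in train_sets])
```

With the case-study numbers this yields a 52-point primary set, a 10-point secondary set, and eight 46-point training sets, matching the counts above.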
Returning to FIG. 4, the computer selects parameters to form a pool of neural network candidates in block 418. The size of the pool may be conveniently set to 32, 64, or 128 neural networks because these numbers can be conveniently represented using binary numbers in the evolutionary selection algorithm. The purpose of selecting different training parameters for each neural network is to create diversity in the candidate pool. Many suitable techniques are available today for training multi-layer feed-forward networks with adequate diversity [21-23]. Illustrative training variations include: varying the number of hidden nodes in the neural network, varying the number of hidden layers in the network, varying the training data set, randomly varying the starting values of neural network coefficients, and randomly varying noise added to the training inputs and/or outputs.
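One way to sketch such diversity creation is to cross network structures with training data sets, which with four structures and eight data sets gives 32 candidates, one of the pool sizes mentioned above. The field names, layer sizes, and noise level below are illustrative assumptions, not values from the patent.

```python
import itertools

# Hypothetical axes of variation for the candidate pool.
hidden2_sizes = [3, 5, 7, 10]          # varied second-hidden-layer size
training_set_ids = list("ABCDEFGH")    # the eight leave-H-out training sets

# Each spec also gets a distinct weight-init seed and an input-noise level,
# two of the other diversity sources listed above.
pool_specs = [{"hidden2": h, "train_set": t, "seed": i, "noise_std": 0.01}
              for i, (h, t) in enumerate(
                  itertools.product(hidden2_sizes, training_set_ids))]
print(len(pool_specs))  # 4 structures x 8 data sets = 32 candidates
```

Each spec would then drive one training run, so every candidate network in the pool sees a different (structure, data, initialization) combination.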
FIG. 6 shows an illustrative neural network architecture for the perforated screen example. The illustrative network includes an input layer with four nodes (one for each input a, b, s, and HPC), a first layer with five hidden nodes, a second layer with a variable number of hidden nodes, and an output layer with two nodes (one for each output Plastic Strain and Tensile Load). The input nodes simply reproduce their input values. The output nodes produce a linear combination of their inputs. All of the hidden nodes output the hyperbolic tangent of a weighted sum of their inputs and an adjustable offset. Some neural network embodiments have only one hidden layer. Note that each neural network in the candidate pool accepts the same inputs and provides predictions for the same outputs. In one experiment for the present case study, four different neural network structures were used in combination with eight different training data sets, for a total of 32 neural network candidates. In another experiment, eight different neural network structures were used in combination with sixteen different data sets for a total of 128 neural network candidates.
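A minimal forward-pass sketch of this architecture follows: four inputs (a, b, s, HPC), a five-node tanh hidden layer, a second tanh hidden layer of variable size, and two linear outputs (plastic strain, tensile load). The weights here are random placeholders, since the patent trains each network on its own data set; the second-layer size of 7 is an arbitrary choice for illustration.

```python
import numpy as np

def make_network(n_hidden2, rng):
    """Random (untrained) weights for layers 4 -> 5 -> n_hidden2 -> 2."""
    sizes = [4, 5, n_hidden2, 2]
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    h = np.asarray(x, dtype=float)
    *hidden, (W_out, b_out) = net
    for W, b in hidden:
        h = np.tanh(h @ W + b)   # hidden nodes: tanh of weighted sum + offset
    return h @ W_out + b_out     # output nodes: linear combination

rng = np.random.default_rng(1)
net = make_network(n_hidden2=7, rng=rng)
y = forward(net, [0.5, 0.5, 2.0, 12])  # -> [plastic_strain, tensile_load]
print(y.shape)  # (2,)
```

Because every candidate shares the same input and output layers, any subset of trained networks built this way can be combined into an ensemble.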
Returning again to FIG. 4, the computer trains a set of neural networks in block 420, varying the training parameters for each network. In each case, the neural network is given adequate training time, with appropriate control on training epochs and network complexity. By varying the training parameters, the computer obtains a pool of unique neural networks that each perform adequately over their respective training sets.
In block 422, the computer formulates a diverse set of evolutionary selection parameters to form a pool of candidate ensembles. As with the pool of candidate networks, it is desirable to provide a pool of candidate ensembles with sufficient diversity. FIG. 7 shows an illustrative neural network ensemble 702 formed by selecting multiple neural networks from candidate pool 704. In the illustrated example, the neural networks in pool 704 are indexed by training data set (A-P) and by number of hidden nodes in the second hidden layer (3-10). An evolutionary selection algorithm, represented by arrow 706, determines the combination of neural networks 712-716 that form ensemble 702. The inputs to each neural network ("member") of the ensemble are the same, and the outputs of each member are combined as represented by blocks 718 and 720. Usually, blocks 718 and 720 average the corresponding outputs of each neural network 712-716, but in some embodiments a weighted average is taken. However, other statistical combination techniques can be employed, including root mean square, inverse mean inverse, average after excluding maximum and minimum values, etc. The outputs of blocks 718 and 720 are the predictions of the ensemble responsive to the inputs.
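The output-combination step can be sketched as follows, using placeholder member predictions rather than real network outputs. The simple average is shown alongside two of the alternative statistics mentioned (root mean square, inverse mean inverse).

```python
import numpy as np

# Placeholder predictions from 5 member networks (columns) for the two
# outputs (rows): plastic strain and tensile load.
member_preds = np.array([[0.18, 0.22, 0.20, 0.19, 0.21],   # plastic strain
                         [95.0, 101.0, 98.0, 97.0, 99.0]])  # tensile load

ensemble_mean = member_preds.mean(axis=1)                  # usual combiner
ensemble_rms = np.sqrt((member_preds ** 2).mean(axis=1))   # root mean square
ensemble_imi = 1.0 / (1.0 / member_preds).mean(axis=1)     # inverse mean inverse
print(ensemble_mean)  # approximately [0.2, 98.0]
```

For well-behaved positive predictions the three combiners give similar values; the mean remains the default because it corresponds directly to the ambiguity decomposition used later in the FCAES objective.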
In some method embodiments, the computer uses a fidelity, complexity, and ambiguity evolutionary selection (FCAES) algorithm to create many candidate ensembles with fixed size (i.e., each candidate ensemble includes the same number of neural networks). To achieve diversity, the computer assigns different combinations of weighting factors (as explained further below) for ensemble validation error, ensemble complexity, and ensemble negative correlation or ambiguity. This variation in weighting factors is one distinguishing factor over previous studies [13-14] in which the weighting factors were fixed. The computer then applies the different evolutionary selection parameters to construct the candidate ensemble pool in block 424.
The process carried out by the computer in block 424 (FIG. 4) is now explained in detail. The computer selects neural networks from the candidate pool by multiple runs of a genetic algorithm to form a pool of local neural network ensembles. The multi-objective performance function to be minimized during evolutionary computation is a weighted form of three measurements:
$f = k_1 \times E_{MSE} + k_2 \times S_{SW} \pm k_3 \times \bar{P}$  (1)
In equation (1), EMSE is the ensemble mean-squared error measured on the validation data set (in the present case study, the validation data set is the primary data set 506), SSW is the ensemble sum-squared-weights averaged over the networks in the ensemble (the "member networks"), $\bar{P}$ is the ensemble ambiguity in batch-mode form (as defined further below), and k1, k2, and k3 are normalized coefficients with summation k1 + k2 + k3 = 1.
The ensemble batch-mode ambiguity is an extension of Krogh and Vedelsby's network ambiguity [7] for a single data point:

$P(n) = \frac{1}{M}\sum_{i}\left(F_i(n) - \bar{F}(n)\right)^2$  (2)

where $F_i(n)$ and $\bar{F}(n)$ are the output of the i-th individual neural network and the output of the ensemble, respectively, for the n-th sample, and $P(n)$ is averaged over the M member networks. For a multi-output neural network, the batch-mode ensemble ambiguity is obtained by averaging $P(n)$ over the number of samples c and the number of outputs r:

$\bar{P} = \frac{1}{c \times r}\sum_{n=1}^{c}\sum_{k=1}^{r} P(n,k)$  (3)
Note that the ensemble ambiguity defined in equation (3) and the ensemble negative correlation described in [14] are the same in magnitude but different in sign. The multi-objective function used in FCAES provides a comprehensive measure of ensemble prediction accuracy on the given data (EMSE), ensemble complexity (SSW), and ensemble diversity ($\bar{P}$). Increasing k1 puts more focus on ensemble prediction accuracy over the given data set. The coefficient k2 is an ensemble regularization parameter; although regularization is not an explicit concern when training candidate networks in this method, it can provide additional controllability and flexibility in creating candidate ensembles. A minus sign before k3 encourages diversity among the member networks, while a plus sign penalizes diversity. Under sparse data conditions, it is preferred to run FCAES repeatedly with different performance-function weights. Other settings employed for running FCAES in the present case study include a fixed ensemble size (5 member networks), population size (60 ensembles), generation number (30 generations), and eight different sets of coefficients k1, k2, and k3 for evaluating the weighted performance function. (To test the sensitivity to each of these values, multiple experiments were also run with different values in this case study.) After each run, the ranked ensembles from the final generation are saved for further processing.
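The objective of equations (1)-(3) can be sketched numerically as follows, under the assumption that member outputs F_i(n,k) are stored as an array of shape (M members, c samples, r outputs); the function name and array layout are illustrative.

```python
import numpy as np

def fcaes_fitness(member_out, targets, member_ssw, k1, k2, k3, sign=-1.0):
    """Weighted FCAES objective of eq. (1); sign=-1 rewards ambiguity."""
    F_bar = member_out.mean(axis=0)                 # ensemble output F(n,k)
    emse = ((F_bar - targets) ** 2).mean()          # E_MSE on validation data
    ssw = float(np.mean(member_ssw))                # S_SW averaged over members
    ambiguity = ((member_out - F_bar) ** 2).mean()  # batch-mode P, eqs. (2)-(3)
    return k1 * emse + k2 * ssw + sign * k3 * ambiguity

# Tiny illustration: 5 members, 4 samples, 2 outputs, placeholder numbers.
rng = np.random.default_rng(0)
member_out = rng.standard_normal((5, 4, 2))
targets = np.zeros((4, 2))
f = fcaes_fitness(member_out, targets, np.ones(5), 0.7, 0.1, 0.2)
```

Note the check this affords: if all members are identical, the ambiguity term is exactly zero and the fitness reduces to the weighted EMSE and SSW terms, consistent with equation (2).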
To this point (block 424 of FIG. 4), the neural network training and ensemble selection have been performed using the primary data set. In block 426, the secondary data set is used to select local ensembles from the pool of neural network ensembles developed in block 424. As shown in FIG. 8, the ensemble candidate pool 802 includes the final generations 804-808 from each setting of the weighting coefficients k1, k2, and k3. A pool of local ensembles 810 is formed by selecting one local ensemble from each final generation 804-808. That is, each local ensemble is selected from a group of candidate ensembles derived based on a given set of parameters k1, k2, and k3 during evolutionary selection. Thus the total number of local ensembles in pool 810 will equal the number of settings of k1, k2, and k3 at the previous stage.
To select each local ensemble, the mean-squared error of the ensemble predictions for the secondary data set is measured for each of the ensembles in each final generation, and the best-performing ensemble in each group 804-808 is selected as the local ensemble to represent that group. Since different objective functions and data sets are used in blocks 424 and 426 (FIG. 4), the ensemble which gives the minimum EMSE on the primary data set may not be the same one which minimizes the prediction error on the secondary data set. It is wise to monitor the EMSE on both data sets during local ensemble selection to evaluate whether the process should be re-started with a different primary-secondary data set division and/or different multi-objective weighting coefficients. The local ensembles determined in this way usually provide adequate diversity for global ensemble construction. In alternative method embodiments, different performance criteria over the secondary data set are used to select the local ensembles. For example, ensemble ambiguity may be desirable and hence included in the ensemble performance function.
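The per-generation selection step can be sketched as below, with candidate ensembles modeled simply as callables that return predictions; all numbers are toy placeholders.

```python
import numpy as np

def select_local_ensembles(final_generations, secondary_x, secondary_y):
    """Pick the lowest-MSE candidate from each final GA generation
    (one generation per k1/k2/k3 setting), per the scheme above."""
    chosen = []
    for generation in final_generations:
        mses = [np.mean((candidate(secondary_x) - secondary_y) ** 2)
                for candidate in generation]
        chosen.append(generation[int(np.argmin(mses))])
    return chosen

# Toy illustration: two "generations" of candidate ensembles as callables,
# scored against a 5-point secondary set with targets y = x.
sx = np.linspace(0.0, 1.0, 5)
sy = sx.copy()
generations = [[lambda x: 2 * x, lambda x: x, lambda x: 0 * x],
               [lambda x: x + 1.0, lambda x: x + 0.1]]
local_ensembles = select_local_ensembles(generations, sx, sy)
```

One local ensemble comes out of each generation, so the number of local ensembles equals the number of weighting-coefficient settings, matching pool 810 above.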
In the present case study, the candidate ensemble selection was performed using the FCAES algorithm. In one experiment, the objective function (see equation (1)) was used with five different sets of weighting coefficients k1, k2, and k3. After 30 generations of evolutionary selection under each version of the objective function, the final generation (32 ensembles with varying membership) was kept to form the candidate ensemble pool. The computer then selected a local ensemble for each setting of the objective function, based on the candidates' performance on the secondary data set. Table 1 summarizes the characteristics of the local ensembles. The index range of the member networks is from 0 to 31 (32 network candidates), and the validation error is calculated as the percentage absolute difference between the ensemble output (each ensemble outputs the average of the outputs of its five member networks) and the FEA-simulated output.
TABLE 1
SELECTED LOCAL ENSEMBLES AND PREDICTION ERROR ON THE SECONDARY DATA SET

  MOF Weighting        Local Ensemble     Strain Error   Load Error
  Coefficients         (member index)     (%)            (%)
  K = [1.0, 0.0, 0.0]  [7 10 18 19 23]    9.73           5.94
  K = [0.8, 0.1, 0.1]  [5 19 29 29 30]    8.51           4.97
  K = [0.7, 0.1, 0.2]  [7 18 23 27 28]    9.54           4.95
  K = [0.7, 0.2, 0.1]  [7 10 14 19 30]    7.67           6.95
  K = [0.6, 0.2, 0.2]  [14 18 19 19 27]   8.77           5.18
One problem associated with sparse data modeling is the existence of a large number of voids in the parameter space. We can see from Table 1 that the local ensembles' prediction error on both plastic strain and tensile load is smaller than 10%, which is well below the design tolerance. However, simulations applied to the voids of the data space show that the variance of prediction among the local ensembles is still significant. For example, FIGS. 9A and 9B respectively display the tensile load predicted by two different local ensembles, assuming fixed dimensions for circular holes 0.5 inch in diameter. The tensile load predictions are shown as a function of axial spacing and the number of holes per circular row. Significant differences can be observed in the shapes of the resulting prediction surfaces.
Though the local ensembles each provide similar results over the given validation data space (the secondary data set), they may still give significantly different predictions in the large voids beyond the available data, a consequence of the FCAES approach's use of differently weighted fitness functions in different runs of the evolutionary selection algorithm. A global ensemble is helpful in reducing this local-ensemble-related variance and improving prediction over the whole parameter space. Accordingly, in block 428 of FIG. 4, local ensembles are combined to form a global ensemble that is generally capable of providing not only improved prediction over the available validation data, but also better generalization over the voids, as can be judged from either visual inspection or an objective ambiguity measurement. Although there is no guarantee that global smoothing is the optimal method to reduce the prediction uncertainty on unseen new data, experience suggests it is usually adequate. The separation of local ensemble selection and global ensemble determination also serves to reduce the cost of the evolutionary computation.
The global ensemble can be constructed by combining several local ensembles from ensemble pool 810 into a larger ensemble 1002 as shown in FIG. 10. Arrow 1006 represents the use of a selection technique such as graphical inspection, ambiguity selection, or FCAES. The global ensemble 1002 distributes the input values a, b, s, and HPC to each of the local ensembles 1012-1016 (and hence to each of the member networks in those local ensembles), and combines the respective Plastic Strain and Tensile Load outputs from the local ensembles using blocks 1018 and 1020 similar to blocks 718 and 720 described above.
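The fan-out-and-combine structure of FIG. 10 can be sketched as a simple averaging combiner; representing member networks as callables is an assumption for illustration:

```python
import numpy as np

def global_predict(local_ensembles, x):
    """Feed the inputs (a, b, s, HPC) to every member network of every local
    ensemble and average the outputs, mirroring combiner blocks 1018/1020.
    Each local ensemble is a list of member-network callables; averaging the
    local-ensemble outputs equals averaging all member networks when the
    local ensembles are the same size."""
    local_outputs = [np.mean([net(x) for net in ensemble], axis=0)
                     for ensemble in local_ensembles]
    return np.mean(local_outputs, axis=0)
```

In the case study, five local ensembles of five networks each are combined four at a time, so the global ensemble averages 20 member networks.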
To determine the best candidate local ensembles to be members of the global ensemble, the given primary and secondary data sets are still used as the evaluation basis, supplemented by other virtual validation measures to aid decision making. In one experiment, combinations of four local ensembles (selected from the pool of five ensembles given in Table 1) were evaluated using graphical inspection to select the global ensemble that provides the smoothest and most consistent prediction in the data space voids. Many predictions can be viewed in 2D, 3D, or even 4D graphics for anomaly identification. A good global ensemble should produce reasonably smooth predictions at both interpolated and extrapolated points of interest. The user may examine the predictions in terms of behaviors expected from experience or underlying principles. Graphical inspection can also help justify the need to acquire new training and testing data if simulation results are contrary to the designer's anticipation. (Where graphical inspection is impractical, a gradient or other discontinuity measure can be used to quantify prediction smoothness or consistency.)
An alternative virtual validation measure employed in this case study is ensemble ambiguity. The sample network ambiguity defined in equation (2) can be calculated without knowledge of the true output values: it simply measures the degree by which the member network predictions deviate from the (global) ensemble's prediction. Thus ensemble ambiguity can be used to evaluate the performance of the global ensemble when no additional testing data is available. By choosing possible inputs of interest in the voids of the parameter space, different global ensembles having similar prediction errors over the entire data set 502 can be compared on the basis of their global ensemble ambiguity. A global ensemble with higher ambiguity, indicating stronger negative correlation among the member networks, is a promising candidate. However, many exceptions exist, and other decision-making methods can be considered.
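Because ambiguity needs only the member predictions, it can be evaluated at arbitrary void points. A minimal sketch, assuming equation (2) is a mean squared deviation (the patent's exact normalization may differ):

```python
import numpy as np

def ensemble_ambiguity(member_preds, ensemble_pred=None):
    """Average squared deviation of member-network predictions from the
    ensemble prediction; no true output values are required, so it can be
    computed on unlabeled points in the voids of the parameter space."""
    member_preds = np.asarray(member_preds)
    if ensemble_pred is None:
        ensemble_pred = member_preds.mean(axis=0)   # simple-average ensemble
    return float(np.mean((member_preds - ensemble_pred) ** 2))
```

Identical member predictions give zero ambiguity; the more the members disagree at a point, the larger the value, which is what makes it usable as a virtual validation measure in data voids.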
Returning to the case study: the local ensembles from Table 1 were combined in groups of four to construct large global ensembles (20 member networks each) to reduce prediction variance. The five global ensemble candidates, covering all possible four-member combinations, are given in Table 2. Table 2 also presents the simulated ensemble network ambiguity (NA) on four data sets, each spanning a subspace of 1040 samples for a fixed hole size (0.325, 0.375, 0.45, and 0.5 inches in diameter). The last two columns are the calculated NA on all FEA examples (data set 502) and the overall validation error measured on the primary and secondary data sets.
TABLE 2
SIMULATED NETWORK AMBIGUITY IN VOIDS AND ENSEMBLE VALIDATION ERROR

  IndNum   Sim1NA    Sim2NA    Sim3NA    Sim4NA    Sim5NA      ValErr
           (h0325)   (h0375)   (h0450)   (h0500)   (N1 + N2)   (%)
  GNNE1    1.214     0.801     0.396     0.228     0.027       6.29
  GNNE2    1.162     0.753     0.357     0.214     0.022       6.20
  GNNE3    1.414     0.904     0.411     0.233     0.026       6.10
  GNNE4    1.334     0.855     0.390     0.224     0.023       6.27
  GNNE5    1.327     0.860     0.396     0.227     0.025       6.21
Table 2 reveals that the overall validation error measured on the given FEA examples (data set 502) is relatively insensitive to the choice of global ensemble, which demonstrates the robustness of using a large ensemble for sparse data modeling applications. Table 2 also reveals that the NA measured in the voids (first four columns) has a significantly larger magnitude than that measured on the primary and secondary data sets. This explains why over-fitting in training individual neural networks can be useful under sparse data conditions when a surrogate model ensemble is used.
We also note that the variance of NA between the data sets for different hole sizes is much larger than the variance within each data set, reflecting different levels of prediction uncertainty over the data space due to the training data distribution. Since certain regions may be more important than others, model refinement can be efficiently achieved by adding new FEA data points to fill those regions exhibiting large NA. On the other hand, within each simulated data set the ensemble exhibiting larger NA is often a more effective variance reducer. In Table 2, the ensemble GNNE3 produces consistently higher NA than others over the data space, yet its prediction errors on FEA examples and additional testing points are also the smallest.
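The decision rule suggested by Table 2 (among candidates with similar validation error, prefer the one with consistently higher NA in the voids) can be sketched as below. The function name, the candidate representation, and the 5% error tolerance are assumptions for illustration:

```python
def pick_global_ensemble(candidates):
    """candidates maps a name to (mean_void_na, validation_error_pct).
    Keep candidates whose validation error is within 5% (relative) of the
    best, then choose the one with the highest mean network ambiguity in
    the voids, since higher NA often marks a better variance reducer."""
    best_err = min(err for _, err in candidates.values())
    short_list = {name: v for name, v in candidates.items()
                  if v[1] <= best_err * 1.05}
    return max(short_list, key=lambda name: short_list[name][0])
```

Applied to rough averages of the Table 2 void columns, this rule picks GNNE3, matching the selection described next. As the text cautions, exceptions exist, and the winner may have only medium or low NA on some problems.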
In this simplified experiment, it is not difficult to select the best global ensemble (GNNE3). FIGS. 11A-11C display the predictions of this global ensemble for circular holes of diameter 0.5, 0.45, and 0.325 inches, respectively. In each case, the predicted tensile load as a function of axial spacing and holes per row exhibits a reasonable extrapolation. FIGS. 12A-12B display the selected global ensemble's plastic strain and tensile load predictions as a function of hole dimensions for fixed values of s and HPC. We can see that, again, the simulated output transitions reasonably over the major and minor axes of the hole. However, in other cases each global ensemble candidate could have high NA on different parts of the data set, and the winner could have medium or low NA depending on the problem.
Three additional experiments were conducted in this study to empirically investigate the effects of changing the objective function, the resampling, the data partitioning, and the size of the candidate network pool. Partial results are summarized in Table 3.
TABLE 3
EMPIRICAL COMPARISON OF NA IN VOIDS AND ENSEMBLE VALIDATION ERROR

  IndNum   Exp1NA    Exp2NA    Exp3NA    Exp1Err   Exp2Err   Exp3Err
           (h0325)   (h0325)   (h0325)   (%)       (%)       (%)
  GNNE1    0.951     1.819     1.087     6.31      6.05      4.88
  GNNE2    1.061     1.934     0.761     6.38      6.11      4.86
  GNNE3    1.050     1.778     1.000     6.30      6.00      4.86
  GNNE4    1.026     1.572     1.088     6.43      5.86      5.06
  GNNE5    1.050     2.154     1.036     6.43      6.05      5.04
The first experiment listed in Table 3 was almost the same as that previously described, except that a minus sign was used before the NA term in the FCAES objective function to encourage member network diversity. (The second and third experiments also used a minus sign.) In the second experiment, the partitioning of the primary and secondary data sets was the same, but no resampling was used (meaning that the training set for all 32 neural networks in the candidate pool was the same). In the third experiment, the primary data set included all 62 data samples and the secondary data set included 3 data samples from later FEA simulations. Resampling was applied 16 times, with 6 samples excluded from the primary data set to form each training set, and 128 candidate networks with eight variations in structure were created. In each experiment, five local ensembles were selected and combined to form five global ensembles using the proposed method and procedures. The NA for each experiment in Table 3 was calculated on the same void subspace, i.e., the subspace with hole diameter equal to 0.325 inches, and the ensemble validation error was tested on the same 62 samples. The same five weighting coefficient settings for the objective function were used in each experiment.
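The resampling scheme of the third experiment (each training set is the primary set minus a few excluded samples) can be sketched as follows; random exclusion and the seed are assumptions, since the patent does not state how the excluded samples were chosen:

```python
import random

def resample_training_sets(primary, n_sets=16, n_excluded=6, seed=0):
    """Form n_sets training sets, each omitting n_excluded randomly chosen
    samples from the primary data set. The resulting sets overlap heavily
    but differ enough to train diverse candidate networks."""
    rng = random.Random(seed)
    training_sets = []
    for _ in range(n_sets):
        excluded = set(rng.sample(range(len(primary)), n_excluded))
        training_sets.append([s for i, s in enumerate(primary)
                              if i not in excluded])
    return training_sets
```

With the 62-sample primary set of Exp. 3, this yields 16 training sets of 56 samples each; pairing them with eight network-structure variations gives the 128 candidate networks mentioned above.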
We can see from the validation error in Table 3 that training candidate NNs without resampling (Exp. 2) can yield similar ensemble prediction accuracy on the given data set 502. However, the NA values indicate that the member networks' predictions on the distant voids have greater deviations compared to training with resampling, which might be advantageous.
As might be expected, increasing the number of network structure variations, the number of data points, and the number of training data sets, in combination with using a larger neural network candidate pool (Exp. 3), can improve the ensemble prediction over the training samples, and probably over the extended neighborhood as well. However, since the measured NA in the voids was close in amplitude between Exp. 1 and Exp. 3, the ensemble predictions on the distant voids may have the same degree of uncertainty.
FIGS. 13A-13B show a simulated prediction surface from Exp. 2 and Exp. 3, respectively. Each surface shows the plastic strain predictions of the selected global ensemble as a function of axial spacing and holes per row, assuming a constant hole diameter of 0.45 inches. The predicted plastic strain surfaces look similar even when resampling is omitted as in Exp. 2. This comparison suggests that while it is beneficial to train diverse candidate networks to improve sparse data modeling, we may not need to overcomplicate the process by forming a large number of resampled training data sets to create a large candidate pool.
Changing the weighting factors of the FCAES objective function has a strong effect on the member network distribution. Although the same network is allowed to appear more than once in an ensemble, more diverse networks are selected by choosing a negative k3. However, as shown in Table 2 (using positive k3) and Table 3 (using negative k3), the ensemble performance is not sensitive to the particular setting of weighting coefficients once multiple settings and a larger global ensemble size are used.
The global ensemble that is selected in block 428 of FIG. 4 can then be used as a surrogate model in block 430 to conduct a search of the parameter space for an optimal solution. The solutions can be evaluated for optimality in block 432 until a likely candidate or range of parameter values is identified. In the present case study, the base pipe design for an expandable screen should demonstrate (after expansion) a plastic strain below an upper limit and a tensile load above a lower limit (e.g., 63% and 550 kilo-pounds force). Given the four-input design space, we conducted an exhaustive search over a practical range of each input and calculated the strain and load outputs using the developed surrogate model ensembles. Combined with other simulation results, we found many promising solutions under manufacturing constraints. FIG. 14 depicts selected simulation results in parameter space. Although probably not perfect due to data limitations, the global ensemble performed reasonably well in generating robust predictions over a wide range of the parameter space.
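The exhaustive search over the four-input design space can be sketched as a constrained grid sweep. The strain and load limits echo the example values in the text (63% strain, 550 kilo-pounds force); the discretized ranges and the `surrogate` callable returning (strain, load) are assumptions:

```python
import itertools

def grid_search(surrogate, ranges, strain_max=63.0, load_min=550.0):
    """Sweep a discretized design space (e.g. a, b, s, HPC), keeping
    designs whose surrogate-predicted plastic strain (%) is below the
    upper limit and tensile load (klbf) is above the lower limit."""
    feasible = []
    for point in itertools.product(*ranges):
        strain, load = surrogate(point)
        if strain < strain_max and load > load_min:
            feasible.append((point, strain, load))
    return feasible
```

Candidate designs surviving this filter would then be re-checked with the high-fidelity FEA model, as block 406 describes, so only a handful of expensive simulations are needed.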
Having identified selected parameter ranges, the computer uses the high-fidelity model in block 406 to refine the estimated performance of the tool and verify the optimality of the selected solution. In this manner, the computational requirements involved in selecting an optimal tool design can be greatly reduced.
The expandable pipe case study presented herein was used to construct a surrogate model in the form of a neural network ensemble trained over a sparse data set obtained from finite element analysis simulations. In addition to tool design optimization, the disclosed methods also have applications in material characterization, tool testing, data pattern recognition, and many other fields of endeavor. For example, adaptive control systems typically require feedback with minimal delay, implying a limit on the complexity of models employed in the feedback path. Surrogate models are hence very desirable where the controlled system is unduly complex, and the data set available for developing such models may be sparse where such systems are subject to significant revisions or evolving behavior.
As another example, many medical treatment strategies for disease may employ multiple components, and only a limited amount of information may be available regarding the effectiveness of each component alone or in combination with the other components. In such situations, a surrogate model may be a feasible alternative to massive trial programs that cannot fully explore the limits of the data space due to the risks involved to human lives.
Yet another example is the determination of properties of new materials under difficult-to-simulate conditions such as ultra-high strain rates. The variables underlying such properties may include material constituents, erosion, wear, and fatigue.
Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. For example, genetic algorithms provide a useful selection technique, but may be replaced by other suitable selection techniques including steepest descent algorithms, random selection, and exhaustive searches. Moreover, the selected neural network ensembles may be augmented with models and/or approximations derived from first principles. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims (20)

1. A modeling system that comprises:
a memory; and
a processor coupled to the memory and configured to execute software stored in said memory, wherein said software configures the processor to:
create a pool of neural networks trained on a portion of a data set;
for each of various coefficient settings for a multi-objective function:
apply selective evolution subject to the multi-objective function with that coefficient setting to obtain a corresponding group of neural network ensembles; and
select a local ensemble from each said group of neural network ensembles, wherein the selection is based on data not included in said portion of the data set;
combine a plurality of the local ensembles to form a global ensemble of local ensembles; and
provide a perceptible output based at least in part on a prediction by the global ensemble.
US12/048,0452007-03-142008-03-13Neural-network based surrogate model construction methods and applications thereofActive2030-09-22US8065244B2 (en)

Priority Applications (1)

Application NumberPriority DateFiling DateTitle
US12/048,045US8065244B2 (en)2007-03-142008-03-13Neural-network based surrogate model construction methods and applications thereof

Applications Claiming Priority (2)

Application NumberPriority DateFiling DateTitle
US89483407P2007-03-142007-03-14
US12/048,045US8065244B2 (en)2007-03-142008-03-13Neural-network based surrogate model construction methods and applications thereof

Publications (2)

Publication NumberPublication Date
US20080228680A1 US20080228680A1 (en)2008-09-18
US8065244B2true US8065244B2 (en)2011-11-22

Family

ID=39760058

Family Applications (1)

Application NumberTitlePriority DateFiling Date
US12/048,045Active2030-09-22US8065244B2 (en)2007-03-142008-03-13Neural-network based surrogate model construction methods and applications thereof

Country Status (4)

CountryLink
US (1)US8065244B2 (en)
GB (1)GB2462380B (en)
NO (1)NO20093113L (en)
WO (1)WO2008112921A1 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20120041734A1 (en)*2009-05-042012-02-16Thierry ChevalierSystem and method for collaborative building of a surrogate model for engineering simulations in a networked environment
US20130198007A1 (en)*2008-05-062013-08-01Richrelevance, Inc.System and process for improving product recommendations for use in providing personalized advertisements to retail customers
US9721204B2 (en)2013-10-282017-08-01Qualcomm IncorporatedEvaluation of a system including separable sub-systems over a multidimensional range
US20190025454A1 (en)*2016-09-262019-01-24Halliburton Energy Services, Inc.Neutron porosity log casing thickness corrections
US10607715B2 (en)2017-06-132020-03-31International Business Machines CorporationSelf-evaluating array of memory
US20200210864A1 (en)*2018-01-152020-07-02Dalian Minzu UniversityMethod for detecting community structure of complicated network
US10769550B2 (en)2016-11-172020-09-08Industrial Technology Research InstituteEnsemble learning prediction apparatus and method, and non-transitory computer-readable storage medium
US10891311B2 (en)2016-10-142021-01-12Red Hat, Inc.Method for generating synthetic data sets at scale with non-redundant partitioning
US10956823B2 (en)2016-04-082021-03-23Cognizant Technology Solutions U.S. CorporationDistributed rule-based probabilistic time-series classifier
US10990873B2 (en)2016-06-222021-04-27Saudi Arabian Oil CompanySystems and methods for rapid prediction of hydrogen-induced cracking (HIC) in pipelines, pressure vessels, and piping systems and for taking action in relation thereto
US11468306B2 (en)2019-11-182022-10-11Samsung Electronics Co., Ltd.Storage device with artificial intelligence and storage system including the same
US11531790B2 (en)*2020-01-032022-12-20Halliburton Energy Services, Inc.Tool string design using machine learning
US11531874B2 (en)2015-11-062022-12-20Google LlcRegularizing machine learning models
US11636336B2 (en)2019-12-042023-04-25Industrial Technology Research InstituteTraining device and training method for neural network model
US11775841B2 (en)2020-06-152023-10-03Cognizant Technology Solutions U.S. CorporationProcess and system including explainable prescriptions through surrogate-assisted evolution
US11783195B2 (en)2019-03-272023-10-10Cognizant Technology Solutions U.S. CorporationProcess and system including an optimization engine with evolutionary surrogate-assisted prescriptions
US11783099B2 (en)2018-08-012023-10-10General Electric CompanyAutonomous surrogate model creation platform
US12051237B2 (en)2021-03-122024-07-30Samsung Electronics Co., Ltd.Multi-expert adversarial regularization for robust and data-efficient deep supervised learning
US12099934B2 (en)2020-04-072024-09-24Cognizant Technology Solutions U.S. CorporationFramework for interactive exploration, evaluation, and improvement of AI-generated solutions
US12406188B1 (en)2020-03-092025-09-02Cognizant Technology Solutions U.S. CorportionSystem and method for evolved data augmentation and selection
US12424335B2 (en)2020-07-082025-09-23Cognizant Technology Solutions U.S. CorporationAI based optimized decision making for epidemiological modeling

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US8374974B2 (en)*2003-01-062013-02-12Halliburton Energy Services, Inc.Neural network training data selection using memory reduced cluster analysis for field model development
US9514388B2 (en)*2008-08-122016-12-06Halliburton Energy Services, Inc.Systems and methods employing cooperative optimization-based dimensionality reduction
BRPI0820365A2 (en)*2008-08-262015-05-12Halliburton Energy Serv Inc Method, system, and computer readable storage media.
KR101012863B1 (en)*2008-09-252011-02-08한국전력공사 Load prediction comparison analysis system for calculating customer reference load
US8898045B2 (en)2009-04-212014-11-25Halliburton Energy Services, Inc.System and method of predicting gas saturation of a formation using neural networks
US9043189B2 (en)*2009-07-292015-05-26ExxonMobil Upstream Research—Law DepartmentSpace-time surrogate models of subterranean regions
US9528334B2 (en)2009-07-302016-12-27Halliburton Energy Services, Inc.Well drilling methods with automated response to event detection
US9567843B2 (en)2009-07-302017-02-14Halliburton Energy Services, Inc.Well drilling methods with event detection
JP2011107648A (en)*2009-11-202011-06-02Fujifilm CorpLens unit
US10087721B2 (en)*2010-07-292018-10-02Exxonmobil Upstream Research CompanyMethods and systems for machine—learning based simulation of flow
US8538963B2 (en)*2010-11-162013-09-17International Business Machines CorporationOptimal persistence of a business process
US8990149B2 (en)2011-03-152015-03-24International Business Machines CorporationGenerating a predictive model from multiple data sources
US8738554B2 (en)2011-09-162014-05-27International Business Machines CorporationEvent-driven universal neural network circuit
US8874498B2 (en)2011-09-162014-10-28International Business Machines CorporationUnsupervised, supervised, and reinforced learning via spiking computation
US8626684B2 (en)2011-12-142014-01-07International Business Machines CorporationMulti-modal neural network for universal, online learning
US8799199B2 (en)2011-12-142014-08-05International Business Machines CorporationUniversal, online learning in multi-modal perception-action semilattices
US8843423B2 (en)2012-02-232014-09-23International Business Machines CorporationMissing value imputation for predictive models
CN103018660B (en)*2012-12-252015-04-22重庆邮电大学Multi-fault intelligent diagnosing method for artificial circuit utilizing quantum Hopfield neural network
US9256701B2 (en)2013-01-072016-02-09Halliburton Energy Services, Inc.Modeling wellbore fluids
US20140365413A1 (en)*2013-06-062014-12-11Qualcomm IncorporatedEfficient implementation of neural population diversity in neural system
EP3393859B1 (en)*2015-12-212021-11-17Bayerische Motoren Werke AktiengesellschaftMethod for modifying safety- and/or security-relevant control devices in a motor vehicle, and a corresponding apparatus
GB2559074A (en)*2016-03-022018-07-25Halliburton Energy Services IncA space mapping optimization to characterize multiple concentric pipes
WO2018063840A1 (en)*2016-09-282018-04-05D5A1 Llc;Learning coach for machine learning system
US10824940B1 (en)2016-11-302020-11-03Amazon Technologies, Inc.Temporal ensemble of machine learning models trained during different time intervals
US11062226B2 (en)2017-06-152021-07-13Microsoft Technology Licensing, LlcDetermining a likelihood of a user interaction with a content element
US10922627B2 (en)*2017-06-152021-02-16Microsoft Technology Licensing, LlcDetermining a course of action based on aggregated data
US10805317B2 (en)2017-06-152020-10-13Microsoft Technology Licensing, LlcImplementing network security measures in response to a detected cyber attack
US11126191B2 (en)2017-08-072021-09-21Panasonic Intellectual Property Corporation Of AmericaControl device and control method
GB2573809B (en)*2018-05-182020-11-04Emotech LtdSpeaker Recognition
US11347910B1 (en)*2018-07-252022-05-31Hexagon Manufacturing Intelligence, Inc.Computerized prediction for determining composite material strength
CN109085756B (en)*2018-08-272020-11-10西安交通大学Underwater robot thrust distribution method and system based on genetic algorithm optimization
WO2020055659A1 (en)*2018-09-142020-03-19Siemens AktiengesellschaftGeneration and utilization of self-improving data-driven models with selective simulation of 3d object design
US11941513B2 (en)*2018-12-062024-03-26Electronics And Telecommunications Research InstituteDevice for ensembling data received from prediction devices and operating method thereof
CN109783918B (en)*2019-01-042023-01-20上海交通大学Speed reducer optimization design implementation method based on switching of sequential sampling mode
CN111538235B (en)*2019-02-072024-10-01松下知识产权经营株式会社 Learning device and cutting process evaluation system
JP7172706B2 (en)*2019-02-192022-11-16富士通株式会社 Arithmetic processing device, Arithmetic processing program and Arithmetic processing method
CN110175671B (en)*2019-04-282022-12-27华为技术有限公司Neural network construction method, image processing method and device
EP3757904A1 (en)*2019-06-282020-12-30Robert Bosch GmbHDevice and method for training a neural network
CN112445823B (en)*2019-09-042024-11-26华为技术有限公司 Neural network structure search method, image processing method and device
CN111026700B (en)*2019-11-212022-02-01清华大学Memory computing architecture for realizing acceleration and acceleration method thereof
DE102019218841A1 (en)*2019-12-042021-06-10Robert Bosch Gmbh Method of making a device with an optimized design feature vector
CN111783236B (en)*2020-05-142023-03-14西北工业大学Turbine casing sensitivity analysis method based on self-adaptive model and subset simulation
US11704580B2 (en)*2020-05-312023-07-18International Business Machines CorporationAutomated combination of predictions made by different prediction systems
CN112487479B (en)*2020-12-102023-10-13支付宝(杭州)信息技术有限公司Method for training privacy protection model, privacy protection method and device
CN113505545B (en)*2021-06-212022-04-15西南交通大学 Aerodynamic multi-objective optimization method for rail transit vehicles based on improved point addition criteria
US12001529B1 (en)*2021-11-052024-06-04Validate Me LLCCounting machine for manufacturing and validating event-relevant identities via an ensemble network
CN114547917A (en)*2022-04-252022-05-27国家超级计算天津中心Simulation prediction method, device, equipment and storage medium

Citations (94)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US3802259A (en)1970-11-271974-04-09Marathon Oil CoWell logging method
US3946226A (en)1973-01-101976-03-23Texaco Inc.Well logging method and apparatus
US3975157A (en)1975-07-211976-08-17Phillips Petroleum CompanyGeochemical exploration using isoprenoids
US4055763A (en)1975-03-311977-10-25Schlumberger Technology CorporationNeutron characteristic and spectroscopy logging methods and apparatus
US4122340A (en)1977-04-201978-10-24Texaco Inc.Pulsed neutron porosity logging system
US4122339A (en)1977-04-201978-10-24Texaco Inc.Earth formation pulsed neutron porosity logging system utilizing epithermal neutron and inelastic scattering gamma ray detectors
US4239965A (en)1979-03-051980-12-16Dresser Industries, Inc.Method and apparatus for neutron induced gamma ray logging for direct porosity identification
US4293933A (en)1975-03-171981-10-06Schlumberger Technology CorporationWell logging apparatus and method: synthetic logs and synthetic seismograms with extrapolated reflector dip from log measurements
US4297575A (en)1979-08-131981-10-27Halliburton CompanySimultaneous gamma ray measurement of formation bulk density and casing thickness
US4430567A (en)1981-01-221984-02-07Dresser Industries, Inc.Method and apparatus for neutron induced gamma ray logging for direct porosity identification
US4605854A (en)1984-07-161986-08-12Halliburton CompanyMeasurement of formation porosity using fast neutron spectroscopy
US4617825A (en)1985-09-121986-10-21Halliburton CompanyWell logging analysis methods for use in complex lithology reservoirs
US4645926A (en)1985-04-101987-02-24Dresser Industries, Inc.Method for induced gamma ray logging
US4646240A (en)1982-02-021987-02-24Oberto SerraMethod and apparatus for determining geological facies
US4656354A (en)1985-04-101987-04-07Dresser Industries, Inc.Method for induced gamma ray logging

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US616355A (en) * | 1898-12-20 | Locomotive-coaling device
CA1095387A (en) * | 1976-02-17 | 1981-02-10 | Conrad M. Banas | Skin melting
US5608214A (en) * | 1995-10-30 | 1997-03-04 | Protechnics International, Inc. | Gamma ray spectral tool for well logging

Patent Citations (103)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US3802259A (en) | 1970-11-27 | 1974-04-09 | Marathon Oil Co | Well logging method
US3946226A (en) | 1973-01-10 | 1976-03-23 | Texaco Inc. | Well logging method and apparatus
US4293933A (en) | 1975-03-17 | 1981-10-06 | Schlumberger Technology Corporation | Well logging apparatus and method: synthetic logs and synthetic seismograms with extrapolated reflector dip from log measurements
US4055763A (en) | 1975-03-31 | 1977-10-25 | Schlumberger Technology Corporation | Neutron characteristic and spectroscopy logging methods and apparatus
US3975157A (en) | 1975-07-21 | 1976-08-17 | Phillips Petroleum Company | Geochemical exploration using isoprenoids
US4122340A (en) | 1977-04-20 | 1978-10-24 | Texaco Inc. | Pulsed neutron porosity logging system
US4122339A (en) | 1977-04-20 | 1978-10-24 | Texaco Inc. | Earth formation pulsed neutron porosity logging system utilizing epithermal neutron and inelastic scattering gamma ray detectors
US4239965A (en) | 1979-03-05 | 1980-12-16 | Dresser Industries, Inc. | Method and apparatus for neutron induced gamma ray logging for direct porosity identification
US4297575A (en) | 1979-08-13 | 1981-10-27 | Halliburton Company | Simultaneous gamma ray measurement of formation bulk density and casing thickness
US4430567A (en) | 1981-01-22 | 1984-02-07 | Dresser Industries, Inc. | Method and apparatus for neutron induced gamma ray logging for direct porosity identification
US4646240A (en) | 1982-02-02 | 1987-02-24 | Oberto Serra | Method and apparatus for determining geological facies
US4605854A (en) | 1984-07-16 | 1986-08-12 | Halliburton Company | Measurement of formation porosity using fast neutron spectroscopy
US4645926A (en) | 1985-04-10 | 1987-02-24 | Dresser Industries, Inc. | Method for induced gamma ray logging
US4656354A (en) | 1985-04-10 | 1987-04-07 | Dresser Industries, Inc. | Method for induced gamma ray logging
US4617825A (en) | 1985-09-12 | 1986-10-21 | Halliburton Company | Well logging analysis methods for use in complex lithology reservoirs
US4926488A (en) | 1987-07-09 | 1990-05-15 | International Business Machines Corporation | Normalization of speech by adaptive labelling
US4912655A (en) | 1988-12-14 | 1990-03-27 | Gte Laboratories Incorporated | Adjusting neural networks
US5067164A (en) | 1989-11-30 | 1991-11-19 | At&T Bell Laboratories | Hierarchical constrained automatic learning neural network for character recognition
US5875284A (en) | 1990-03-12 | 1999-02-23 | Fujitsu Limited | Neuro-fuzzy-integrated data processing system
US5112126A (en) | 1990-07-27 | 1992-05-12 | Chevron Research & Technology Company | Apparatuses and methods for making geophysical measurements useful in determining the deflection of the vertical
JPH0489998A (en) | 1990-08-02 | 1992-03-24 | Nippon Telegr & Teleph Corp <Ntt> | Instruction device of propelling direction of small bore pipe
US5189415A (en) | 1990-11-09 | 1993-02-23 | Japan National Oil Corporation | Receiving apparatus
US5245696A (en) * | 1990-11-21 | 1993-09-14 | Ricoh Co. Ltd. | Evolution and learning in neural networks: the number and distribution of learning trials affect the rate of evolution
US5461698A (en) | 1991-05-10 | 1995-10-24 | Siemens Corporate Research, Inc. | Method for modelling similarity function using neural network
EP0552073A2 (en) | 1992-01-09 | 1993-07-21 | Schlumberger Limited | Formation sigma measurement from thermal neutron detection
US5251286A (en) | 1992-03-16 | 1993-10-05 | Texaco, Inc. | Method for estimating formation permeability from wireline logs using neural networks
US5517854A (en) | 1992-06-09 | 1996-05-21 | Schlumberger Technology Corporation | Methods and apparatus for borehole measurement of formation stress
US5469404A (en) | 1992-11-12 | 1995-11-21 | Barber; Harold P. | Method and apparatus for seismic exploration
US5475509A (en) | 1993-02-02 | 1995-12-12 | Fuji Photo Film Co., Ltd. | Method and apparatus for setting image processing conditions
US5465321A (en) | 1993-04-07 | 1995-11-07 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Hidden markov models for fault detection in dynamic systems
US6424956B1 (en) | 1993-07-13 | 2002-07-23 | Paul J. Werbos | Stochastic encoder/decoder/predictor
US5444619A (en) * | 1993-09-27 | 1995-08-22 | Schlumberger Technology Corporation | System and method of predicting reservoir properties
US5374823A (en) | 1993-10-28 | 1994-12-20 | Computalog U.S.A., Inc. | Pulsed neutron decay tool for measuring gamma radiation energy spectra for fast neutron inelastic collisions and thermal neutron capture events
US20020177954A1 (en) | 1994-03-17 | 2002-11-28 | Vail William Banning | Processing formation resistivity measurements obtained from within a cased well used to quantitatively determine the amount of oil and gas present
US5608215A (en) | 1994-09-16 | 1997-03-04 | Schlumberger Technology Corporation | Method and apparatus for determining density of earth formations
US5525797A (en) | 1994-10-21 | 1996-06-11 | Gas Research Institute | Formation density tool for use in cased and open holes
US6272434B1 (en) | 1994-12-12 | 2001-08-07 | Baker Hughes Incorporated | Drilling system with downhole apparatus for determining parameters of interest and for adjusting drilling direction in response thereto
US5659667A (en) * | 1995-01-17 | 1997-08-19 | The Regents Of The University Of California Office Of Technology Transfer | Adaptive model predictive process control using neural networks
US5828981A (en) | 1995-05-11 | 1998-10-27 | Texaco Inc. | Generating pore types and synthetic capillary pressure curves from wireline logs using neural networks
US5940777A (en) | 1995-09-19 | 1999-08-17 | Elf Aquitaine Production | Automatic seismic pattern recognition method
US6317730B1 (en) | 1996-05-23 | 2001-11-13 | Siemens Aktiengesellschaft | Method for optimizing a set of fuzzy rules using a computer
US5862513A (en) | 1996-11-01 | 1999-01-19 | Western Atlas International, Inc. | Systems and methods for forward modeling of well logging tool responses
US6456990B1 (en) | 1997-02-03 | 2002-09-24 | Siemens Aktiengesellschaft | Method for transforming a fuzzy logic used to simulate a technical process into a neural network
US6381591B1 (en) | 1997-02-03 | 2002-04-30 | Siemens Aktiengesellschaft | Method for transformation of fuzzy logic, which is used to simulate a technical process, into a neural network
US5870690A (en) | 1997-02-05 | 1999-02-09 | Western Atlas International, Inc. | Joint inversion processing method for resistivity and acoustic well log data
US5900627A (en) | 1997-06-19 | 1999-05-04 | Computalog Research, Inc. | Formation density measurement utilizing pulse neutrons
US20020187469A1 (en) | 1997-07-03 | 2002-12-12 | Richard Kolodner | Method of detection of alterations in msh5
US5848379A (en) | 1997-07-11 | 1998-12-08 | Exxon Production Research Company | Method for characterizing subsurface petrophysical properties using linear shape attributes
US6092017A (en) | 1997-09-03 | 2000-07-18 | Matsushita Electric Industrial Co., Ltd. | Parameter estimation apparatus
US6466893B1 (en) | 1997-09-29 | 2002-10-15 | Fisher Controls International, Inc. | Statistical determination of estimates of process control loop parameters
US6044327A (en) | 1997-11-13 | 2000-03-28 | Dresser Industries, Inc. | Method for quantifying the lithologic composition of formations surrounding earth boreholes
US6140816A (en) | 1997-12-12 | 2000-10-31 | Schlumberger Technology Corporation | Method of determining the permeability of sedimentary strata
US6192352B1 (en) | 1998-02-20 | 2001-02-20 | Tennessee Valley Authority | Artificial neural network and fuzzy logic based boiler tube leak detection systems
US6150655A (en) | 1998-03-06 | 2000-11-21 | Computalog Research, Inc. | Inferential measurement of photoelectric absorption cross-section of geologic formations from neutron-induced, gamma-ray spectroscopy
US6207953B1 (en) | 1998-04-24 | 2001-03-27 | Robert D. Wilson | Apparatus and methods for determining gas saturation and porosity of a formation penetrated by a gas filled or liquid filled borehole
WO1999064896A1 (en) | 1998-06-09 | 1999-12-16 | Geco As | Seismic data interpretation method
US6411903B2 (en) | 1998-09-15 | 2002-06-25 | Ronald R. Bush | System and method for delineating spatially dependent objects, such as hydrocarbon accumulations from seismic data
US6704436B1 (en) | 1998-12-30 | 2004-03-09 | Schlumberger Technology Corporation | Method of obtaining a developed two-dimensional image of the wall of a borehole
US7019528B2 (en) | 1999-01-28 | 2006-03-28 | Halliburton Energy Services, Inc. | Electromagnetic wave resistivity tool having a tilted antenna for geosteering within a desired payzone
US7265552B2 (en) | 1999-01-28 | 2007-09-04 | Halliburton Energy Services, Inc. | Electromagnetic wave resistivity tool having a tilted antenna for geosteering within a desired payzone
US6911824B2 (en) | 1999-01-28 | 2005-06-28 | Halliburton Energy Services, Inc. | Electromagnetic wave resistivity tool having a tilted antenna for geosteering within a desired payzone
US20070235225A1 (en) | 1999-01-28 | 2007-10-11 | Halliburton Energy Services, Inc. | Electromagnetic wave resistivity tool having a tilted antenna for geosteering within a desired payzone
US7138803B2 (en) | 1999-01-28 | 2006-11-21 | Halliburton Energy Services, Inc. | Electromagnetic wave resistivity tool having a tilted antenna for geosteering within a desired payzone
US6163155A (en) | 1999-01-28 | 2000-12-19 | Dresser Industries, Inc. | Electromagnetic wave resistivity tool having a tilted antenna for determining the horizontal and vertical resistivities and relative dip angle in anisotropic earth formations
US20020147695A1 (en) | 1999-06-21 | 2002-10-10 | Pratap Shankar Khedkar | Method and system for automated property valuation
US6295504B1 (en) | 1999-10-25 | 2001-09-25 | Halliburton Energy Services, Inc. | Multi-resolution graph-based clustering
US20050114280A1 (en) | 2000-01-24 | 2005-05-26 | Rising Hawley K. III | Method and apparatus of using neural network to train a neural network
US6374185B1 (en) | 2000-02-18 | 2002-04-16 | Rdsp I, L.P. | Method for generating an estimate of lithological characteristics of a region of the earth's subsurface
US6760716B1 (en) | 2000-06-08 | 2004-07-06 | Fisher-Rosemount Systems, Inc. | Adaptive predictive model in a process control system
US7170418B2 (en) | 2000-06-16 | 2007-01-30 | The United States Of America As Represented By The Secretary Of The Navy | Probabilistic neural network for multi-criteria event detector
US20020183932A1 (en) | 2000-09-29 | 2002-12-05 | West Brian P. | Method for mapping seismic attributes using neural networks
US7363280B2 (en) * | 2000-11-14 | 2008-04-22 | Honda Research Institute Europe Gmbh | Methods for multi-objective optimization using evolutionary algorithms
US6477469B2 (en) | 2001-01-08 | 2002-11-05 | Halliburton Energy Services, Inc. | Coarse-to-fine self-organizing map for automatic electrofacies ordering
US20020152030A1 (en) | 2001-02-16 | 2002-10-17 | Schultz Roger L. | Downhole sensing and flow control utilizing neural networks
US6789620B2 (en) | 2001-02-16 | 2004-09-14 | Halliburton Energy Services, Inc. | Downhole sensing and flow control utilizing neural networks
US7243056B2 (en) * | 2001-02-26 | 2007-07-10 | Honda Research Institute Europe Gmbh | Strategy parameter adaptation in evolution strategies
US6615211B2 (en) | 2001-03-19 | 2003-09-02 | International Business Machines Corporation | System and methods for using continuous optimization for ordering categorical data sets
US20020188424A1 (en) | 2001-04-20 | 2002-12-12 | Grinstein Georges G. | Method and system for data analysis
US20020170022A1 (en) | 2001-04-25 | 2002-11-14 | Fujitsu Limited | Data analysis apparatus, data analysis method, and computer products
US20020165911A1 (en) | 2001-05-04 | 2002-11-07 | Eran Gabber | File system for caching web proxies
US20020178150A1 (en) | 2001-05-12 | 2002-11-28 | X-Mine | Analysis mechanism for genetic data
US7308134B2 (en) | 2001-05-28 | 2007-12-11 | Honda Research Institute Europe Gmbh | Pattern recognition with hierarchical networks
US20030115164A1 (en) | 2001-07-31 | 2003-06-19 | Bingchiang Jeng | Neural network representation for system dynamics models, and its applications
US7565833B2 (en) | 2001-08-13 | 2009-07-28 | Baker Hughes Incorporated | Automatic adjustment of NMR pulse sequence to optimize SNR based on real time analysis
US20040019427A1 (en) | 2002-07-29 | 2004-01-29 | Halliburton Energy Services, Inc. | Method for determining parameters of earth formations surrounding a well bore using neural network inversion
US20040222019A1 (en) | 2002-07-30 | 2004-11-11 | Baker Hughes Incorporated | Measurement-while-drilling assembly using real-time toolface oriented measurements
US20040117121A1 (en) | 2002-09-27 | 2004-06-17 | Veritas Dgc Inc. | Reservoir fracture characterization
US20040133531A1 (en) | 2003-01-06 | 2004-07-08 | Dingding Chen | Neural network training data selection using memory reduced cluster analysis for field model development
US20060195204A1 (en) | 2003-04-04 | 2006-08-31 | Icosystem Corporation | Methods and Systems for Interactive Evolutionary Computing (IEC)
US7043463B2 (en) | 2003-04-04 | 2006-05-09 | Icosystem Corporation | Methods and systems for interactive evolutionary computing (IEC)
US20040257240A1 (en) | 2003-06-19 | 2004-12-23 | Dingding Chen | Processing well logging data with neural network
US20070167846A1 (en) | 2003-07-01 | 2007-07-19 | Cardiomag Imaging, Inc. | Use of machine learning for classification of magneto cardiograms
US7363281B2 (en) | 2004-01-26 | 2008-04-22 | Honda Research Institute Europe Gmbh | Reduction of fitness evaluations using clustering techniques and neural network ensembles
US20050246297A1 (en) * | 2004-03-26 | 2005-11-03 | Dingding Chen | Genetic algorithm based selection of neural network ensemble for processing well logging data
US7280987B2 (en) * | 2004-03-26 | 2007-10-09 | Halliburton Energy Services, Inc. | Genetic algorithm based selection of neural network ensemble for processing well logging data
US20070183670A1 (en) | 2004-08-14 | 2007-08-09 | Yuri Owechko | Graph-based cognitive swarms for object group recognition
US20070019865A1 (en) | 2005-03-04 | 2007-01-25 | Yuri Owechko | Object recognition using a cognitive swarm vision framework with attention mechanisms
US20060256655A1 (en) | 2005-05-10 | 2006-11-16 | Schlumberger Technology Corporation, Incorporated In The State Of Texas | Use of an effective tool model in sonic logging data processing
US20070011115A1 (en) * | 2005-06-24 | 2007-01-11 | Halliburton Energy Services, Inc. | Well logging with reduced usage of radioisotopic sources
US20070011114A1 (en) * | 2005-06-24 | 2007-01-11 | Halliburton Energy Services, Inc. | Ensembles of neural networks with different input sets
US7587373B2 (en) | 2005-06-24 | 2009-09-08 | Halliburton Energy Services, Inc. | Neural network based well log synthesis with reduced usage of radioisotopic sources
US7613665B2 (en) * | 2005-06-24 | 2009-11-03 | Halliburton Energy Services, Inc. | Ensembles of neural networks with different input sets
US7328107B2 (en) | 2006-04-28 | 2008-02-05 | Kjt Enterprises, Inc. | Integrated earth formation evaluation method using controlled source electromagnetic survey data and seismic data

Non-Patent Citations (108)

* Cited by examiner, † Cited by third party
Title
A. Chandra and X. Yao, "Divace: Diverse and accurate ensemble learning algorithm," in The Fifth International Conference on Intelligent Data Engineering and Automated Learning, Exeter, UK, 2004, pp. 619-625.
A. Krogh, et al., "Neural network ensembles, cross validation, and active learning," in Advances in Neural Information Processing System 7, Cambridge, MA: MIT Press, 1995, pp. 231-238.
Anonymous, "Log Interpretation Charts", Dresser Atlas, Dresser Industries, Inc., USA, Jun. 1983, 2 pages.
B. Yang, et al., "Managing approximation models in multiobjective optimization," in Structure and Multidisciplinary Optimization, vol. 24, No. 2, 2002, pp. 141-156.
Brown, "Negative Correlation Learning and the Ambiguity Family of Ensemble Methods", Springer-Verlag Berlin, Heidelberg, MCS2003, LNCS 2709, (2003), pp. 266-275.
Chakraborti, N et al., "A Study of the Cu Clusters Using Gray-Coded Genetic Algorithms and Differential Evolution", Journal of Phase Equilibria and Diffusion, vol. 25, No. 1, (Apr. 2007), pp. 16-21.
Chandra et al., "Ensemble learning using multi-objective evolutionary algorithms", Journal of mathematical modelling and algorithms, 2006, pp. 417-445.*
Chandra et al., "Ensemble learning using multi-objective evolutionary algorithms", Kluwer Academic Publishers, 2005, pp. 1-34.*
Chen et al., "Acceleration of Levenberg-Marquardt training of neural networks with variable decay rate", IEEE, 2003, pp. 1873-1878.*
Chen et al., "Neural network ensemble selection using multi-objective genetic algorithm in processing pulsed neutron data", SPWLA 45th Annual Logging Symposium, 2004, pp. 1-13.*
Chen, Dingding "Variable Input Neural Network Ensembles in Generating Synthetic Well Logs", International Joint Conference on Neural Networks, Vancouver, BC, Canada, (2006), pp. 2273-2280.
Chen, Dingding et al., "Neural Network Ensemble Selection Using Multi-Objective Genetic Algorithm in Processing Pulsed Neutron Data", Petrophysics, vol. 46, No. 5, (Jun. 6-9, 2004), 13 pgs.
Chen, Dingding et al., "Neural Network Training-Data Selection Using Memory-Reduced Cluster Analysis for Field Model Development", SPE 80906; Society of Petroleum Engineers, SPE Production and Operation Symposium, Oklahoma City, OK, Mar. 23-25, 2003, 5 pages.
Chen, Dingding et al., "Neural-Network Based Surrogate Model Construction Methods and Applications Thereof", U.S. Appl. No. 12/048,045, filed Mar. 13, 2008, 30 pgs.
Chen, Dingding et al., "Systems and Methods Employing Cooperative Optimization-Based Dimensionality Reduction", U.S. Appl. No. 12/190,418, filed Aug. 6, 2008, 32 pgs.
Coelho, Guilherme P., et al., "The Influence of the Pool of Candidates on the Performance of Selection and Combination Techniques in Ensembles", International Joint Conference on Neural Networks, Vancouver, BC, Canada, (2006), pp. 10588-10595.
D. Chen, et al., "Neural network ensemble selection using a multi-objective genetic algorithm in processing pulsed neutron data," Petrophysics, vol. 46, No. 5, Oct. 2005, pp. 323-334.
D. Chen, et al., "Variable input neural network ensembles in generating synthetic well logs," Proc. of International Joint Conference on Neural Networks, Vancouver, BC, Canada, 2006, pp. 2273-2280.
Dahl, "Structured Programming", Academic Press, A.P.I.C. Studies in Data Processing, No. 8, 1972, pp. 7 and 19.
Dahl, O. J., et al., "Structured Programming", A.P.I.C. Studies in Data Processing, No. 8, Academic Press London and New York, (1972), 6 pgs.
Eberhart, Russell et al., "A New Optimizer Using Particle Swarm Theory", Micro Machine and Human Science, (Oct. 1995), pp. 39-43.
Everson, "Full Elite-Sets for Multi-Objective Optimization", Fifth International Conference on Adaptive Computing in Design and Manufacture (ACDM 2002), http://www.dcs.ex.ac.uk/academics/reverson/pubs/adcomp-abs.html, (download Jun. 14, 2004), 8 pages.
Everson, R.M. et al., "Full Elite-Sets for Multi-objective Optimization", Fifth International Conference on Adaptive Computing in Design and Manufacture (ACDM 2002), 8 pages.
Flowjo, "Clustering-A New, Highly Efficient Algorithm for Cluster Analysis", FlowJo Reference Manual, FlowJo Version 4, www.flowjo.com/v4/html/cluster.html, Dec. 30, 2002 (download), pp. 1-3.
Flowjo, "Clustering-Clustering Algorithm Parameters", FlowJo Reference Manual, FlowJo Version 4, www.flowjo.com/v4/html/clusterparams.html, Dec. 30, 2002 (download), pp. 1-5.
Flowjo, "Clustering-Play-by-Play of Clustering Process", FlowJo Reference Manual, FlowJo Version 4, www.flowjo.com/v4/html/clusterprocess.html, Dec. 30, 2002 (download), pp. 1-2.
Fung, "Modular Artificial Neural Network for Prediction of Petrophysical Properties from Well Log Data", IEEE Instrumentation and Measurement Technology Conference, Brussels, Belgium, Jun. 1996, pp. 1010-1014.
Fung, "Modular Artificial Neural Network for Prediction of Petrophysical Properties from Well Log Data", IEEE Instrumentation and Measurement Technology Conference, Brussels, Belgium, Jun. 4-6, 1996, pp. 1010-1014.
G. Brown, et al., "Diversity creation methods: A survey and categorization," Journal of Information Fusion, vol. 6, No. 1, Jan. 2005, pp. 5-20.
G. P. Coelho and F. J. Von Zuben, "The influence of the pool of candidates on the performance of selection and combination techniques in ensembles," in Proc. of the International Joint Conference on Neural Networks, Vancouver, BC, Canada, 2006, pp. 10588-10595.
Gaspar-Cunha, "RPSGAe-Reduced Pareto Set Genetic Algorithm with Elitism", Workshop on Multiple Objective Metaheuristics, Paris, France, Nov. 2002, 6 pages.
Granitto, "Modeling of Sonic Logs in Oil Wells with Neural Network Ensembles", Argentine Symposium on Artificial Intelligence (ASAI'01), http://citeseer.ist.psu.edu/granitto01/modeling.html, Sep. 12-13, 2001, 7 pages.
Granitto, P. M., et al., "Modeling of Sonic Logs in Oil Wells with Neural Networks Ensembles", Argentine Symposium on Artificial Intelligence (ASAI '01), Buenos Aires, Sep. 12-13, 2001, 7 pages.
H. Abbass, "Pareto neuro-evolution: Constructing ensemble of neural networks using multi-objective optimization," in Proc. Congress on Evolutionary Computation, Canberra, Australia, 2003, pp. 2074-2080.
Hampson, "Use of Multiattribute Transforms to Predict Log Properties from Seismic Data", Society of Exploration Geophysicists, Geophysics vol. 66. No. 1, Jan.-Feb. 2001, pp. 220-236.
Hampson, Daniel P., et al., "Use of Multiattribute Transforms to Predict Log Properties from Seismic Data", Society of Exploration Geophysicists, Geophysics vol. 66. No. 1, Jan. 2001, pp. 220-236.
Hamza, Karim et al., "Vehicle Crashworthiness Design via a Surrogate Model Ensemble and a Co-Evolutionary Genetic Algorithm", Proceedings of IDETC/CIE 2005, ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Long Beach, California, (Sep. 24, 2005), 9 pgs.
Hansen, "Neural Network Ensembles", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, No. 10, 1990, pp. 993-1001.
Helle, "Fluid Saturation from Well Logs Using Committee Neural Networks", Petroleum Geoscience, 2002, vol. 8, pp. 109-118.
Helle, H., et al., "Fluid saturation from well logs using committee neural networks", Petroleum Geoscience, 2002, vol. 8, Also cited in PCT ISR Jul. 3, 2008, pp. 109-118.
International Preliminary Report on Patentability, dated Feb. 17, 2011, Appl No. PCT/US09/52860, Systems and Methods Employing Cooperative Optimization Based Dimensionality Reduction, filed Aug. 5, 2009, 8 pgs.
International Preliminary Report on Patentability, dated Mar. 12, 2009, Appl No. PCT/US2005/009494, Genetic Algorithm Based Selection of Neural Network Ensemble for Processing Well Logging Data, filed Mar. 22, 2005, 6 pgs.
International Preliminary Report on Patentability, dated Sep. 15, 2009, Appl No. PCT/US08/56894, "Neural-Network Based Surrogate Model Construction Methods and Applications Thereof", filed Mar. 13, 2008, 1 pg.
International Search Report and Written Opinion, dated Sep. 28, 2009, Appl No. PCT/US09/52860, Systems and Methods Employing Cooperative Optimization Based Dimensionality Reduction, filed Aug. 5, 2009, 13 pgs.
Islam, M., et al., "A Constructive Algorithm for Training Cooperative Neural Network Ensembles", IEEE Transactions on Neural Networks, 2003, vol. 14, No. 4, 2003, pp. 820-834.
Islam, Monirul et al., "A Constructive Algorithm for Training Cooperative Neural Network Ensembles", IEEE Transactions on Neural Networks, vol. 14, No. 4, (Jul. 2003), pp. 820-834.
J. M. Twomey and A. E. Smith, "Committee networks by resampling," in Intelligent Engineering Systems through Artificial Neural Networks, C. H. Dagli, M. Akay, C. L. P. Chen, B. R. Fernandez and J. Ghosh, Eds. ASME Press, 1995, vol. 5, pp. 153-158.
Krogh, A., et al., "Neural Network Ensembles, Cross Validation, and Active Learning", Advances in Neural Information Processing Systems 7, Cambridge, MA, MIT Press, 1995, pp. 231-238.
Lespinats, et al., "DD-HDS: A Method for Visualization and Exploration of High-Dimensional Data", IEEE, Transactions on Neural Networks, vol. 18, No. 5, (Sep. 2007), pp. 1265-1279.
Liu, "Evolutionary Ensembles with Negative Correlation Learning", IEEE Transactions on Evolutionary Computation, vol. 4, No. 4, Nov. 2000, pp. 380-387.
Liu, Y. et al., "Ensemble learning via negative correlation", Neural Networks, vol. 12, Issue 10, www.elsevier.com/locate/neunet, (Dec. 1999), pp. 1399-1404.
Liu, Y., "Evolutionary Ensembles with Negative Correlation Learning", IEEE Transactions on Evolutionary Computation, vol. 4, No. 4, Nov. 2000, pp. 380-387.
Liu, Y., et al., "Ensemble Learning via Negative Correlation", Neural Networks, vol. 12, Issue 10, Dec. 1999, pp. 1399-1404.
Mullen, "The Applications of Neural Networks to Improve the Usability of Pulsed Neutron Logs for Evaluating Infill Well Locations in the Piceance Basin of Western Colorado and the San Juan Basin of Northwest New Mexico", SPE Rocky Mountain Petroleum Technology Conference, Keystone, Colorado, 2001, pp. 1-14.
Odom, "A New 1.625 Diameter Pulsed Neutron Capture and Inelastic/Capture Spectral Combination System Provides Answers in Complex Reservoirs", SPWLA 35th Annual Logging Symposium, Jun. 1994, 19 pages.
Odom, "A Pulsed Neutron Analysis Model for Carbon Dioxide Floods: Application to the Reinecke Field, West Texas", SPE 59717, SPE Permian Basin Oil & Gas Recovery Conference, Midland, Texas, Mar. 21-23, 2000, 4 pages.
Odom, "Applications and Derivation of a New Cased-Hole Density Porosity in Shaly Sands", SPE 38699, SPE Annual Technical Conference and Exhibition, San Antonio, Texas, Oct. 5-8, 1997, pp. 475-487.
Odom, "Assessing the Capabilities of a Cased-Hole Reservoir Analysis System in the Gulf of Thailand", SPE 64404, SPE Asia Pacific Oil and Gas Conference and Exhibition, Brisbane, Australia, Oct. 16-18, 2000, 10 pages.
Odom, "Examples of Cased Reservoir Analysis in the Ventura Basin, California", SPE 62850, SPE/AAPG Western Regional Meeting, Long Beach, California, Jun. 19-23, 2000, 7 pages.
Odom, "Improvements in a Through-Casing Pulsed-Neutron Density Log", SPE 71742, SPE Annual Technical Conference and Exhibition, New Orleans, Louisiana, Sep. 30-Oct. 3, 2001, 9 pages.
Odom, "Log Examples with a Prototype Three-Detector Pulsed-Neutron System for Measurement of Cased-Hole Neutron and Density Porosities", SPE 71042, SPE Rocky Mountain Petroleum Technology Conference, Keystone, Colorado, May 21-23, 2001, 10 pages.
Odom, "Shaly Sand Analysis Using Density-Neutron Porosities from a Cased-Hole Pulsed Neutron System", SPE 55641, SPE Rocky Mountain Regional Meeting, Gillette, Wyoming, May 15-18, 1999, 10 pages.
Odom, Program and Pertinent Slides by Richard C. Odom, SIAM Symposium on Inverse Problems: Geophysical Applications, Dec. 17, 1995, 5 pages.
Ong, "Evolutionary Optimization of Computationally Expensive Problems Via Surrogate Modeling", AIAA Journal, vol. 41, No. 4, Apr. 2003, pp. 1-10.
Opitz, "A Genetic Algorithm Approach for Creating Neural-Network Ensembles", Combining Artificial Neural Nets, http://citeseer.ist.psu.edu/opitz99genetic.html, 1999, Springer-Verlag, London, pp. 79-99.
Opitz, D.W. et al., "A Genetic Algorithm Approach for Creating Neural-Network Ensembles", Combining Artificial Neural Nets, pp. 79-99, Springer-Verlag, London, 1999, http://citeseer.ist.psu.edu/opitz99genetic.html, (1999), 26 pages.
P. Castillo, et al., "Multiobjective optimization of ensembles of multilayer perceptrons for pattern classification," in Parallel Problem Solving from Nature IX, Reykjavik, Iceland, 2006, pp. 453-462.
P. Sollich and A. Krogh, "Learning with ensembles: how over-fitting can be useful," in Advances in Neural Information Processing Systems 8, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds. Cambridge, MA: MIT Press, 1996, pp. 190-196.
PCT International Preliminary Report on Patentability, dated May 22, 2009, Appl No. PCT/US/2006/02118, "Ensembles of Neural Networks with Different Input Sets", filed Jan. 6, 2006, 11 pgs.
PCT International Search Report and Written Opinion, dated Dec. 3, 2004, Appl No. PCT/US03/41239, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Dec. 23, 2003, 6 pgs.
PCT International Search Report and Written Opinion, dated Jul. 3, 2008, Appl No. PCT/US06/21158, "Ensembles of Neural Networks with Different Input Sets", filed Jun. 1, 2006, 11 pgs.
PCT International Search Report and Written Opinion, dated Mar. 20, 2008, Appl No. PCT/US05/09494, "Genetic Algorithm Based Selection of Neural Network Ensemble for Processing Well Logging Data", filed Mar. 22, 2006, 11 pgs.
PCT International Search Report and Written Opinion, dated Mar. 21, 2007, Appl No. PCT/US06/25029, "Well Logging with Reduced Usage of Radioisotopic Sources", filed Jun. 26, 2006, 8 pgs.
PCT Written Opinion, dated Aug. 6, 2009, Appl No. PCT/US08/56894, "Neural-Network Based Surrogate Model Construction Methods and Applications Thereof", filed Mar. 13, 2008, 5 pgs.
Quirein, "An Assessment of Neural Networks Applied to Pulsed Neutron Data for Predicting Open Hole Triple Combo Data", 2003, 14 pages.
R. Maclin and J. W. Shavlik, "Combining the predictions of multiple classifiers: using competitive learning to initialize neural networks," in Proc. of the 14th International Joint Conference on Artificial Intelligence, Montreal, Canada, 1995, pp. 524-530.
Randall, "PDK-100 Enhances Interpretation Capabilities for Pulsed Neutron Capture Logs", 27th Annual SPWLA Logging Symposium, Jun. 9-13, 1986, 6 pages.
Randall, "PDK-100 Log Examples in the Gulf Coast", 26th Annual SPWLA Logging Symposium, Jun. 17-20, 1985, 6 pages.
Renner, et al., "Combining Constructive Neural Networks for Ensemble Classification", AIM Proc. Fifth Joint Conference on Information Sciences, Feb. 2000, pp. 1-6.
S. Dutta, et al., "A hybrid ensemble model of Kriging and neural networks for ore grade estimation," International Journal of Surface Mining, Reclamation and Environment, vol. 20, No. 1, 2006, pp. 33-45.
S. Obayashi, et al., "Multiobjective evolutionary computation for supersonic wing-shape optimization," IEEE Transactions on Evolutionary Computation, vol. 4, No. 2, 2000, pp. 182-187.
Schneider, "Using Pulsed Neutron Decay-Spectrum Data and Inflatable Packer Plugdown Assemblies Improve Oil Production Rates in a Mature CO2 Flood", SPE 35165, SPE Permian Basin Oil & Gas Recovery Conference, Midland, Texas, Mar. 27-29, 1996, pp. 167-176.
Singapore Written Opinion, dated Jan. 11, 2010, Appl No. 200905991-6, "Neural-Network Based Surrogate Model Construction Methods and Applications Thereof", filed Mar. 13, 2008, 7 pgs.
Streeter, "Cased Hole Exploration: Modern Pulsed Neutron Techniques for Locating By-Passed Hydrocarbons in Old Wells", SPE 35162, SPE Permian Basin Oil & Gas Recovery Conference, Midland, Texas, Mar. 27-29, 1996, pp. 167-176.
Tittman, J. et al., "The Physical Foundations of Formation Density Logging (Gamma Gamma)", Geophysics, vol. XXX, No. 2, (Apr. 1965), pp. 284-293.
Torres-Sospedra, "A Research on Combination Methods for Ensembles of Multilayer Feedforward", Proceedings of International Joint Conference on Neural Networks, Montreal, Canada, Jul. 31-Aug. 4, 2005, pp. 1-6.
U.S. Non-Final Office Action, dated May 20, 2011, U.S. Appl. No. 12/190,418, "Systems and Methods Employing Cooperative Optimization-Based Dimensionality Reduction," filed Aug. 12, 2008, 37 pgs.
US Advisory Action, dated Jun. 18, 2008, U.S. Appl. No. 10/393,641, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Jun. 8, 2006, 4 pgs.
US Advisory Action, dated Oct. 17, 2006, U.S. Appl. No. 10/393,641, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Jun. 8, 2006, 3 pgs.
US Final Office Action, dated Dec. 5, 2007, U.S. Appl. No. 10/393,641, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Jun. 8, 2006, 33 pgs.
US Final Office Action, dated Jun. 8, 2006, U.S. Appl. No. 10/393,641, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Mar. 21, 2003, 73 pgs.
US Final Office Action, dated May 7, 2007, U.S. Appl. No. 10/811,403, "Genetic Algorithm Based Selection of Neural Network Ensemble for Processing Well Logging Data", filed Mar. 26, 2004, 7 pgs.
US Final Office Action, dated Nov. 24, 2008, U.S. Appl. No. 11/165,892, "Ensembles of Neural Networks with Different Input Sets", filed Jun. 24, 2005, 19 pgs.
US Non-Final Office Action, dated Apr. 9, 2008, U.S. Appl. No. 11/165,892, "Ensembles of Neural Networks with Different Input Sets", filed Jun. 24, 2005, 54 pgs.
US Non-Final Office Action, dated Aug. 28, 2007, U.S. Appl. No. 11/165,892, "Ensembles of Neural Networks with Different Input Sets", filed Jun. 24, 2005, 30 pgs.
US Non-Final Office Action, dated Jan. 20, 2006, U.S. Appl. No. 10/393,641, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Jun. 8, 2006, 42 pgs.
US Non-Final Office Action, dated Jul. 9, 2007, U.S. Appl. No. 10/393,641, "Neural Network Training Data Selection Using Memory Reduced Cluster Analysis for Field Model Development", filed Jun. 8, 2006, 34 pgs.
US Non-Final Office Action, dated Jun. 8, 2007, U.S. Appl. No. 11/270,284, "Well Logging with Reduced Usage of Radioisotopic Sources", filed Nov. 9, 2005, 20 pgs.
US Non-Final Office Action, dated Mar. 20, 2008, U.S. Appl. No. 11/270,284, "Well Logging with Reduced Usage of Radioisotopic Sources", filed Nov. 9, 2005, 10 pgs.
US Non-Final Office Action, dated Mar. 24, 2011, U.S. Appl. No. 12/048,045, "Neural-Network Based Surrogate Model Construction Methods and Applications Thereof", filed Mar. 13, 2008, 39 pgs.
US Non-Final Office Action, dated Nov. 24, 2008, U.S. Appl. No. 11/270,284, PCT/US06/25029, "Well Logging with Reduced Usage of Radioisotopic Sources", filed Nov. 9, 2005, 23 pgs.
US Non-Final Office Action, dated Oct. 20, 2006, U.S. Appl. No. 10/811,403, "Genetic Algorithm Based Selection of Neural Network Ensemble for Processing Well Logging Data", filed Mar. 26, 2004, 11 pgs.
Wilson, "Bulk Density Logging with High-Energy Gammas Produced by Fast Neutron Reactions with Formation Oxygen Atoms", IEEE Nuclear Science Symposium and Medical Imaging Conference Record, vol. 1, Oct. 21-28, 1995, 7 pages.
Y. Jin, et al., "A framework for evolutionary optimization with approximate fitness functions," IEEE Transactions on Evolutionary Computation, vol. 6, No. 5, 2002, pp. 481-494.
Y. Jin, et al., "Neural network regularization and ensembling using multi-objective evolutionary algorithms," in Proc. Congress on Evolutionary Computation, Portland, Oregon, 2004, pp. 1-8.
Zhou, "Genetic Algorithm Based Selective Neural Network Ensemble", 17th International Joint Conference on Artificial Intelligence, vol. 2, Seattle, WA, 2001, pp. 797-802.
Zhou, Zhi-Hua "A Study on Polynomial Regression and Gaussian Process Global Surrogate Model in Hierarchical Surrogate-Assisted Evolutionary Algorithm", IEEE Congress on Evolutionary Computation, Edinburgh, United Kingdom, (Sep. 2005), 6 pgs.
Zhou, Zhi-Hua, et al., "Genetic Algorithm based Selective Neural Network Ensemble", International Joint Conference on Artificial Intelligence, 2001, vol. 2, Seattle, WA, 2001, pp. 797-802.

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20130198007A1 (en)* | 2008-05-06 | 2013-08-01 | Richrelevance, Inc. | System and process for improving product recommendations for use in providing personalized advertisements to retail customers
US8924265B2 (en)* | 2008-05-06 | 2014-12-30 | Richrelevance, Inc. | System and process for improving product recommendations for use in providing personalized advertisements to retail customers
US9081934B2 (en)* | 2009-05-04 | 2015-07-14 | Airbus Engineering Centre India | System and method for collaborative building of a surrogate model for engineering simulations in a networked environment
US20120041734A1 (en)* | 2009-05-04 | 2012-02-16 | Thierry Chevalier | System and method for collaborative building of a surrogate model for engineering simulations in a networked environment
US9721204B2 (en) | 2013-10-28 | 2017-08-01 | Qualcomm Incorporated | Evaluation of a system including separable sub-systems over a multidimensional range
US11531874B2 (en) | 2015-11-06 | 2022-12-20 | Google Llc | Regularizing machine learning models
US11934956B2 (en) | 2015-11-06 | 2024-03-19 | Google Llc | Regularizing machine learning models
US10956823B2 (en) | 2016-04-08 | 2021-03-23 | Cognizant Technology Solutions U.S. Corporation | Distributed rule-based probabilistic time-series classifier
US11281978B2 (en) | 2016-04-08 | 2022-03-22 | Cognizant Technology Solutions U.S. Corporation | Distributed rule-based probabilistic time-series classifier
US11681898B2 (en) | 2016-06-22 | 2023-06-20 | Saudi Arabian Oil Company | Systems and methods for rapid prediction of hydrogen-induced cracking (HIC) in pipelines, pressure vessels, and piping systems and for taking action in relation thereto
US10990873B2 (en) | 2016-06-22 | 2021-04-27 | Saudi Arabian Oil Company | Systems and methods for rapid prediction of hydrogen-induced cracking (HIC) in pipelines, pressure vessels, and piping systems and for taking action in relation thereto
US10451767B2 (en)* | 2016-09-26 | 2019-10-22 | Halliburton Energy Services, Inc. | Neutron porosity log casing thickness corrections
US20190025454A1 (en)* | 2016-09-26 | 2019-01-24 | Halliburton Energy Services, Inc. | Neutron porosity log casing thickness corrections
US10891311B2 (en) | 2016-10-14 | 2021-01-12 | Red Hat, Inc. | Method for generating synthetic data sets at scale with non-redundant partitioning
US10769550B2 (en) | 2016-11-17 | 2020-09-08 | Industrial Technology Research Institute | Ensemble learning prediction apparatus and method, and non-transitory computer-readable storage medium
US10607715B2 (en) | 2017-06-13 | 2020-03-31 | International Business Machines Corporation | Self-evaluating array of memory
US11037650B2 (en) | 2017-06-13 | 2021-06-15 | International Business Machines Corporation | Self-evaluating array of memory
US20200210864A1 (en)* | 2018-01-15 | 2020-07-02 | Dalian Minzu University | Method for detecting community structure of complicated network
US11783099B2 (en) | 2018-08-01 | 2023-10-10 | General Electric Company | Autonomous surrogate model creation platform
US11783195B2 (en) | 2019-03-27 | 2023-10-10 | Cognizant Technology Solutions U.S. Corporation | Process and system including an optimization engine with evolutionary surrogate-assisted prescriptions
US11468306B2 (en) | 2019-11-18 | 2022-10-11 | Samsung Electronics Co., Ltd. | Storage device with artificial intelligence and storage system including the same
US11636336B2 (en) | 2019-12-04 | 2023-04-25 | Industrial Technology Research Institute | Training device and training method for neural network model
US11531790B2 (en)* | 2020-01-03 | 2022-12-20 | Halliburton Energy Services, Inc. | Tool string design using machine learning
US12406188B1 (en) | 2020-03-09 | 2025-09-02 | Cognizant Technology Solutions U.S. Corporation | System and method for evolved data augmentation and selection
US12099934B2 (en) | 2020-04-07 | 2024-09-24 | Cognizant Technology Solutions U.S. Corporation | Framework for interactive exploration, evaluation, and improvement of AI-generated solutions
US11775841B2 (en) | 2020-06-15 | 2023-10-03 | Cognizant Technology Solutions U.S. Corporation | Process and system including explainable prescriptions through surrogate-assisted evolution
US12424335B2 (en) | 2020-07-08 | 2025-09-23 | Cognizant Technology Solutions U.S. Corporation | AI based optimized decision making for epidemiological modeling
US12051237B2 (en) | 2021-03-12 | 2024-07-30 | Samsung Electronics Co., Ltd. | Multi-expert adversarial regularization for robust and data-efficient deep supervised learning

Also Published As

Publication number | Publication date
GB0916094D0 (en) | 2009-10-28
GB2462380A (en) | 2010-02-10
US20080228680A1 (en) | 2008-09-18
GB2462380B (en) | 2012-02-15
NO20093113L (en) | 2009-12-14
WO2008112921A1 (en) | 2008-09-18

Similar Documents

Publication | Title
US8065244B2 (en) | Neural-network based surrogate model construction methods and applications thereof
Anifowose et al. | A parametric study of machine learning techniques in petroleum reservoir permeability prediction by integrating seismic attributes and wireline data
Rostami et al. | Rigorous prognostication of permeability of heterogeneous carbonate oil reservoirs: Smart modeling and correlation development
Ghorbani et al. | Performance comparison of bubble point pressure from oil PVT data: Several neurocomputing techniques compared
Lim | Reservoir properties determination using fuzzy logic and neural networks from well data in offshore Korea
Salehi et al. | Estimation of the non records logs from existing logs using artificial neural networks
Maschio et al. | Bayesian history matching using artificial neural network and Markov Chain Monte Carlo
Sahoo et al. | Damage assessment of structures using hybrid neuro-genetic algorithm
Park et al. | Handling conflicting multiple objectives using Pareto-based evolutionary algorithm during history matching of reservoir performance
Ferentinou et al. | Computational intelligence tools for the prediction of slope performance
Velez-Langs | Genetic algorithms in oil industry: An overview
Ma et al. | Practical implementation of knowledge-based approaches for steam-assisted gravity drainage production analysis
Amirian et al. | Data-driven modeling approach for recovery performance prediction in SAGD operations
Sivaprasad et al. | Fatigue damage prediction of top tensioned riser subjected to vortex-induced vibrations using artificial neural networks
Ghiasi-Freez et al. | Improving the accuracy of flow units prediction through two committee machine models: an example from the South Pars Gas Field, Persian Gulf Basin, Iran
Alfarizi et al. | Well control optimization in waterflooding using genetic algorithm coupled with Artificial Neural Networks
Khoukhi | Hybrid soft computing systems for reservoir PVT properties prediction
Song et al. | Probabilistic prediction of uniaxial compressive strength for rocks from sparse data using Bayesian Gaussian process regression with Synthetic Minority Oversampling Technique (SMOTE)
Oloso et al. | Prediction of crude oil viscosity and gas/oil ratio curves using recent advances to neural networks
Davari et al. | Permeability prediction from log data using machine learning methods
Taherkhani et al. | Capacity prediction and design optimization for laterally loaded monopiles in sandy soil using hybrid neural network and sequential quadratic programming
Wang et al. | A novel FDEM-GSA method with applications in deformation and damage analysis of surrounding rock in deep-buried tunnels
Jitchaijaroen et al. | Bearing capacity prediction of open caissons in anisotropic clays utilizing a deep neural network coupled with a population based training approach
Surguchev et al. | IOR evaluation and applicability screening using artificial neural networks
Zhou et al. | Efficient back analysis of multiphysics processes of gas hydrate production through artificial intelligence

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:HALLIBURTON ENERGY SERVICES, INC., TEXAS

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, DINGDING;ZHONG, ALLAN;HAMID, SYED;AND OTHERS;REEL/FRAME:020916/0880;SIGNING DATES FROM 20080429 TO 20080502

Owner name:HALLIBURTON ENERGY SERVICES, INC., TEXAS

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, DINGDING;ZHONG, ALLAN;HAMID, SYED;AND OTHERS;SIGNING DATES FROM 20080429 TO 20080502;REEL/FRAME:020916/0880

FEPP | Fee payment procedure

Free format text:PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF | Information on status: patent grant

Free format text:PATENTED CASE

FPAY | Fee payment

Year of fee payment:4

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:8

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:12

