BACKGROUND OF INVENTION

The present invention generally relates to a method and an apparatus for analyzing and/or optimizing a design and, more particularly, to a probabilistic, software based system which may be used to selectively analyze the overall reliability, robustness, and other features and/or characteristics of a design which is created by and/or which is based upon a computer aided design type model, effective to allow for the production of an item having desirable features and characteristics.
Traditional design techniques require the creation of one or more physical prototypes of an item or element which is to be produced. The prototypes are then subjected to a variety of tests. Further, various changes are made to the prototypes and/or new prototypes are created as a result of these tests and these changes or modifications are tested in order to allow for the production of a final product which has desirable characteristics and features (e.g., a relatively high reliability).
While these traditional design techniques or methods are effective, they are relatively costly and time consuming. One attempt at addressing these drawbacks utilizes a computer generated or computer aided design (“CAD”) model which simulates the item or product which is to be created and which represents and/or comprises a certain design of the item or product. The computer system may be used to perform various tests and/or modifications upon the model or design, thereby allowing a user to determine the desirability of the design.
While this approach does reduce the amount of time required for testing, it does not reliably assess the operation of the item or product in a “real operational setting” since this approach does not account for variations or dynamic changes occurring in the values of the various variables used to model the item or product which commonly occur in a “real operational setting”. This deterministic approach therefore does not allow a product or item to be analyzed in a “real world” situation and does not reliably allow for the production of an item having desired characteristics and/or attributes.
There is therefore a need for a new and improved system which allows computer type models to be created of an item or product and which further allows these computer type models to be analyzed and tested in an environment which substantially simulates the “real operational environment” into which the produced item or product is to be operationally placed, thereby allowing for the creation of an item or product having a relatively high reliability, robustness and various other features and/or characteristics.
SUMMARY OF INVENTION

It is a first non-limiting advantage of the present invention to provide a method and an apparatus for analyzing a design based upon and/or represented by a computer generated model of an item or product in a manner which overcomes some or all of the previously delineated drawbacks of prior methods and apparatuses.
It is a second non-limiting advantage of the present invention to provide a method and an apparatus for probabilistically analyzing a design based upon and/or represented by a computer generated model of an item or product in a manner which overcomes some or all of the previously delineated drawbacks of prior methods and apparatuses.
It is a third non-limiting advantage of the present invention to provide a method and an apparatus for analyzing a design based upon and/or represented by a computer generated model of a product or item in a cost effective and efficient manner, effective to allow an item or product to be produced having a relatively high reliability and various other desirable features and characteristics.
According to a first aspect of the present invention, a system is provided to analyze a design which is based upon a computer generated model. The system includes a computer that operates under stored program control and which probabilistically analyzes the computer generated model.
According to a second aspect of the present invention, a method for analyzing a design which is based upon a computer generated model is provided. The method includes the steps of receiving the computer generated model; creating at least one variable; and probabilistically analyzing the computer generated model by the use of the at least one variable.
These and other features, aspects, and advantages of the present invention will become apparent from a reading of the detailed description of the preferred embodiment of the invention and by reference to the following drawings.
BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a block diagram of a computer analysis system which is made in accordance with the teachings of the preferred embodiment of the invention in communicative combination with a conventional computer aided design system;
FIG. 2 is a flowchart illustrating the sequence of operational steps which comprise the methodology of the preferred embodiment of the invention;
FIG. 3 is a diagram which is created in step 30 of the flowchart which is shown in FIG. 2;
FIG. 4 is a flowchart which illustrates a sequence of operational steps used to generate a performance surface as required by the methodology of the preferred embodiment of the invention;
FIG. 5 is a chart illustrating the response of a design which is analyzed by system 10 by the use of certain sampled design points in accordance with step 120 of the methodology of the preferred embodiment of the invention;
FIG. 6 is a flowchart illustrating a sequence of operational steps used by the methodology of the preferred embodiment of the invention to substantially optimize the developed approximation model;
FIG. 7 is a diagram illustrating the methodology used by and/or incorporated within the system of the preferred embodiment of the invention to determine a “most probable point”; and
FIG. 8 is a diagram illustrating the methodology used by and/or incorporated within the system of the preferred embodiment of the invention to determine the robustness of a variable or parameter.
DETAILED DESCRIPTION

Referring now to FIG. 1, there is shown a computer system 10 which is made in accordance with the teachings of the preferred embodiment of the invention and which selectively communicates with a conventional or typical computer aided design system 12.
Particularly, computer system 10 includes a computer processor or controller 14 which operates under stored program control, a display 16 which is operatively, physically, and communicatively coupled to the processor or controller 14 and which is adapted to visually display certain information which is received from the processor or controller 14 or from system 12, and a keyboard 17 or other conventional and commercially available input device which is physically, operatively, and communicatively coupled to the processor or controller 14 and which is adapted to allow a user of the system 10 to command the system 10 to selectively perform one or more operations.
Conventional computer aided design model 18 represents and/or comprises a certain design or product which may be selectively communicated to the system 10 (e.g., to the controller or processor 14) from the system 12. This model (e.g., the design which is represented by the model 18) is then evaluated according to the methodology of the preferred embodiment of the invention.
Referring now to FIG. 2, there is shown a flowchart 20 which illustrates a sequence of operational steps which may be selectively performed by the system 10 (e.g., by the processor or the controller 14 which operates under stored program control). The initial step 30 of flowchart 20 requires the creation of an analytical reliability and robustness or parameter (“P”) diagram 32 by the user of system 10.
Particularly, diagram 32, as best shown in FIG. 3, includes a first column 34, which is denoted as “parameter number”. Each row 36 within column 34 uniquely identifies a parameter within the computer aided design model 18 which is to be analyzed. A “parameter” may be defined as some measurable attribute or characteristic of the received design 18 and may have one or more constituent variables. Diagram 32 includes a second column 38, which is denoted as “parameter description”, and an entry 39 in the column 38 describes the parameter resident within the same row 36 as the entry 39. Diagram 32 further includes a third column 40 which is denoted as “nominal”, and an entry 41 in a row 36 of column 40 denotes the nominal value of the parameter which is referenced in the same row 36. Diagram 32 further includes a pair of columns 42 which are denoted as “design range” and which include a fourth and a fifth column 44, 46. The fourth column 44 is denoted as “lower bound” and the fifth column 46 is denoted as “upper bound”. An entry 45, within column 44, specifies the lowest feasible or acceptable/desired value for the parameter which is referred to in the same row 36 as the entry 45. An entry 47, within column 46, specifies the highest feasible or acceptable/desired value for the parameter which is referred to in the same row 36 as the entry 47.
The diagram 32 includes a sixth column 48 which is denoted as “variation”. An entry 49, within the column 48, specifies the amount by which the nominal value, resident within the same row 36 as the entry 49, varies in the “real physical or operational environment” (e.g., within a vehicle). The diagram 32 further includes a seventh column 50 which is denoted as “parameter in model?”. An entry 51, within column 50, denotes whether the parameter, which resides within the same row 36 as the entry 51, is included within the model 18. The diagram 32 includes an eighth column 52 which is denoted as “surrogate?”. An entry 53 within the column 52 delineates whether the parameter, which is referred to in the same row 36 as the entry 53, is a surrogate. The term “surrogate,” as should be appreciated by those of ordinary skill in the art, delineates a variable which may be the physical manifestation of another variable (e.g., the variable of temperature may manifest itself in a variable length of a desired product and therefore the variable of length may be a surrogate for the variable of temperature).
Diagram 32 includes a pair of columns 54, and this pair of columns is denoted as “sensitivity available?”. The constituent columns 56, 58 of column pair 54 are respectively denoted as “R1” and “R2”. An entry 57, within the column 56, denotes whether the sensitivity is available for the parameter resident within the same row 36 as the entry 57 for a first response of the model or design 18 to an input. An entry 59, within the column 58, denotes whether the sensitivity is available for the parameter resident within the same row 36 as the entry 59 for a second response of the model or design 18 to an input.
Diagram 32 further includes an eleventh column 60, which is denoted as “remark”. An entry 61 within the column 60 denotes or comprises any remarks that the user of system 10 desires to make with respect to the parameter resident within the same row 36 as the entry 61.
Diagram 32 includes a section 62 which is delineated as “noise factor table” and which includes entries 63 which are representative of and/or which comprise a noise or uncontrolled variable associated with the overall design and which impacts the performance of the overall design.
Diagram 32 further includes a section 64 which is denoted as “study goal” and which has three possible entries 66, 68, and 70 which are respectively denoted as “assessment”, “parameter design”, and “tolerance design”. Entry 66 denotes the reliability/robustness assessment of the design made by the user of the system 10 upon the completion of the methodology of the preferred embodiment of the invention, entry 68 denotes the desired values for each of the parameters upon the completion of the methodology of the preferred embodiment of the invention, and entry 70 denotes the amount of variance which is acceptable or desired in each of the parameter design values.
Step 30 requires that at least one of the entries 66, 68, and 70 be selected and defined by a user of system 10. Diagram 32 further includes a section 72 having a first entry 74 which is entitled “system input” and which requires a description of the input signal(s) which is (are) applied to the model or design 18 by the controller or processor 14, and a second entry 76 which is denoted as “system responses” and which has multiple entries 78 which require a description of the respective responses which are expected after one or more inputs have been applied to the model or design 18. Step 30 is completed upon the completion of the diagram 32, and this diagram 32 may be used to ensure that all of the necessary parameters are evaluated by the system 10 and to compare the analytical results of the system 10 against the desired attributes or characteristics of each of the parameters, thereby allowing the analyzed model or design to be used to construct an item or product having desired characteristics or attributes.
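By way of a non-limiting illustration only, the diagram 32 described above may be represented programmatically by a simple data structure such as the following Python sketch. The class and field names used below (e.g., Parameter, PDiagram) are illustrative assumptions and are not part of the preferred embodiment.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Parameter:
    """One row 36 of the P-diagram 32 (field names are illustrative)."""
    number: int            # "parameter number" (column 34)
    description: str       # "parameter description" (column 38)
    nominal: float         # "nominal" value (column 40)
    lower_bound: float     # design range lower bound (column 44)
    upper_bound: float     # design range upper bound (column 46)
    variation: float       # real-world variation about the nominal (column 48)
    in_model: bool         # "parameter in model?" (column 50)
    surrogate: bool        # "surrogate?" (column 52)
    sensitivity_r1: bool   # sensitivity available for response R1 (column 56)
    sensitivity_r2: bool   # sensitivity available for response R2 (column 58)
    remark: str = ""       # free-form remark (column 60)


@dataclass
class PDiagram:
    """The diagram 32 as a whole, including the noise factor table 62 and study goal 64."""
    parameters: List[Parameter] = field(default_factory=list)
    noise_factors: List[str] = field(default_factory=list)      # section 62
    study_goal: str = "assessment"   # or "parameter design" / "tolerance design" (section 64)
    system_input: str = ""           # entry 74: description of applied input signal(s)
    system_responses: List[str] = field(default_factory=list)   # entries 78
```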
Step 30 is followed by step 80 in which the controller 14 is made to create and/or select a design region with which to sample the performance surface associated with and/or created by the design 18 as it is placed into a simulated operating environment by the controller or processor 14.
That is, as should be apparent to those of ordinary skill in the art, the relationship between all of the input variables or parameters and the output or performance may be thought of as, or cooperatively forms, a nonlinear response surface (e.g., each allowable and unique combination of input parameters produces output values which cooperatively form a performance surface). One non-limiting advantage of the invention is that a relatively small portion (e.g., the portion may be continuous or be formed from a plurality of discrete and discontinuous sample points) of the design space (e.g., the space formed from the interrelationship between all of the allowable or possible combinations of input values of all of the parameters or variables) and a relatively small portion of the performance surface are used to reliably approximate the performance surface of the entire design, thereby reducing the overall cost associated with the creation and operation of the simulation and reducing the amount of time in which the simulation system must be operated. In the preferred embodiment, the samples are “spread out” through the entire design space and cooperatively form a true representation of the design space. Moreover, in the preferred embodiment, each sample point provides or is related to a certain performance sample point on the performance surface. Thus, the overall design space creates a performance space. The steps required by this portion or step 80 of the methodology 20 are delineated in the flowchart 90 of FIG. 4 and, as delineated above, seek to determine what portion of the overall design space is actually needed to reliably and desirably approximate the overall performance surface.
As shown in flowchart 90 of FIG. 4, the first step 92 of the portion or step 80 of the methodology 20 of the preferred embodiment of the invention requires that a relatively random sampling be made of the design space using a modified Latin Hypercube sampling technique. That is, a traditional Latin Hypercube sampling technique does not provide optimal spacing between sample points and therefore the obtained sample does not reliably represent or approximate the overall design or performance surface. In the preferred embodiment of the invention, as is further delineated below, the conventional Latin Hypercube sampling technique is heuristically combined with conventional “greedy” and “Tabu” methodologies to achieve a unique or modified Latin Hypercube algorithmic combination which allows for a substantially optimized approximation of the performance space.
Thus, in step 94, a Tabu set “T” is created and is initially made to be empty. An entropy analysis is applied to the previously obtained random samples, in step 96, and the “best” solution (e.g., the solution having the lowest entropy) is placed into a set denoted as “m+”.
Step 98 follows step 96 and, in this step 98, a pairwise or “greedy” substitution is made of the previously obtained samples and, in step 100, an entropy analysis is performed on each pairwise substitution and each entropy is compared with the entropy of the sample currently within the set “m+”. Step 102 follows step 100 and, in this step 102, only those samples having a certain entropy (e.g., having an entropy below some threshold which may be equal to the entropy of the sample placed in “m+”) are placed in the set “T”. Step 104 follows step 102 and, in this step 104, the controller or processor 14 determines whether the required number of iterations or time has elapsed or whether the entropy has not sufficiently improved during a certain time or sampling interval and, based upon this analysis, proceeds to step 106 and terminates methodology 90, or proceeds to step 98 in which another pairwise substitution is made and placed into the set “T” only if its entropy is lower than the entropy of the current sample which is resident within the set “T”.
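The following Python sketch illustrates, by way of example and without limitation, one possible reading of steps 92 through 106. The specification does not give the exact entropy formula or the precise role of the Tabu set; the design_criterion function below and the use of the Tabu set to forbid previously rejected substitutions are therefore assumptions made purely for illustration.

```python
import numpy as np


def latin_hypercube(n_samples, n_dims, rng):
    """Step 92: a basic Latin Hypercube sample on the unit cube (one point per stratum
    in every dimension)."""
    edges = np.linspace(0.0, 1.0, n_samples + 1)
    points = edges[:-1, None] + rng.uniform(size=(n_samples, n_dims)) / n_samples
    for j in range(n_dims):
        points[:, j] = points[rng.permutation(n_samples), j]   # shuffle each column's strata
    return points


def design_criterion(points):
    """Stand-in 'entropy' score (assumption): lower is better.  The negative
    log-determinant of a Gaussian correlation matrix is used here because it rewards
    well-spread points; the exact formula is not specified in the text."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    corr = np.exp(-d2) + 1e-9 * np.eye(len(points))
    _, logdet = np.linalg.slogdet(corr)
    return -logdet


def modified_latin_hypercube(n_samples, n_dims, n_iters=500, seed=0):
    """Sketch of steps 92-106: a random Latin Hypercube sample improved by greedy
    pairwise substitutions, with the Tabu set used (an assumption about its exact
    role) to avoid revisiting rejected substitutions."""
    rng = np.random.default_rng(seed)
    best = latin_hypercube(n_samples, n_dims, rng)        # step 92: initial random sample
    best_score = design_criterion(best)                   # step 96: score of the "m+" sample
    tabu = set()                                          # step 94: Tabu set "T", initially empty
    for _ in range(n_iters):                              # step 104: iteration/time budget
        i, j = rng.choice(n_samples, size=2, replace=False)
        k = int(rng.integers(n_dims))
        move = (min(i, j), max(i, j), k)
        if move in tabu:
            continue
        candidate = best.copy()                           # step 98: pairwise ("greedy") substitution
        candidate[[i, j], k] = candidate[[j, i], k]
        score = design_criterion(candidate)               # step 100: entropy of the substitution
        if score < best_score:                            # step 102: keep only improving samples
            best, best_score = candidate, score
        else:
            tabu.add(move)                                # forbid revisiting this substitution
    return best
```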
At the conclusion of flowchart 90, controller or processor 14 has a set of design points (e.g., a certain portion of the design space) which cooperatively create and/or cooperatively form a certain portion of the performance space or surface (e.g., the resultant performance surface is cooperatively formed by the solutions or output values of each of these samples) and this performance surface may be used to extrapolate or approximate the remainder of the performance surface.
Step 120 follows step 80 and, in this step 120, a computer based simulation is conducted by the processor or controller 14 in order to determine the response which is obtained from such an analysis.
That is, as shown by chart 130 of FIG. 5, a set of design sample points or values 132 (e.g., each set includes each of the parameters 39 specified in a single row 36 of the chart 32) is communicated to the processor or controller 14 and applied to and/or incorporated within the design 18, thereby causing the design 18 to respond by outputting at least one value 134 for each set of design or input points/values 132. That is, the design 18 may have many sets of design points 132 which are sequentially communicated to it and each set of design points or parameters 132 causes a respective output response 134 to be created. These sets of design parameters or points 132 and the respective responses 134 are noted in step 120 of methodology or flowchart 20.
Step 140 follows step 120 and, in this step 140, the controller or processor 14 utilizes the publicly available MARS algorithms or methodology which is disclosed in the paper entitled Multivariate Adaptive Regression Splines, which is authored by Jerome H. Friedman, which is published in the Annals of Statistics (1991), vol. 19, No. 1, 1-141, and which is fully and completely incorporated herein by reference.
Particularly, the MARS methodology is applied to each row or set of design points or parameters 132 and causes the creation of a respective output value for each such row or set of design points or parameters 132. The difference between the output value which is obtained from the design 18 for a row or set 132 and the output value obtained from the MARS methodology for the same row or set of parameters 132 is defined as the “residual”. In this manner, a residual value is created for each row or set of design points or values 132.
Further, in step 140, the controller or processor 14 uses the residual value from each row 36 or set of design points 132 and uses the publicly available Kriging methodology or algorithms to create an additional output value for each row 132 (e.g., the residual value for a row 132 or set of design points 132 is used, by the Kriging methodology, to generate an output value for that row or set of design points 132). The Kriging methodology or algorithms are set forth, for example and without limitation, in the paper entitled Screening, Predicting, and Computer Experiments, which is written by William J. Welch et al., which is published in Technometrics, vol. 34, No. 1 (February 1992), and which is fully and completely incorporated herein by reference. The respective MARS and Kriging output values, for each row or set of design points 132, are then added. An output value 134 should be substantially equal to the addition of the MARS and Kriging values for the row 132 which generated the value 134, thereby validating the use of the combined MARS and Kriging simulation methodologies to simulate the performance surface. At the conclusion of this portion of step 140, the respective MARS and Kriging output values for each row or set of design points or parameters 132, which have been added, are respectively stored within the computer or processor 14 as an output value.
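The following Python sketch illustrates the two-stage trend-plus-residual structure of step 140. It is an assumption-laden example only: a plain linear model stands in for the MARS fit (any MARS implementation could be substituted without changing the structure), and scikit-learn's GaussianProcessRegressor is used as a generic Kriging model rather than the specific Welch et al. formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.linear_model import LinearRegression


class TrendPlusKrigingSurrogate:
    """Sketch of the two-stage approximation of step 140: a global trend model
    (playing the role of MARS) plus a Kriging model fitted to its residuals."""

    def __init__(self):
        self.trend = LinearRegression()   # stand-in for the MARS fit (assumption)
        self.residual_model = GaussianProcessRegressor(
            kernel=RBF(length_scale=1.0), normalize_y=True)

    def fit(self, X, y):
        self.trend.fit(X, y)                              # fit the global trend
        residuals = y - self.trend.predict(X)             # "residual" for each design point 132
        self.residual_model.fit(X, residuals)             # Kriging on the residuals
        return self

    def predict(self, X):
        # The approximated response is the sum of the trend and the Kriging correction,
        # mirroring the addition of the MARS and Kriging output values described above.
        return self.trend.predict(X) + self.residual_model.predict(X)
```

Because a Kriging interpolator substantially reproduces the residuals at the training points, the sum of the two predictions substantially matches the observed output values 134 there, which mirrors the validation check described above.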
A certain optimization methodology is then performed in step 140 in order to ensure that the set of design parameters or points, obtained in step 80, provides an approximation of the performance surface to a desired level of accuracy. This methodology is shown within flowchart 150 of FIG. 6.
Particularly, flowchart or methodology 150 begins with an initial step 152 in which the controller or processor 14 retrieves the information resident within the chart 130 of FIG. 5. This information is then used by the controller or processor 14 in combination with the Kriging methodology software to evaluate the correlation or the amount of non-linearity for each of the utilized parameters. In this manner, the most “critical” parameters are identified (e.g., those parameters whose behavior or output values have a respective non-linear relationship to the input values and which are relatively difficult to approximate, especially with a relatively small sample). In step 154, a certain number of additional samples are made of those parameters having a relatively high level of non-linearity and these samples are added to those included within the chart 130 (e.g., the additional samples 132 and their respective output values 134 are noted on the chart 130 and stored within the computer or processor 14). In step 156, the modeling error is evaluated on the new matrix of design points (e.g., the respective output values associated with these new samples which are predicted by the combined MARS and Kriging methodologies and the actual respective output values generated by and/or from the design 18 for these new sample points are compared). The difference between these respective values for each set of new sample points is defined as the modeling error. In step 158, the modeling error is compared with a threshold value. If, in step 158, each such error is below the threshold value, the process is concluded. Otherwise, the two sets of design points or parameters (e.g., the previously obtained and new sample points) are combined in step 159 and steps 152, 154, 156, and 158 are again completed. At the conclusion of methodology 150, sufficient design parameter samples are obtained in order to produce an approximation of the performance surface having a sufficient degree of accuracy.
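A minimal sketch of the refinement loop of flowchart 150 is set forth below. The helper arguments evaluate_design and build_surrogate are hypothetical placeholders for the CAD model 18 and the combined MARS/Kriging fit, and the way new samples are placed near existing points is a simplifying assumption; the preferred embodiment instead concentrates new samples on the most non-linear parameters.

```python
import numpy as np


def refine_surrogate(X, y, evaluate_design, build_surrogate, rng,
                     error_threshold=0.01, n_extra=5, max_rounds=10):
    """Sketch of flowchart 150: add samples and re-fit until the modeling error on
    the new points falls below a threshold."""
    for _ in range(max_rounds):
        surrogate = build_surrogate(X, y)                       # step 152: fit the current data
        # steps 152/154: in the preferred embodiment the most non-linear parameters are
        # resampled; here new points are simply drawn near existing ones (assumption).
        idx = rng.choice(len(X), size=n_extra)
        X_new = X[idx] + 0.05 * rng.standard_normal(X[idx].shape)
        y_new = np.array([evaluate_design(x) for x in X_new])   # true responses from design 18
        errors = np.abs(surrogate.predict(X_new) - y_new)       # step 156: modeling error
        if np.all(errors < error_threshold):                    # step 158: accuracy reached
            return surrogate, X, y
        X = np.vstack([X, X_new])                               # step 159: merge the sample sets
        y = np.concatenate([y, y_new])
    return build_surrogate(X, y), X, y
```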
Step 200 follows step 140 and, in this step 200, the design parameters which approximate the performance surface are input to the processor or controller 14 in order to obtain certain information about the overall design by the use of a successive linear approximation method or by use of any other conventional approximation or simulation methodology. Step 201 follows step 200 and, in this step 201, the user or the controller or processor determines whether the design is satisfactory. If the design is satisfactory, step 201 is followed by step 203 in which the controller or processor 14 adopts the design as the “final design”. Alternatively, step 201 is followed by step 205 in which the controller and/or processor 14 conducts an optimization process or methodology.
In step 205, for example and without limitation, the controller 14 creates a probability distribution function for each output value (which is associated with and/or is found from at least one variable), such as 134, and then calculates the distance between a first certain percentile value, for example and without limitation a tenth percentile value, and a second certain percentile value, for example and without limitation a ninetieth percentile value, thereby assessing the amount of variance or robustness in each of the output values. A large distance between these respective percentile points evidences a large amount of undesirable and respective variance, and the system 10 searches for values of these variables which reduce this distance and allow for the creation of a more robust product.
That is, as shown best in graph 300 of FIG. 8, controller or processor 14 further evaluates the relationship between the value of the performance (e.g., appearing on axis 304) and the probability of occurrence (appearing on axis 302) in order to determine the robustness of a certain design setting. Particularly, a relatively robust design setting produces a relatively short distance 310 between a first percentile point, such as and without limitation the tenth percentile point 312, and a second percentile point, such as and without limitation the ninetieth percentile point 314.
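By way of example and without limitation, the percentile-distance measure of robustness shown in FIG. 8 may be computed as in the following Python sketch; the sample values are fabricated purely for illustration.

```python
import numpy as np


def percentile_spread(samples, lo=10, hi=90):
    """Distance 310 between the tenth (312) and ninetieth (314) percentile points of a
    response's probability distribution; a smaller spread indicates a more robust setting."""
    return np.percentile(samples, hi) - np.percentile(samples, lo)


# Illustrative use (values are made up): propagate the noise distributions through the
# surrogate at two candidate design settings and compare the resulting spreads.
rng = np.random.default_rng(0)
setting_a = rng.normal(loc=100.0, scale=2.0, size=10_000)   # tight response distribution
setting_b = rng.normal(loc=100.0, scale=8.0, size=10_000)   # wide response distribution
print(percentile_spread(setting_a))   # smaller distance -> more robust setting
print(percentile_spread(setting_b))
```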
The controller or processor 14, in this step 200, utilizes the saddle point method with a second order approximation in order to compute a probability, which may be used, by way of example and without limitation, to obtain the most probable point. Particularly, in the most preferred embodiment of the invention, the most probable point is representative of the setting of compound noise within the system. It should be noted that the use of the most probable point of the performance surface allows a simulation to be “probabilistically” operated or analyzed (e.g., at the most probable point) in order to allow the simulation to provide “real world” or “real operational” results.
Referring now to FIG. 7, there is shown a graph 250 which illustrates the technique or methodology used by controller or processor 14 to locate or fix the most probable point which is defined, within the system 10, as the point 260 which lies on the border 251 of the acceptable and non-acceptable performance spaces which are respectively denoted as spaces 252 and 254. That is, the controller or processor 14 evaluates the performance space by use of the design parameters 256, 258 (e.g., each performance point 259 is a function of each parameter 256, 258) and determines which of the points 259 is most probable. In other non-limiting embodiments of the invention, the performance space may be comprised of many variables and parameters. As further shown within FIG. 7, in the preferred embodiment of the invention, the performance spaces 252 and 254 are normalized, thereby having the nominal value at the origin, and the most probable point thereby occurs on the boundary 251 and has the smallest distance to the origin 262 of any point on the boundary 251. The sensitivity around the point 260 determines, in part, the amount of influence of the variables which are plotted on one or both of the axes 256, 258.
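The geometric definition of the most probable point illustrated in FIG. 7 (the point on the boundary 251 nearest the origin 262 of the normalized space) can be sketched as a small constrained minimization, as shown below. This sketch does not implement the saddle point method with second order approximation referred to above; the limit_state function is a hypothetical placeholder supplied by the caller, and the example boundary is fabricated for illustration only.

```python
import numpy as np
from scipy.optimize import minimize


def most_probable_point(limit_state, n_dims, x0=None):
    """Find the point 260 on the boundary 251 (limit_state(u) == 0) that is closest
    to the origin 262 of the normalized space, per FIG. 7.  limit_state separates the
    acceptable (252) and unacceptable (254) performance regions."""
    x0 = np.full(n_dims, 0.1) if x0 is None else x0
    result = minimize(lambda u: float(np.dot(u, u)),            # squared distance to the origin
                      x0,
                      constraints=[{"type": "eq", "fun": limit_state}])
    return result.x


# Illustrative limit state (made up): performance becomes unacceptable when u1 + 2*u2 > 3.
mpp = most_probable_point(lambda u: 3.0 - (u[0] + 2.0 * u[1]), n_dims=2)
print(mpp)   # lies on the line u1 + 2*u2 = 3, at the point closest to the origin
```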
In the most preferred embodiment of the invention, the “influence” of a variable is defined by the total or sum of the sensitivity and the amount of variability (“noise”) of the variable. In the system of the preferred embodiment of the invention, the controller 14 analyzes the influence of each parameter and each constituent variable in order to display to the user those parameters and/or constituent variables, for each design point, which are most influential in the overall design, thereby allowing the user to select design points having other parameters which may be easier to control and which have less influence. Lastly, in the preferred embodiment of the invention, the contribution of compound noise is dynamically calculated for each design point which is utilized, in order to obtain a more accurate overall result. That is, the calculated compound noise may then be utilized by the simulation methodology, in a conventional manner, to achieve a more accurate overall result.
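Assuming, as stated above, that the influence of a variable is the sum of its sensitivity and its variability, a ranking of variables by influence might be sketched as follows; the combination rule and the example values are assumptions made purely for illustration.

```python
import numpy as np


def rank_influence(sensitivities, variations, names):
    """Rank variables by 'influence', taken here literally as the sum of the sensitivity
    (e.g., around the most probable point 260) and the variable's variation (noise)."""
    influence = np.abs(sensitivities) + np.abs(variations)
    order = np.argsort(influence)[::-1]
    return [(names[i], float(influence[i])) for i in order]


# Illustrative values (made up): the most influential parameter is listed first, so a
# designer might prefer settings where an easier-to-control parameter dominates instead.
print(rank_influence(np.array([0.8, 0.2, 0.5]),
                     np.array([0.3, 0.1, 0.05]),
                     ["length", "temperature", "thickness"]))
```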
It is to be understood that the invention is not limited to the exact construction or method which has been delineated above, but that various changes and modifications may be made without departing from the spirit and the scope of the invention as is more fully delineated in the following claims. It should be further appreciated that the combined MARS and Kriging methodologies allow for the creation of an accurate simulation by use of only a relatively small portion of the design space and the performance surface.