Journal of Industrial Engineering, University of Tehran, Special Issue, 2011, PP. 39-49
Global Meta-Model for System Level Multidisciplinary Design Optimization

Yaghoob Gholipour 1, Parviz Mohammad Zadeh *2 and Mohadeseh Sadat Shirazi 2

1 Department of Civil Engineering, University of Tehran, Tehran, Iran
2 Faculty of New Science and Technology, University of Tehran, Tehran, Iran

(Received 6 November 2010, Accepted 28 May 2011)

* Corresponding author: Tel: +98-21-61118505, Fax: +98-21-88617087, Email: [email protected]

Abstract
This paper presents an efficient global meta-model building technique for solving high fidelity multidisciplinary design optimization (MDO) problems. The main difficulties associated with MDO are often characterized by interdisciplinary couplings, the high computational cost of analyses in the individual disciplines, and a large number of design variables and constraints. These issues result in a very high overall computational cost, limiting the application of MDO to complex industrial design problems. To address these issues, a combination of a global meta-model built with the moving least squares method (MLSM) and a trust region strategy is introduced. The global meta-model is used to identify the feasible and infeasible regions, and the trust region strategy is used for a detailed search of the feasible region. The technique is demonstrated on a test problem, and its effectiveness for modeling and system level collaborative optimization using high fidelity models is studied. The results show that meta-models based on MLSM provide a high degree of accuracy whilst achieving a considerable reduction in computational cost.

Keywords: Multidisciplinary design optimization, Meta-model, Moving least squares method, Collaborative optimization, Trust region strategy

Introduction

Most industrial engineering design problems are multidisciplinary in nature (e.g. aerospace and automotive). Multidisciplinary design optimization (MDO) has become an effective method for solving such industrial design problems. Collaborative optimization (CO) is one of the main MDO approaches for solving multidisciplinary design problems. The key concept in the CO approach is the decomposition of the design problem into two levels, namely the discipline and system levels. The system level optimizer minimizes the system level objective while satisfying consistency requirements among the disciplines by enforcing equality constraints at the system level, which coordinate the interdisciplinary couplings. Despite many advantages, the methodology has not become a mainstream design optimization tool in industry due to high computational costs. In addition, the most important difficulty specifically associated with CO is its system level convergence rate, because CO ensures interdisciplinary compatibility by means of system level equality constraints and attempts to minimize the disagreement between disciplines through disciplinary optimization. The use of equality constraints at the system level to represent disciplinary feasible regions introduces numerical and computational difficulties, as the discipline level optima are non-smooth and noisy functions of the system level parameters. The implications of these issues are that derivative-based optimization techniques cannot be used for the system level optimization, while robust optimization techniques such as genetic algorithms (GA) and particle swarm optimization (PSO) are prohibitively expensive for solving CO. These issues pose significant barriers to the application of CO to industrial design problems based on high fidelity simulation models (e.g. detailed finite element (FEM) and computational fluid dynamics (CFD) models). To address these issues, this paper introduces two levels of meta-model building techniques, with the emphasis on the construction of system level meta-models using the moving least squares method (MLSM).
The use of meta-models or approximations in design optimization in general, and MDO in particular, has become popular for reducing the computational cost and filtering out the numerical noise of high fidelity models in the optimization process. In addition, it provides a means for rapid design space exploration and, more importantly, visualization of the design search space. The basic approach is to replace a computationally expensive high fidelity model by an approximate model that is computationally very efficient. Such an approximate model is often referred to as an approximation or meta-model ("model of a model"). These terms are used interchangeably throughout this paper.
Several meta-model building methods have been developed; some of the well-known ones are polynomial regression (PR) [1], the moving least squares method (MLSM) [2], Kriging [3], multivariate adaptive regression splines (MARS) [4] and radial basis functions (RBF) [5]. These data fitting approximation models have become attractive as they are simple to construct and generally do not require sensitivity information. However, they suffer from a number of limitations. The cost of providing high fidelity data for fitting a global meta-model can be computationally expensive, and in some cases it is difficult to build a high quality meta-model with low order polynomials, as well as to construct an appropriate sampling scheme with a sufficient number of plan points in the design variable space. These difficulties become more severe as the number of design variables increases (i.e. the curse of dimensionality). Variable fidelity modelling, which avoids the curse of dimensionality, provides an alternative to conventional meta-modelling based on data fitting. The term variable fidelity modelling is used in this paper to refer to simulation models of different levels of fidelity (i.e. low and high). Low fidelity models are low-complexity, less faithful representations of the actual physical problems [6]. In many cases, low fidelity models can be obtained either by simplifying the analysis model (e.g. using a coarser finite element mesh) or by simplifying the original model (e.g. using simpler boundary conditions or geometry). Low fidelity models inherit the most general features of the original model and are less expensive than the original model. Hence, they provide a good basis for the construction of high quality meta-models. Reference [7] applied such a low fidelity model to a problem of material parameter identification (formulated as a design optimization problem), and [8] demonstrated the effectiveness of correcting inexpensive analyses based on low fidelity models with results from more expensive and accurate models in the design of shell structures for buckling. Reference [9] used a coarse low fidelity finite element model to predict the stress intensity factor, and corrected it with high fidelity results based on a detailed finite element model for optimizing a blade stiffened composite panel. Several researchers have used advanced meta-modelling concepts to build high quality meta-models. Reference [10] demonstrated an aircraft wing optimization utilizing a kriging response surface of the differences between two drag prediction tools of variable levels of fidelity. Reference [11] introduced kriging-based scaling functions using a trust region approach and demonstrated that the approach converges to the solution of the high fidelity model. These works primarily focused on the application of the variable fidelity modelling concept to single-discipline optimization problems; other researchers have used variable fidelity models for solving MDO problems. For example, [12] used a variable complexity modelling technique for the multidisciplinary design of a high speed civil transport (HSCT), where simple analysis methods were used to define a sub-region of the design space in which an optimum design was likely to exist. More accurate analysis methods were then applied to construct smooth response surface models of various aerodynamic and structural weight quantities, and optimization was performed for the aircraft wing design using the response surface models.
Similarly, [13] used variable fidelity models for a wing design optimization problem. In this approach, an approximation management framework was used for solving optimization problems that involve computationally expensive models; the framework is aimed at maximizing the use of inexpensive models, with occasional recourse to expensive models for monitoring the progress of the algorithm. The approach achieved a twofold improvement for a 2D airfoil optimization problem. In addition, [14] used variable fidelity modelling within the CSSO framework for MDO problems involving a trust region management algorithm. The work focused mainly on the use of Design of Experiments (DoE) at the discipline levels for response sampling to generate the database required to build the response surface models. They found that the efficiency of the optimization algorithm depended upon the sampling strategy used; the CSSO based sampling strategy was found to be more efficient in reaching the optimum solution. Reference [15] implemented the multi-fidelity meta-modelling approach within the CO framework. The space mapping technique also uses high and low fidelity models, but is used to establish a mapping of one model's parameter space onto the other model's space such that the low fidelity model with the mapped parameters accurately reflects the behaviour of the high fidelity model. Reference [16] demonstrated the use of space mapping in structural optimization on a simple beam problem, and [17] developed two new mapping methods, corrected space mapping and proper orthogonal decomposition (POD) mapping, that are used in conjunction with trust region model management. This paper focuses upon the development of an efficient meta-model building technique for solving high fidelity multidisciplinary design optimization (MDO) problems that (i) retains compatibility with sub-system (discipline) constraints, (ii) provides near optimal meta-models in terms of "maximum information from minimum sampling", (iii) provides highly accurate meta-models for discipline constraints at the system level, and (iv) reduces the computational effort associated with discipline analyses within a multidisciplinary design environment.

Collaborative optimization based on meta-modeling
The collaborative optimization using meta-modelling approach adopted in this work separates the construction of discipline level meta-models from that of the system level meta-models. Meta-models in the disciplinary optimization are based on variable fidelity modelling, while for the system level optimization a combination of a global meta-model and a trust region strategy using MLSM is introduced.

Figure 1: Collaborative optimization using meta-modeling

Meta-model building at the discipline level
Constraints at the discipline level in CO correspond to functions describing the behaviour of a typical engineering system as related to a particular discipline. The construction of meta-models at the discipline level is based on well-established global meta-model concepts, using variable fidelity modelling for multidisciplinary design optimization. The variable fidelity modelling concept is designed to simultaneously utilize computational models of varying levels of fidelity in a CO process to facilitate the solution of MDO problems with high fidelity models using meta-models. It consists of computationally efficient simplified numerical models (low fidelity) and expensive detailed (high fidelity) models. The low fidelity models are tuned using a small number of high fidelity model runs and are then used in place of the expensive high fidelity models in the optimization process. The low-fidelity model is tuned in such a way that it approaches the same level of accuracy as the high fidelity model while remaining computationally inexpensive enough to be used repeatedly; only tuned low-fidelity models are used in the optimization process. The organization of the optimization process and the main components of variable fidelity modelling within a CO framework are shown in Figure 1.
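To make the tuning step concrete, the following is a minimal sketch of one common way such a correction can be implemented: a multiplicative scaling of the low fidelity response fitted by least squares over a handful of high fidelity runs. The correction form, the linear basis and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of low-fidelity model tuning (illustrative only; the paper
# does not prescribe this exact correction form). A multiplicative scaling
# f_tuned(x) = beta(x) * f_lo(x) is fitted from a handful of high-fidelity
# runs, with beta(x) itself a low-order polynomial in the design variables.

def fit_correction(x_hf, f_lo, f_hi):
    """Fit coefficients of beta(x) = f_hi(x) / f_lo(x) by least squares.
    x_hf: (n, d) design points where the high-fidelity model was run;
    f_lo, f_hi: (n,) low- and high-fidelity responses at those points."""
    ratio = f_hi / f_lo                                  # pointwise scaling factors
    A = np.hstack([np.ones((x_hf.shape[0], 1)), x_hf])   # linear basis [1, x1..xd]
    coeffs, *_ = np.linalg.lstsq(A, ratio, rcond=None)
    return coeffs

def tuned_low_fidelity(x, f_lo_value, coeffs):
    """Evaluate the tuned low-fidelity model at a new point x."""
    beta = coeffs[0] + x @ coeffs[1:]
    return beta * f_lo_value
```

Once fitted, the corrected low-fidelity model is the one evaluated repeatedly in the disciplinary optimization loop.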

System level optimization using meta-model
Constraints at the system level are equality constraints (the discrepancy functions) and have a complex form compared to discipline level constraints. The objective of these optimization problems is to minimize interdisciplinary discrepancies while satisfying the disciplinary design constraints. Values of the system level constraints are obtained by solving the disciplinary optimization problems and correspond to a measure of disagreement between the targets given to a discipline by the system level optimizer and the values the discipline can actually achieve.
Hence, they are non-smooth at the transition from a plateau of zero values to a region of non-zero values. This feature causes slow convergence of the CO system level optimization. These characteristics of system level optimization in CO make it difficult to directly employ conventional meta-modelling techniques. Figure 2 shows a 50-point uniform Latin Hypercube plan corresponding to the system level design variables for discipline 1. In the figure, larger dots indicate points at which the corresponding disciplinary optimizer returned non-zero values of the objective function; the remaining points correspond to zero values of the disciplinary optimization runs.

Figure 2: Uniform Latin Hypercube design for 50-point plan (discipline 1 of the test problem)

An initial study was carried out using a cubic polynomial response surface to build a meta-model for the system level optimization. The discipline 1 objective function (a constraint at the system level) was represented by a cubic polynomial meta-model (a cubic can be used as a basis for the meta-model provided negative values are removed by forcing them to zero). The cubic meta-model captures the basic behaviour of the function, but may exhibit an unrealistic non-zero domain in the region of the zero-level plateau. To correct this, an iterative process can be used to incrementally subtract a small positive constant from the function values and then remove negative values by forcing them to zero; the process stops when the non-zero domain disappears and a zero-level plateau is obtained (a sketch of this correction is given below). However, this meta-model building process is time consuming, which makes it less attractive to employ. It is therefore necessary to employ a meta-model building strategy suited to the characteristics of the discrepancy function, for more accurate and efficient modeling within a CO framework. In this respect, the moving least squares method (MLSM) is used in the construction of system level meta-models. The following sections describe the construction of meta-models for the system level collaborative optimization using MLSM.
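The iterative plateau correction just described can be sketched in a few lines; the step size, stopping test and function name below are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Sketch of the plateau-correction heuristic described above: repeatedly
# subtract a small constant from the cubic meta-model's values and clip
# negatives to zero until the spurious non-zero domain near the plateau
# disappears.

def flatten_plateau(values, plateau_mask, step=1e-3, max_iter=1000):
    """values: meta-model predictions on a grid;
    plateau_mask: grid points known (from the DoE) to lie on the zero plateau."""
    v = np.array(values, dtype=float)
    for _ in range(max_iter):
        if np.all(v[plateau_mask] == 0.0):   # spurious non-zero domain gone
            return v
        v = np.maximum(v - step, 0.0)        # subtract constant, clip negatives
    return v
```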

Formulation of the System Level Optimization
The formulation of the system level optimization is:

Minimize
Fsys(x) Eq.(1)

Subject to
gi*(x) = 0, i = 1,…,M Eq.(2)
xil ≤ xi ≤ xiu, i = 1,…,N Eq.(3)

Fsys(x) is the objective function of the system level, M is the number of disciplines, and xil and xiu are the lower and upper limits on the design variable xi, respectively. The functions gi*(x) = 0, i = 1,…,M, can be very expensive to compute and must be approximated by surrogate models gi(x), i = 1,…,M, obtained from the individual disciplines in the form Fi(x) = 0, i = 1,…,M.

Moving least squares method (MLSM)
The moving least squares method (MLSM) [2] is a relatively new method for meta-model building in design optimization. It can be thought of as a weighted least squares method in which the weight functions vary with respect to the position of the approximation; coefficients of the model are functions of location. The weight associated with a particular sampling point xi decays as the point x moves away from xi. The weight function is defined around the prediction point x and its magnitude changes or "moves" with x, so the approximation obtained by the least squares fit is termed a moving least squares approximation of the original function F(x). Since the weights wi are functions of x, the polynomial basis function coefficients also depend on x. This means that it is not possible to obtain an analytical form of the function F(x), but its evaluation is still computationally inexpensive. It is possible to control the "closeness of fit" of the approximation to the sampling data set by changing a parameter in a weight decay function wi(r), where r is the distance from the i-th sampling point. Such a parameter defines the rate of weight decay, or the radius of a sphere beyond which the weight is assumed to be zero (the sphere of influence of a sampling point xi). A second-order meta-model using the least squares regression method can be stated as
f(x) = β0 + Σi βi xi + Σi Σj βij xi xj. Eq.(4)
An approximation of order p in matrix notation can be written as
f = Aβ. Eq.(5)

The vector of responses at the n sampling points can be written as
Y = Aβ + ε. Eq.(6)

Y is an n×1 vector of output responses obtained from a Design of Experiments (DoE), A is an n×p matrix obtained from the matrix of input values of the DoE, β is a p×1 vector of regression coefficients and ε is an n×1 vector of random errors. The least-squares estimator of β is obtained by weighted least-squares fitting of the response surface f to the set of responses Y at the sampling points:
Minimize
(Y − f)^T W (Y − f). Eq.(7)

This is equivalent to solving the system of normal equations
β = (A^T W A)^(-1) A^T W Y. Eq.(8)

W is a diagonal n×n matrix of weight coefficients wi indicating the relative importance of the information at the corresponding sampling points. In conventional least-squares regression all weights are set to unity and the system of normal equations becomes
β = (A^T A)^(-1) A^T Y. Eq.(9)
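As an illustration of Eqs.(5)–(8), the following is a minimal, self-contained sketch of an MLSM predictor with a Gaussian weight decay function. The linear basis, the default θ and the function name are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mls_predict(x, X, Y, theta=10.0):
    """Moving least squares prediction at point x (illustrative sketch).

    X: (n, d) sampling points; Y: (n,) responses; theta: Gaussian decay rate.
    Uses a linear basis [1, x1, ..., xd] and Eq.(8): beta = (A^T W A)^-1 A^T W Y.
    """
    n, d = X.shape
    A = np.hstack([np.ones((n, 1)), X])          # basis evaluated at sampling points
    r2 = np.sum((X - x) ** 2, axis=1)            # squared distances to prediction point
    w = np.exp(-theta * r2)                      # Gaussian weights w_i = exp(-theta r_i^2)
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ Y)  # weighted normal equations
    a_x = np.concatenate([[1.0], x])             # basis at the prediction point
    return a_x @ beta
```

With theta = 0 all weights are unity and the prediction reduces to the conventional least-squares fit of Eq.(9).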

Methodology
The methodology adopted here uses two levels of meta-model: a global meta-model to identify the feasible and infeasible regions, and a move limit strategy for a detailed search of the feasible region. At both levels, MLSM is used in the construction of the approximation models at the system level. The construction of meta-models using MLSM is performed in the following steps (a schematic sketch of the loop is given after the list):
1. Choose a p-point plan treated as a Design of Experiments;
2. Run the optimization for discipline i over the p-point plan to compute corrected low-fidelity response values at the plan points;
3. Construct an approximation model: the corrected low-fidelity response values calculated in step 2 are used to build global approximations for each discipline;
4. Identify the feasible and infeasible regions;
5. Construct a new sub-region and add new points within the move limits;
6. Construct an approximation model for the new sub-region constructed in step 5;
7. Solve the system level approximation optimization problem using the approximation models constructed in step 3;
8. Check for convergence: stop if convergence is obtained; otherwise construct a new sub-region of the design space, add new plan points and return to step 5.
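The following runnable toy sketch illustrates the shape of this loop on a two-variable stand-in problem. The quadratic f_true, the point counts, the dense-sampling stand-in for the system level optimizer and the 50% box shrinkage schedule are all illustrative assumptions.

```python
import numpy as np

# Runnable toy sketch of the global meta-model plus move limit loop above.
# f_true stands in for an expensive disciplinary optimization; the MLS fit
# uses a linear basis with Gaussian weights (see Eqs.(7)-(8)).

def f_true(x):
    return (x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2   # toy discrepancy function

def mls_fit_predict(X, Y, x, theta=10.0):
    A = np.hstack([np.ones((len(X), 1)), X])       # linear basis at plan points
    w = np.exp(-theta * np.sum((X - x) ** 2, axis=1))
    beta = np.linalg.solve((A.T * w) @ A, (A.T * w) @ Y)
    return np.concatenate([[1.0], x]) @ beta

rng = np.random.default_rng(0)
lo, hi = np.zeros(2), np.ones(2)                   # initial design space
for it in range(5):
    X = rng.uniform(lo, hi, size=(15, 2))          # plan points in current box
    Y = np.array([f_true(x) for x in X])
    cand = rng.uniform(lo, hi, size=(500, 2))      # dense sampling: optimizer stand-in
    x_star = cand[int(np.argmin([mls_fit_predict(X, Y, c) for c in cand]))]
    half = 0.25 * (hi - lo)                        # new box: 50% of current size
    lo, hi = np.maximum(x_star - half, 0.0), np.minimum(x_star + half, 1.0)
print("approximate optimum:", x_star, "f_true =", f_true(x_star))
```

Each iteration re-centres the search box on the current surrogate optimum and rebuilds the approximation there, mirroring steps 5–8.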

Numerical Example
A test problem in this study deals with the weight minimization of a cantilever composite beam subjected to a parabolic distributed load q = q0(1 − x²/L²), where x = 0 is the clamped end [15]. Design data for this benchmark problem are outlined in Table 1.
The maximum stress and deflection of the beam can be calculated analytically:
σmax = q0L²h / (8I) and δmax = 19q0L⁴ / (360EI). Eq.(10)

Based on the rule of mixtures for a continuous fibre-reinforced composite material with a fibre volume fraction vf and a matrix volume fraction vm, the following relationships must be satisfied for the longitudinal (fibre direction) Young's modulus E1 and the composite weight density ρ:
E1 = Ef vf + Em(1 − vf), ρ = ρf vf + ρm(1 − vf),
and vf + vm = 1. Eq.(11)

| Design parameter (notation) | Unit | Value | Design variable (notation) | Unit | Baseline design | Range Min. | Range Max. |
| Parabolic distributed load (q0) | N/mm | 1 | 2nd moment of area (I) | mm⁴ | 2.25E4 | 3.3E3 | 2.0833E5 |
| Length of the beam (L) | mm | 1000 | Depth of the beam (h) | mm | 30 | 20 | 50 |
| Elastic modulus graphite fibre (Ef) | N/mm² | 2.3E5 | Fibre vol. fraction (vf) | – | 0.785 | 0.4 | 0.9069 |
| Elastic modulus epoxy resin (Em) | N/mm² | 3.45E3 | – | – | – | – | – |
| Weight density graphite fibre (ρf) | N/mm³ | 1.72E-5 | – | – | – | – | – |
| Weight density epoxy resin (ρm) | N/mm³ | 1.2E-5 | – | – | – | – | – |

Table 1: Data for cantilever composite beam test problem
Ef and Em are the elastic moduli of the graphite fibre and epoxy resin, and ρf and ρm are the weight densities of the graphite fibre and epoxy resin, respectively. The fibre volume fraction vf can vary from zero (no fibre is used) to the maximum value defined by the maximum amount of fibre that can be packed in the composite, vf,max = 0.9069. In this problem vf = 0.4 was taken as the lower limit.
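As a quick check of Eqs.(10) and (11), the short script below evaluates the composite properties and the analytical stress and deflection at the baseline design of Table 1; the numbers are from the table, the variable names are mine.

```python
import numpy as np

# Numerical check of Eqs.(10)-(11) with the baseline data of Table 1.

q0, L = 1.0, 1000.0                  # N/mm, mm
E_f, E_m = 2.3e5, 3.45e3             # N/mm^2
rho_f, rho_m = 1.72e-5, 1.2e-5       # N/mm^3
I, h, v_f = 2.25e4, 30.0, 0.785      # baseline design

E1 = E_f * v_f + E_m * (1 - v_f)     # rule of mixtures, Eq.(11)
rho = rho_f * v_f + rho_m * (1 - v_f)

sigma_max = q0 * L ** 2 * h / (8 * I)           # Eq.(10)
delta_max = 19 * q0 * L ** 4 / (360 * E1 * I)

print(f"E1 = {E1:.4g} N/mm^2, rho = {rho:.4g} N/mm^3")
print(f"sigma_max = {sigma_max:.4g} N/mm^2, delta_max = {delta_max:.4g} mm")
```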
The test problem was solved using a high-fidelity finite element (FE) beam model consisting of 100 elements in both conventional and collaborative optimization processes. Numerical results are compared with the analytical results. The formulation and results for the disciplinary collaborative optimization are given in [15]. This paper focuses on the system level collaborative optimization of the test problem:
Minimize
Fo(xs) = xs1(1440 + 624 xs3) / xs2² Eq.(12)

Subject to
g1*(xs1, xs2) = 0 and g2*(xs1, xs3) = 0 Eq.(13)
0.333 ≤ xs1 ≤ 20.833, 20.0 ≤ xs2 ≤ 50.0,
0.4 ≤ xs3 ≤ 0.9069 Eq.(14)

xs1, xs2 and xs3 are the system level design variables, and g1*(xs1, xs2) and g2*(xs1, xs3) are the system level equality compatibility constraints. The functions g1* and g2* can be expensive to evaluate and need to be replaced by inexpensive approximation models. The application of the main steps described in the Methodology section to the test problem is presented below.

Step 1. Choice of a Design of Experiments: To provide a sufficient number of points for the approximation building, it is necessary to ensure that the sphere of influence contains at least as many points as the number of coefficients in the base polynomial (linear, quadratic or cubic), plus a number of additional points that provide the necessary amount of redundant information for the least-squares model fitting. One and three additional points were studied on the test problem using polynomial basis functions (linear, quadratic and cubic), shown in Figures 5 to 7. The selection of points in the design variable space is based on a Uniform Latin Hypercube, as shown in Figure 2 for discipline 1.

Step 2. Compute corrected low-fidelity model response values: The Design of Experiments established in step 1 is used to compute the corrected low-fidelity response values at the plan points.

Step 3. Construct approximation model: The corrected low-fidelity model response values calculated in step 2 are used to build global approximations for both disciplines 1 and 2.

Step 4. Study of the "closeness of fit" parameter on the test problem: There are several parameters in MLSM that can be selected, such as the size of the domain of influence and the weight decay function. During the development of an approximation model, these parameters are controlled for best fit. Selection of a suitable expression for the weight decay function plays an important role in the construction of a high quality approximation. Here the Gaussian function is used as a suitable function to study the "closeness of fit". The Gaussian function can be expressed by wi = exp(−θri²). The case θ = 0 is equivalent to the conventional least squares regression; when θ is large it is possible to obtain a very close fit through the sampling points. In this study various values of θ ranging from 0 to 100, based on quadratic polynomial functions on the test problem, were examined (Figure 5); Figures 6 and 7 show the linear and cubic polynomials. It was found that the quadratic function (Figure 5(b)) provides an accurate approximation for the test problem. Figure 5 shows approximation functions based upon quadratic polynomials.

Figure 5(a): θ = 5 with 1 additional sampling point

Figure 5(b): θ = 10 with 1 additional sampling point

Figure 5(c): θ=35 with 1 additional sampling point

Figure 5(d): θ=100 with 1 additional sampling point

Figure 5(e): θ=35 with 3 additional sampling points

Figure 5(f): θ = 50 with 3 additional sampling points

Figure 6: Approximation function with 1 additional sampling point (linear based polynomial, θ=35)
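To see how θ controls the closeness of fit, the standalone snippet below fits a one-dimensional function with a zero plateau using a quadratic basis and several values of θ; the test function and sample counts are illustrative, not the paper's data.

```python
import numpy as np

# Small standalone experiment mirroring the closeness-of-fit study: a 1D
# MLS fit with Gaussian weights w_i = exp(-theta * r_i^2) for several theta.

def mls_1d(xs, ys, x, theta):
    A = np.stack([np.ones_like(xs), xs, xs ** 2], axis=1)   # quadratic basis
    w = np.exp(-theta * (xs - x) ** 2)
    beta = np.linalg.solve((A.T * w) @ A, (A.T * w) @ ys)
    return beta[0] + beta[1] * x + beta[2] * x ** 2

xs = np.linspace(0, 1, 12)                 # sampling points
ys = np.maximum(0.0, xs - 0.5) ** 2        # plateau of zeros, then growth
grid = np.linspace(0, 1, 101)
for theta in (0, 5, 10, 35, 100):          # theta = 0 is plain least squares
    pred = np.array([mls_1d(xs, ys, x, theta) for x in grid])
    true = np.maximum(0.0, grid - 0.5) ** 2
    print(f"theta={theta:5.0f}  max abs error = {np.abs(pred - true).max():.4f}")
```

Larger θ tightens the fit around each sampling point, at the cost of tracking noise; this is the trade-off explored in Figures 5 to 7.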

Step 5. Solve the system level approximation optimization problem using a genetic algorithm: Approximation models constructed in step 3 are used in the system level optimization run. The process sets response values below 0.0002 to zero (such values are due to approximation error, which is corrected during the optimization process). A minimal GA sketch is given after Figure 7.

Figure 7: Approximation function with 1 additional sampling point (cubic based polynomial, θ = 35)
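The following is a minimal, runnable GA sketch for the approximate system level problem of Eqs.(12)–(14). The surrogate constraints are replaced by zero-valued placeholders so the example runs standalone (in the method they would be the MLSM models), and the penalty weight, population size and operators are illustrative choices, not the authors' settings.

```python
import numpy as np

# Minimal GA sketch for the approximate system level problem, Eqs.(12)-(14).

rng = np.random.default_rng(1)
LB = np.array([0.333, 20.0, 0.4])        # bounds of Eq.(14)
UB = np.array([20.833, 50.0, 0.9069])

def objective(x):                        # Eq.(12)
    return x[0] * (1440 + 624 * x[2]) / x[1] ** 2

def g1(x): return 0.0                    # placeholder surrogate constraints;
def g2(x): return 0.0                    # MLSM models in the actual method

def penalized(x, mu=1e3):                # equality constraints via penalty
    return objective(x) + mu * (abs(g1(x)) + abs(g2(x)))

pop = rng.uniform(LB, UB, size=(60, 3))
for gen in range(100):
    fit = np.array([penalized(x) for x in pop])
    parents = pop[np.argsort(fit)[:30]]                  # truncation selection
    a = parents[rng.integers(0, 30, 60)]
    b = parents[rng.integers(0, 30, 60)]
    w = rng.uniform(size=(60, 1))
    children = w * a + (1 - w) * b                       # blend crossover
    children += rng.normal(0, 0.01, children.shape) * (UB - LB)  # mutation
    pop = np.clip(children, LB, UB)
best = pop[np.argmin([penalized(x) for x in pop])]
print("best design:", best, "objective:", objective(best))
```

With the placeholder constraints the GA simply drives the weight objective to its bound-constrained minimum; replacing g1 and g2 with the MLSM surrogates reproduces the constrained search described in the text.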

Step 6. Move limit strategy: Construct a new sub-region of the design space, add new plan points and return to step 2. This step focuses on the localised search for an optimum solution. A new sub-region of the design space is constructed, centred on the new design point obtained in step 5 and resized to 50% of the original size. The new plan points are generated in such a way as to ensure a homogeneous distribution of the points inside the new search sub-region. The approximation model is implemented in the optimization process using a genetic algorithm and checked for convergence (stop if convergence is obtained; otherwise the move limit process is continued until the optimum solution is reached).

Evaluation of predictive capabilities of the meta-models
The construction of highly accurate metamodels is an essential requirement for system level optimization of the CO framework and it is therefore important to evaluate the predictive capabilities of such models. In this respect a detailed accuracy estimation using various statistical criteria was used to evaluate the predictive capabilities of meta-models for the disciplinary optimization. These include root mean square (RMS), R-square, relative average absolute error (RAAE), relative maximum absolute error (RMAE), and maximum absolute difference error (MADE) over plan points.
In Tables 2 and 3, three indicators have been used to evaluate the accuracy of the constructed meta-models: R-square, relative average absolute error and root mean square. The larger R-square and smaller RMS and RAAE values indicate a more accurate metamodel. The tables also show that RMS and RAAE values become smaller as the size of the design space reduces (the quality of the meta-model is improved with the reduction of the size of the design space).
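For reference, the snippet below computes these measures over a set of validation points. The paper does not give explicit formulas, so the definitions follow the common forms of these metrics (maximum and average errors normalized by the standard deviation of the true responses for RMAE and RAAE) and should be read as assumptions.

```python
import numpy as np

# Sketch of the accuracy measures used in Tables 2 and 3, computed over a
# set of validation points. Definitions are the usual forms of these metrics.

def accuracy_metrics(y_true, y_pred):
    err = y_true - y_pred
    std = y_true.std()
    rms = np.sqrt(np.mean(err ** 2))                     # root mean square error
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    made = np.max(np.abs(err))                           # max absolute difference error
    rmae = made / std                                    # relative maximum absolute error
    raae = np.mean(np.abs(err)) / std                    # relative average absolute error
    return {"RMS": rms, "R-square": r2, "MADE": made, "RMAE": rmae, "RAAE": raae}
```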
| Iterations | RMS | R-square | MADE | RMAE | RAAE |
| Global approximation model | 0.5355 | 0.9021 | 2.2647 | 1.3235 | 0.1402 |
| Box 1 | 0.2895 | 0.9267 | 0.8110 | 0.7582 | 0.1590 |
| Box 2 | 0.3207 | 0.9914 | 0.7819 | 0.2255 | 0.0711 |
| Box 3 | 0.0903 | 0.9980 | 0.2397 | 0.1173 | 0.0323 |

Table 2: Evaluation of predictive capabilities of meta-models constructed during CO runs for the system level (discipline 1)

| Iterations | RMS | R-square | MADE | RMAE | RAAE |
| Global approximation model | 0.1924 | 0.7984 | 0.9015 | 2.1062 | 0.1801 |
| Box 1 | 0.0147 | 0.9300 | 0.0388 | 0.6985 | 0.1716 |
| Box 2 | 0.1591 | 0.9789 | 0.0396 | 0.3612 | 0.1215 |
| Box 3 | 0.0571 | 0.9909 | 0.1715 | 0.2858 | 0.0611 |

Table 3: Evaluation of predictive capabilities of meta-models constructed during CO runs for the system level (discipline 2)

(Table of iteration history — iteration number versus the number of plan points used in disciplines 1 and 2 — values not recoverable from the source.)
