Journal of Industrial Engineering, University of Tehran, Special Issue, 2011, PP. 13-23
Metaheuristic Based Multiple Response Process
Optimization

Mahdi Bashiri*1, Reza B. Kazemzadeh2, Anthony C. Atkinson3 and Hossein Karimi1
1Department of Industrial Engineering, Shahed University, Tehran, Iran
2Department of Industrial Engineering, Tarbiat Modares University, Tehran, Iran
3Department of Statistics, London School of Economics, London, England
(Received 6 October 2010, Accepted 29 June 2011)

Abstract
The simultaneous optimization of multiple responses is an important problem in the design of industrial processes aimed at improved quality. In this paper, we present a new metaheuristic approach, based on Simulated Annealing and Particle Swarm Optimization, to optimize all responses simultaneously. For illustration and comparison, the proposed approach is applied to two problems taken from the literature. Our results show that the proposed approach outperforms existing approaches, finding better solutions. Finally, in both cases, we present the results of a sensitivity analysis incorporating experimental design.

Keywords: Multiple response optimization, Simulated annealing, Particle swarm
optimization, Desirability function

Introduction
Response Surface Methodology (RSM) has extensive applications in industrial settings. It is a collection of techniques for finding the relationship between a response (y) and input variables (x1, x2, …, xn). The purpose of the experimenter is often to find the optimal setting of the input variables to maximize (or minimize) the response. In RSM, the input variables are transformed into coded dimensionless variables.
A standard experimental design in RSM is the Central Composite Design (CCD), used to find the relationship between response and input variables. The various techniques used in RSM are described by Box and Draper (1987) [1], Khuri and Cornell (1996) [2] and Myers and Montgomery (2002) [3].
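A CCD for k coded factors combines three kinds of runs: the 2^k factorial (cube) points at ±1, axial (star) points at ±α on each axis, and replicated center points. The following sketch enumerates such a design; the function name and default values are illustrative, not taken from the paper, and in practice the design would come from DOE software.

```python
import itertools

def central_composite_design(k, alpha=1.414, n_center=1):
    """Enumerate CCD runs in coded units: cube points at +/-1,
    axial points at +/-alpha on each axis, and center points."""
    cube = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return cube + axial + center

design = central_composite_design(2, alpha=1.414, n_center=3)
# For k = 2: 4 cube + 4 axial + 3 center = 11 runs.
```

With α = √2 ≈ 1.414, the design for two factors is rotatable, a common choice in RSM.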
In some applications there is more than one process or product response. The selection of optimal settings of the input variables with simultaneous consideration of multiple responses is called a Multi Response Surface (MRS) problem. There are typically three stages in the solution of such problems: experimental design and data collection, model building and optimization.
After the first and second stages we write the model as follows:

y_j = f_j(x) + ε_j,  j = 1, 2, …, m,  (1)

where y_j is the jth of the m responses, f_j(x) is a function relating the jth response to the input variables and ε_j is random error.
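In the model-building stage, each f_j is commonly taken to be a second-order polynomial fitted by ordinary least squares. A minimal sketch on synthetic data (the coefficient values and two-factor setting are illustrative, not from the paper):

```python
import numpy as np

def design_matrix(X):
    """Second-order model terms for two factors:
    intercept, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# Synthetic experiment: true model y = 5 + 2*x1 - 3*x2 + 1.5*x1^2 + x1*x2 + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = (5 + 2 * X[:, 0] - 3 * X[:, 1] + 1.5 * X[:, 0] ** 2
     + X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 20))

# Ordinary least squares estimate of the coefficient vector.
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```

The estimated coefficients in `beta` define the fitted surface f_j(x) used in the optimization stage.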
This paper presents an approach for simultaneous optimization of all the responses in MRS problems by the use of the two metaheuristics: Simulated Annealing and Particle Swarm Optimization. The paper is organized as follows. The next section reviews current approaches to MRS problems. The third section contains our approach and the new algorithm. In the succeeding section we present two examples solved using our approach and compare our solutions with those obtained from other approaches. Conclusions are in the last section.

Main approaches to MRS problems
* Corresponding author: Tel: +98-21-55277561, Fax: +98-21-55277400, Email: bashiri@shahed.ac.ir

Given a model of each response, a basic and simple approach to MRS problems is the use of response contour plots, determining the optimal solution by visual inspection. However, unless both the number of responses and the number of input variables are small, this method is inefficient and should not be used.
Some approaches to MRS problems aggregate all responses in a single objective form which is then optimized. Examples are the priority based approach [4], desirability functions [5] and the loss function [6]. In the priority based approach, the decision maker selects the most important of the responses as an objective function and uses the desired values of the other responses as constraints; there is no simultaneous optimization of all responses.
In the desirability function approach, all responses are transformed to a scale-free value between 0 and 1 using the desirability function d j for the jth response. The computed desirability for each response is combined to construct an overall desirability, which is then optimized.
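The transformation and aggregation described above can be sketched as follows; the one-sided (larger-is-better) desirability and equal weights are illustrative choices, not values from the paper.

```python
def desirability_larger_is_better(y, low, high, r=1.0):
    """One-sided desirability: 0 at or below `low`, 1 at or above `high`,
    a power curve (shape parameter r) in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** r

def overall_desirability(ds, weights=None):
    """Weighted geometric mean of the individual desirabilities d_j."""
    if weights is None:
        weights = [1.0] * len(ds)
    total = sum(weights)
    prod = 1.0
    for d, w in zip(ds, weights):
        prod *= d ** (w / total)
    return prod

ds = [desirability_larger_is_better(y, 0.0, 10.0) for y in (5.0, 8.0)]
D = overall_desirability(ds)   # Derringer-style weighted geometric mean
D_minimax = min(ds)            # Kim-Lin style: maximize the smallest d_j
```

The geometric mean forces the overall desirability to zero whenever any single response is unacceptable, which is the usual argument for preferring it to an arithmetic mean.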
Derringer (1994) proposed a weighted geometric mean for the overall desirability function [7]. Kim and Lin (1998, 2000 and 2002) suggested maximizing the lowest d_j as the overall desirability of the responses [8, 9, 10]. The loss function approach attempts to minimize the costs associated with the distances of the responses from their targets, namely:
Ly(x)  y(x)  T  Cy(x)  T , (2)

Here y(x) is the vector of responses, x is the vector of input variables, T is the target vector of the responses and C is the cost matrix containing the relative importance of each response. See Vining (1998) and Ko and Kim (2005) [11, 12].
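Equation (2) is a quadratic form in the deviations from target; a minimal sketch (the target vector and cost matrix values here are illustrative, not from the cited studies):

```python
import numpy as np

def quadratic_loss(y, target, C):
    """Loss L = (y - T)' C (y - T), where C weights the relative
    importance of each response's deviation (off-diagonal entries
    would price joint deviations)."""
    d = np.asarray(y, dtype=float) - np.asarray(target, dtype=float)
    return float(d @ np.asarray(C, dtype=float) @ d)

# Two responses, targets (10, 20), first response twice as costly to miss.
loss = quadratic_loss([9.0, 21.0], [10.0, 20.0], [[2.0, 0.0], [0.0, 1.0]])
# (-1)^2 * 2 + (1)^2 * 1 = 3
```

Minimizing this loss over the input variables x (with y replaced by the fitted models) gives the loss-function solution to the MRS problem.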
One of the main objectives in MRS problems is robustness of the product or process: reaching the specified mean with minimum variance. Chiao and Hamada (2001) propose a quality measure, the proportion of conformance, which is the probability that the m responses simultaneously meet their respective specifications (S) [13]; it was proposed to incorporate robustness into these problems. The objective function can be written
max p Y  S  , (3) where Y is the vector of responses and S is the specification region depending on values l j ,u j which are the lower and upper limits of
the ith response
m
S  l j , u j  . (4) j 1
For the optimization stage, Del Castillo and Montgomery (1993) solved the problem by using the generalized reduced gradient (GRG) algorithm, which is available in software packages such as Microsoft Excel [14]. Del Castillo et al. (1996) used a gradient-based optimization approach by modifying the desirability function to be everywhere differentiable [15]. In a later study, Tong and Xu (2002) used a goal programming approach to find the optimal solution [16].
When the number of responses (or objectives) and constraints increases, the probability of finding a local rather than global optimum increases and, in these cases, metaheuristic approaches can be helpful for finding the global optimum [17]. Ortiz et al. (2004) developed a multiple-response solution technique using a GA in conjunction with an unconstrained desirability function [18]. Some other recent works on multi-response optimization problems are as follows:
Tong et al. (1997) developed a multi-response signal-to-noise (MRSN) ratio, which integrates the quality loss for all responses to solve the multi-response problem [19]. Tong et al. (2005) also consider the correlation of responses and use PCA and TOPSIS methods to find the best variable setting [20]. Hsieh (2006) used neural networks to estimate the relationship between control variables and responses [21]. Tong et al. (2007) use VIKOR methods to convert Taguchi criteria to single responses and then derive a regression model and the related optimal setting [22]. Kazemzadeh et al. (2008) proposed a general framework for multi-response optimization problems based on goal programming and compared some existing methods [23]. They attempted to aggregate all characteristics into one approach, including the priorities of certain types of decision makers. Bashiri and Hejazi (2009) used Multiple Attribute Decision Making (MADM) methods such as VIKOR, PROMETHEE II, ELECTRE III and TOPSIS to convert multiple responses to a single response in order to analyze data from robust experimental designs [24].

Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a prominent swarm intelligence technique. It was proposed by Eberhart and Kennedy (1995) as an optimization method [25]. PSO is a population-based search algorithm founded on the simulation of the social behavior of flocks of birds, swarms of bees or schools of fish. Each individual within the swarm is represented by a vector in the multidimensional search space. This vector has one assigned vector that determines the next movement of the particle, called the velocity vector.
The PSO algorithm determines how to update the velocity of a particle. Each particle updates its velocity based on its current velocity, the best position (p_best) it has explored so far and the global best position (g_best) explored by the swarm [26, 27, 28]. The movement of each particle, shown in Figure 1, is governed by equations (5) and (6). Equation (5) shows that the velocity vector is updated from the global best position, the personal best position and the current position of each particle. Equation (6) shows that each particle then moves by its own velocity.
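A minimal sketch of one particle update per equations (5) and (6); the inertia weight w and acceleration coefficients c1, c2 are common textbook defaults, not values from this paper.

```python
import random

def pso_update(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO step: the new velocity blends inertia (w * v),
    attraction toward the particle's personal best (c1 term) and
    attraction toward the swarm's global best (c2 term); the particle
    then moves by its new velocity."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)
             + c2 * random.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, p_best, g_best)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Note that a particle sitting at both its personal and the global best with zero velocity does not move, so the best solution found so far is retained.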

[Figure 1. Movement of each particle: the new position is obtained from the current position, the velocity vector, the personal best position and the global best position.]
