Journal of Industrial Engineering, University of Tehran, Special Issue, 2011, pp. 113-125
Solving Quadratic Assignment Problem (QAP) Using
Invasive Weed Optimization Algorithm
Maryam Omidbakhsh1 and Mahdi Seifbarghy *1
1 Technical and Engineering Department, Alzahra University, Tehran, Iran
(Received 31 October 2010, Accepted 24 July 2011)
A new powerful optimization algorithm inspired by colonizing weeds is utilized to solve the well-known quadratic assignment problem (QAP), which has applications in a large number of practical areas such as plant layout and machinery layout. A set of reference numerical problems from QAPLIB is taken in order to evaluate the efficiency of the algorithm compared with previous algorithms applied to this problem. The results indicate that the algorithm outperforms the competing ones on a sizable number of the problems as the problems' dimensions increase.
Keywords: Invasive weed optimization, Meta-heuristic algorithms, Quadratic assignment problem, Weed colonization.
The quadratic assignment problem (QAP) is known as a special type of assignment problem, which is too complex to be solved using the common exact solution methods (e.g.  and ). The main difference between the QAP and the classic assignment problem is that there is an interaction between each pair of facilities, leading to a non-linear objective function. The problem is to assign n facilities to n locations in such a way as to minimize the transportation costs associated with the flow of materials between the facilities and the distances between the corresponding locations ().
* Corresponding author: Tel & Fax: +98-21-77207041 Email: [email protected]
The QAP is known to be NP-hard, and its solution complexity is widely accepted among researchers (e.g.  and ). There is no known polynomial-time algorithm to solve QAP instances with more than a relatively small number of inputs. The exact solution of even small problems (20 < n < 30) is considered computationally non-trivial, and obtaining optimal solutions for modest-size QAP instances (i.e. 30 < n < 40) requires a massively parallel computing environment . There is continuous interest among researchers in developing different formulations and solution methods for the QAP. Initially, Koopmans and Beckmann  proposed the QAP as a mathematical model of economic activities. Steinberg  used the QAP to minimize the number of connections between components in backboard wiring. Heffley  applied the QAP to economics problems. Dickey and Hopkins  applied the QAP to the assignment of buildings on a university campus. Francis and White  developed a decision framework for locating a new facility (e.g. a police post, supermarket or school) to serve a given set of clients. Geoffrion and Graves  focused on scheduling problems. Pollatschek et al.  invoked the QAP to define the best design for typewriter keyboards and control panels. Elshafei  applied it to a hospital planning problem. Krarup and Pruzan  applied it to archeology. Hubert  applied it to statistical analysis. Bos  applied it to a problem related to forest parks. Brusco and Stahl  utilized it in numerical analysis. Parallel computing and networking provide other location analysis problems that can be formulated as QAPs (e.g. [17,18]). Benjaafar  introduced a formulation of the facility layout problem to minimize work-in-process. Ciriani et al.  showed that assigning rooms to persons with undesirable neighborhood constraints can be well formulated as a QAP. A generalization of the QAP was also used for a service allocation problem by Cordeau et al. , with the objective of minimizing the container re-handling operations at a shipyard. Rabak and Sichman  and Miranda et al.  studied the placement of electronic components as a QAP. For a comprehensive survey of the addressed research and other applications of the QAP and its special cases, the reader can refer to , ,  and .
Generally, the methods used for combinatorial optimization problems can be either exact or heuristic. Exact algorithms reach a global optimum within an acceptable time; some instances are dynamic programming, branch-and-bound, cutting planes, or combinations such as branch-and-cut. Heuristic algorithms do not guarantee the optimality of the best solution obtained. Meta-heuristics are heuristic algorithms with a relatively general structure. The difficulties of using mathematical optimization methods for solving NP-hard combinatorial problems with a large number of variables and nonlinear objective functions have contributed to the development of meta-heuristic algorithms. A number of meta-heuristic algorithms are based on natural-process metaphors, such as simulated annealing (SA), genetic algorithms (GA), scatter search (SS) and ant colony optimization (ACO), while others are based on theoretical and experimental considerations, such as tabu search (TS), the greedy randomized adaptive search procedure (GRASP), variable neighborhood search (VNS) and iterated local search (ILS). Several meta-heuristic algorithms have been applied to solve the QAP (e.g.  and ). We now mention a number of well-known studies in this regard.
Hahn et al.  presented a new branch-and-bound algorithm for solving the QAP. The algorithm is based on a dual procedure similar to the Hungarian method for solving the linear assignment problem. It solves both small and large QAP instances but does not guarantee an optimal solution. Angel and Zissimopoulos  noted that local search was widely used to solve NP-complete combinatorial optimization problems approximately, while little was known about the quality of the obtained local minima for a given neighborhood. They proposed an upper bound on the quality of solutions obtained by the deepest local search for the QAP.
Ahuja et al.  suggested a genetic algorithm for the QAP and reported its computational behavior. Since the genetic algorithm incorporated many greedy principles in its design, they named it the “greedy genetic algorithm”. They tested the proposed algorithm on all the benchmark instances of QAPLIB (), a well-known library of QAP instances. Out of the 132 total instances in QAPLIB of varied sizes, the greedy genetic algorithm obtained the best known solution for 103 instances, and for the remaining instances (except one) found solutions within 1% of the best known solutions. Talbi et al.  proposed a parallel model for ant colonies to solve the QAP. The cooperation between simulated ants was provided by a pheromone matrix that played the role of a global memory. The exploration of the search space was guided by the evolution of pheromone levels, while exploitation was boosted by a tabu local search heuristic.
Hasegawa et al.  proposed a novel method to solve the QAP. Initially, they modified the conventional tabu search into a chaotic version, and then compared the performance of the novel chaotic search with the conventional tabu search and an exponential tabu search. They showed that the exponential tabu search performs better than the conventional tabu search, and further that the novel method with a chaotic neural network exhibits the best performance. Misevicius  proposed a genetic algorithm hybridized with a so-called ruin-and-recreate procedure for solving the QAP. The results indicated that the proposed algorithm had high performance on the QAP.
In Chawki et al. , the network structure of basic solutions to the QAP was revisited. The concept of a relative local star minimum was introduced.
Results characterizing a relative local star minimum were obtained, and an extreme point algorithm was proposed. Misevicius  proposed an improved hybrid genetic algorithm (IHGA). It used a robust local improvement procedure as well as an effective restart mechanism that was based on so-called ‘shift mutations’. IHGA was applied to the QAP. The results obtained from the experiments on different QAP instances indicated that the proposed algorithm appeared to be superior to other approaches that were among the best algorithms for the QAP.
Erdogan and Tansel  declared that exact solution attempts proposed for instances of size larger than 15 had been generally unsuccessful. They presented two new integer formulations based on the flow-based linearization technique, which required fewer variables and yielded stronger lower bounds than existing formulations. They also strengthened the formulations with valid inequalities and reported computational experience with a branch-and-cut algorithm.
Drezner  introduced the compounded genetic algorithm, in which a quick genetic algorithm is run several times as Phase 1 and the best solutions of each run are compiled to create a starting population for Phase 2. This new approach was implemented on the QAP and produced very good results. In Stützle , iterated local search (ILS), a simple and powerful stochastic local search method, was applied to the QAP. Enhanced ILS extensions were also proposed in that research. An experimental evaluation indicated the excellent performance of the enhanced ILS compared to similar state-of-the-art algorithms.
Loiola et al.  presented some of the most important QAP formulations and classified them according to their mathematical sources; then, they discussed the theoretical resources used to define lower bounds for exact and heuristic algorithms for the QAP. They also gave a detailed discussion of the progress made in both exact and heuristic solution methods. Drezner  solved the QAP using various variants of a hybrid genetic algorithm and tested several modifications of the hybrid genetic algorithm. The results indicated the high efficiency of the proposed algorithms.
James et al.  introduced a cooperative parallel tabu search algorithm (CPTS) for the QAP. CPTS is a cooperative parallel algorithm in which the processors exchange information throughout the run of the algorithm, in contrast to independent concurrent search strategies that aggregate data only at the end of execution. A set of 41 test problems obtained from QAPLIB was used to analyze the quality of the CPTS algorithm. Xia  proposed a Lagrangian smoothing algorithm for the QAP in which the continuation subproblems were solved by the truncated Frank–Wolfe algorithm. Limited numerical results were also provided to measure the efficiency of the proposed algorithm.
In this paper, the invasive weed optimization (IWO) algorithm, which was recently proposed by Mehrabian and Lucas (2006) , is applied to the QAP. As will be indicated in Section 4, this algorithm shows higher performance compared to other similar meta-heuristics, especially as the problem size increases.
The rest of the paper is structured as follows: In Section 2, the formulation of the problem is given. In Section 3, we describe the solution algorithm and customize it to the QAP. Section 4 presents the results of utilizing the proposed algorithm on a set of numerical problems. Section 5 gives conclusions and recommends some further research.
The QAP is one of the most difficult combinatorial optimization problems, in which a given set of facilities must be assigned to a set of locations such that each facility is assigned to exactly one location and, conversely, each location to exactly one facility. The distances between locations, the demand flows between the facilities and the allocation cost of each facility to each location are known. There are different formulations of the QAP in the literature (e.g.  and ). The problem's parameters are as follows:
F = [f_{ij}]: the flow matrix between facilities i and j;
D = [d_{kl}]: the distance matrix between locations k and l;
B = [b_{ik}]: the allocation cost of facility i to location k.
The decision variable is x_{ik}, a binary variable whose value is 1 if facility i is assigned to location k, and 0 otherwise.
In general, the objective function can be stated as follows:

\min \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \sum_{l=1}^{n} f_{ij} d_{kl} x_{ik} x_{jl} + \sum_{i=1}^{n} \sum_{k=1}^{n} b_{ik} x_{ik}  (1)

Since the linear term is easy to solve, we do not consider it; thus, the problem can be stated as follows:

\min \sum_{i=1}^{n} \sum_{j=1}^{n} \sum_{k=1}^{n} \sum_{l=1}^{n} f_{ij} d_{kl} x_{ik} x_{jl}  (2)

subject to

\sum_{k=1}^{n} x_{ik} = 1, \quad i = 1, \ldots, n  (3)

\sum_{i=1}^{n} x_{ik} = 1, \quad k = 1, \ldots, n  (4)

x_{ik} \in \{0, 1\}, \quad i, k = 1, \ldots, n  (5)
The objective function in Eq. (2) represents the total weighted flow between facilities i and j when they are located at locations k and l, respectively. Constraints (3) and (4) guarantee that each facility is located in exactly one location and that each location accommodates exactly one facility. Constraint (5) requires all decision variables to be binary.
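To make the formulation concrete, the quadratic objective of Eq. (2) can be evaluated directly for a given assignment. The sketch below uses a tiny 3-facility instance of our own (not from the paper) and represents a solution as a permutation, where perm[i] is the location of facility i; the allocation cost matrix B is omitted, as in Eq. (2).

```python
def qap_cost(perm, F, D):
    """Evaluate Eq. (2): total flow-times-distance cost of an assignment.

    perm[i] is the location assigned to facility i; F and D are the
    flow and distance matrices.
    """
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

# A tiny illustrative instance (data of our own, not from the paper).
F = [[0, 5, 2],
     [5, 0, 3],
     [2, 3, 0]]
D = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
print(qap_cost([0, 1, 2], F, D))  # identity assignment -> 38
print(qap_cost([1, 0, 2], F, D))  # swap facilities 0 and 1 -> 42
```

Enumerating all n! permutations in this way is only feasible for toy instances, which is exactly why heuristics such as IWO are needed.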
Invasive weed optimization algorithm
The IWO algorithm is motivated by a common phenomenon in agriculture: the colonization of invasive weeds. According to the common definition, a weed is any plant growing where it is not wanted. Any tree, vine, shrub, or herb may qualify as a weed, depending on the situation; generally, however, the term is reserved for those plants whose vigorous, invasive habits of growth pose a serious threat to desirable, cultivated plants. Baker and Stebbins  mention that a plant is called a weed if, in any specified geographical area, its population grows entirely or predominantly in situations markedly disturbed by man (without, of course, being a deliberately cultivated plant). Weeds have shown a very robust and adaptive nature, which makes them undesirable plants in agriculture. The algorithm is simple but has been shown to be effective in converging to the optimal solution by employing basic properties of a weed colony, e.g. seeding, growth and competition .
The feasibility, efficiency and effectiveness of IWO were tested in detail on a set of benchmark multidimensional functions, including ‘Sphere’, ‘Griewank’ and ‘Rastrigin’, by Mehrabian and Lucas . The reported results were compared with those of other recent evolutionary algorithms: genetic algorithms, memetic algorithms, particle swarm optimization and shuffled frog leaping. The results were also compared with different versions of simulated annealing, namely simplex simulated annealing and direct search simulated annealing. IWO showed reasonable performance on all the test functions. Mallahzadeh et al.  utilized the IWO algorithm for designing an E-shaped MIMO antenna. Karimkashi and Kishk  studied the features of IWO in electromagnetics.
To simulate the colonizing behavior of weeds, some basic properties of the process are as follows :
A finite number of seeds is being spread out over the search area.
Every seed grows to a flowering plant and produces seeds depending on its fitness.
The produced seeds are being dispersed randomly over the search area and grow to new plants.
This process continues until the maximum number of plants is reached; from then on, only the fitter plants can survive and produce seeds, while the others are eliminated. The process continues until a maximum number of iterations is reached. Hopefully, the plant with the best fitness is closest to the optimal solution.
The process is addressed in detail as follows :
Initialize a population
A population of initial solutions is dispersed over the d-dimensional problem space at random positions. The initial search area is denoted by X_ini and is bounded by a lower and an upper bound. The lower bound is usually a negative real number, while the upper bound is usually a positive real number.
A member of the population of plants is allowed to produce seeds depending on its own fitness and the colony's lowest and highest fitness: the number of seeds each plant produces increases linearly from a minimum number, S_min, to a maximum number, S_max. In other words, a plant produces seeds based on its own fitness and the colony's lowest and highest fitness, so that the increase is linear. This reproduction technique gives infeasible individuals a chance to survive and reproduce, similar to the mechanism that occurs in nature. Moreover, quite often the system can reach the optimal point more easily if it is possible to “cross” an infeasible region (especially in a non-convex feasible search space) .
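As a minimal sketch of the linear seed-count rule described above (variable names are ours, and fitness is treated as a cost to be minimized):

```python
def number_of_seeds(cost, best_cost, worst_cost, s_min, s_max):
    """Map a plant's cost linearly onto [s_min, s_max] seeds (minimization).

    The fittest plant in the colony gets s_max seeds, the least fit s_min.
    """
    if worst_cost == best_cost:          # degenerate colony: all equally fit
        return s_max
    ratio = (worst_cost - cost) / (worst_cost - best_cost)
    return int(s_min + ratio * (s_max - s_min))

print(number_of_seeds(10, 10, 50, 0, 5))  # fittest plant -> 5
print(number_of_seeds(50, 10, 50, 0, 5))  # least fit plant -> 0
```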
The generated seeds are distributed randomly over the d-dimensional search space using normally distributed random numbers with zero mean but varying variance. This means that the seeds are distributed randomly so that they stay near the parent plant. However, the standard deviation of the random function is reduced from an initial value, σ_initial, to a final value, σ_final, at every step (generation).
In simulations, a nonlinear alteration has shown satisfactory performance, given by Eq. (6):

\sigma_{iter} = \frac{(iter_{max} - iter)^{n}}{(iter_{max})^{n}} (\sigma_{initial} - \sigma_{final}) + \sigma_{final}  (6)

where iter_max is the maximum number of iterations, \sigma_{iter} is the standard deviation at the present time step and n is the nonlinear modulation index.
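Eq. (6) translates directly into code; the sketch below uses our own symbol names:

```python
def dispersal_sigma(it, iter_max, sigma_init, sigma_final, n):
    """Standard deviation of seed dispersal at iteration `it`, per Eq. (6)."""
    return ((iter_max - it) ** n / iter_max ** n) * (sigma_init - sigma_final) + sigma_final

# With n > 1 the decay is slow early on (wide exploration) and fast near
# the end (fine-grained exploitation around good plants).
print(dispersal_sigma(0, 100, 3.0, 0.01, 3))    # ~3.0 at the first iteration
print(dispersal_sigma(100, 100, 3.0, 0.01, 3))  # 0.01 at the last iteration
```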
If a plant leaves no offspring, it goes extinct; otherwise, it could take over the world. Thus, some kind of competition between plants is needed to limit the maximum number of plants in a colony. After some iterations, the number of plants in a colony reaches its maximum through fast reproduction; however, it is expected that the fitter plants reproduce more than the undesirable ones. Upon reaching the maximum number of plants in the colony, p_max, a mechanism for eliminating the plants with poor fitness in the generation activates. When all seeds have found their positions in the search area, they are ranked together with their parents (as a colony of weeds). Next, the weeds with poorer fitness are eliminated so that the population does not exceed the maximum allowed in a colony. This mechanism gives plants with lower fitness a chance to reproduce, and if their offspring have good fitness in the colony, they can survive.
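The competitive-exclusion step (ranking parents and offspring together and keeping only the p_max fittest) might be sketched as follows:

```python
def competitive_exclusion(parents, offspring, fitness, p_max):
    """Merge parents and offspring, rank them, keep the p_max best (minimization)."""
    colony = parents + offspring
    colony.sort(key=fitness)   # rank the whole colony by fitness (cost)
    return colony[:p_max]      # eliminate the plants with poorer fitness

# Toy example: one-dimensional plants, fitness = distance from zero.
parents = [4.0, -2.0]
offspring = [0.5, 3.0, -0.1]
print(competitive_exclusion(parents, offspring, abs, 3))  # [-0.1, 0.5, -2.0]
```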
In this algorithm, the initial population size N_0, the maximum number of iterations iter_max, the maximum number of plants in the colony p_max and the nonlinear modulation index n are the key IWO parameters, whose effective tuning is very important for the algorithm's convergence. The general structure of the algorithm is represented in Fig. 1.
[Figure 1 is a flowchart: generate the initial population of weeds; compute the initial fitness; compute the number of new seeds for each plant with respect to its fitness; dispread the new seeds; the seeds grow into new plants; compute the new seeds' fitness; eliminate the plants with poor fitness; repeat until the stopping criterion is met.]
Figure 1: General structure of the IWO algorithm
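As an illustration of how the steps in Fig. 1 could be combined for the QAP, the sketch below encodes each plant as a permutation and realizes normal seed dispersal as a number of random swaps proportional to the decaying standard deviation of Eq. (6). This discrete adaptation, and all parameter values, are our own illustrative assumptions, not necessarily the customization used in the paper.

```python
import random

def qap_cost(perm, F, D):
    """Objective of Eq. (2): sum of flow(i, j) * distance(perm[i], perm[j])."""
    n = len(perm)
    return sum(F[i][j] * D[perm[i]][perm[j]] for i in range(n) for j in range(n))

def iwo_qap(F, D, n0=5, p_max=15, s_min=0, s_max=5,
            sigma_init=3.0, sigma_final=0.5, n_mod=3, iter_max=50):
    """Illustrative IWO sketch for the QAP (minimization)."""
    n = len(F)
    # Step 1: generate the initial population of random assignments.
    plants = []
    for _ in range(n0):
        p = list(range(n))
        random.shuffle(p)
        plants.append(p)
    for it in range(iter_max):
        # Eq. (6): nonlinear decay of the dispersal "radius".
        sigma = ((iter_max - it) ** n_mod / iter_max ** n_mod) * (sigma_init - sigma_final) + sigma_final
        costs = [qap_cost(p, F, D) for p in plants]
        best, worst = min(costs), max(costs)
        offspring = []
        for p, c in zip(plants, costs):
            # Linear seed count: best plant gets s_max seeds, worst gets s_min.
            ratio = (worst - c) / (worst - best) if worst > best else 1.0
            for _ in range(int(s_min + ratio * (s_max - s_min))):
                child = p[:]
                # Dispersal: the larger sigma is, the more random swaps.
                for _ in range(max(1, round(sigma))):
                    a, b = random.sample(range(n), 2)
                    child[a], child[b] = child[b], child[a]
                offspring.append(child)
        # Competitive exclusion: keep only the p_max fittest plants.
        plants = sorted(plants + offspring, key=lambda q: qap_cost(q, F, D))[:p_max]
    return plants[0]
```

On tiny instances this reliably recovers good assignments; for the QAPLIB instances used in Section 4, careful tuning of N_0, p_max, S_min, S_max and n would be required, as the paper emphasizes.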