
On the performance improvement of elephant herding optimization algorithm


Academic year: 2022

Jaa "On the performance improvement of elephant herding optimization algorithm"

Copied!
19
0
0

Kokoteksti

(1)

UEF//eRepository

DSpace https://erepo.uef.fi

Self-archived publications, Faculty of Science and Forestry

2019

On the performance improvement of elephant herding optimization algorithm

Elhosseini, Mostafa A

Elsevier BV

Scientific journal articles

© Elsevier B.V.

CC BY-NC-ND https://creativecommons.org/licenses/by-nc-nd/4.0/

http://dx.doi.org/10.1016/j.knosys.2018.12.012

https://erepo.uef.fi/handle/123456789/7526

Downloaded from University of Eastern Finland's eRepository


Accepted Manuscript

On the performance improvement of elephant herding optimization algorithm

Yasser I. Rashwan, Mostafa A. Elhosseini, Ragab A. El Sehiemy, X.Z. Gao

PII: S0950-7051(18)30599-9
DOI: https://doi.org/10.1016/j.knosys.2018.12.012
Reference: KNOSYS 4610
To appear in: Knowledge-Based Systems
Received date: 31 August 2018
Revised date: 4 November 2018
Accepted date: 8 December 2018

Please cite this article as: Y.I. Rashwan, M.A. Elhosseini, R.A.E. Sehiemy et al., On the performance improvement of elephant herding optimization algorithm, Knowledge-Based Systems (2019), https://doi.org/10.1016/j.knosys.2018.12.012

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form.

Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


On the performance improvement of elephant herding optimization algorithm

Yasser I. Rashwan
Computers Engineering & Control Systems Department, Faculty of Engineering, Mansoura University, P.O. Box: 35516, Egypt.
Email: yasser_e2005@hotmail.com

Mostafa A. Elhosseini
Assoc. Prof., Computers Engineering & Control Systems Department, Faculty of Engineering, Mansoura University, P.O. Box: 35516, Egypt.
Email: melhosseini@gmail.com

Ragab A. El Sehiemy
Assoc. Prof., Electrical Engineering Department, Faculty of Engineering, Kafrelsheikh University, Kafrelsheikh, P.O. Box: 33511, Egypt.
Email: elsehiemy@eng.kfs.edu.eg

X. Z. Gao
Professor of Data Science, University of Eastern Finland, Kuopio, Finland.
Email: xiao.z.gao@gmail.com

Abstract— Thanks to its small number of control parameters and easy implementation, the Elephant Herding Optimization (EHO) algorithm has been gaining research interest during the past decade.

In this paper, to understand the impact of the control parameters, a parametric study of the EHO is carried out using a standard test bench, engineering problems, and real-world problems. On top of that, the main aim of this paper is to propose different approaches to enhance the performance of the original EHO, i.e., the cultural-based, alpha-tuning, and biased initialization EHO. A comparative study has been made between these EHO variants and the state-of-the-art swarm optimization methods.

Case studies ranging from the recent test bench problems of CEC 2016 to the popular engineering problems of gear train, welded beam, and three-bar truss design, the continuous stirred tank reactor, and the fed-batch fermentor are used to validate and test the performance of the proposed EHOs against the existing techniques. Numerical results show that the three new EHOs perform better than, or competitively with, the existing population-based optimization algorithms.

Keywords— Elephant Herding Optimization; Cultural-based; Alpha-tuning; Biased initialization; Test benchmark functions; Three-bar truss; Welded beam; Gear train; Continuous stirred tank reactor; Fed-batch reactor.

I. INTRODUCTION

Remarkable progress in optimization techniques has led to a rapid increase in their usage in industrial planning, scheduling, decision-making, and various complex problems [1, 2, 3, 4]. Classical optimization methods are used to find the optimum solution for continuous and differentiable functions.

Unfortunately, they are not suitable for real-life problems whose objective function is not continuous or not differentiable. Moreover, they usually require a large amount of computation time because of their complexity.

Soft computing [5, 6, 7] is the fusion of methodologies that are implemented to design and enable solutions to demanding real-world problems, which cannot be modeled, or are too difficult to model, mathematically. Soft computing-based optimization has many advantages over the classical optimization algorithms, such as tolerance of imprecision, better rapport with reality, robustness, and the capability of dealing with highly complex problems. These methods cannot always guarantee an optimal solution, but they yield a satisfactory solution within a reasonable time span.

Soft computing methods include a number of evolutionary algorithms inspired by evolutionary biology, e.g., the genetic algorithm (GA) [8], particle swarm optimization (PSO) [9], harmony search (HS) [11], the artificial bee colony (ABC) [10], simulated annealing (SA) [12], etc. The GA [8] is a search algorithm that finds an approximate solution for a given problem by evolving an initial population towards a better population. This evolution is carried out through a number of operations: chromosomes are first selected from the population, followed by the recombination and mutation operators. The selection operation picks the chromosomes with suitable fitness, and the crossover operation combines two parent chromosomes to create a new child chromosome.

Kennedy and Eberhart originally proposed the PSO [9] in 1995; it is a swarm algorithm based on the behavior of swarms of insects, schools of fish, or flocks of birds. In the PSO, possible solutions are represented by particles, and each particle has a position and a velocity. Differential evolution (DE) [13], developed by Storn and Price, is a method based on stochastic search. Similar to the GA, a population of D-dimensional solutions is randomly generated in the DE.

After initialization, in the mutation step the differential operator adds the difference of two random individuals to a third randomly selected individual of the present population to produce a mutant vector; after that, a crossover operation makes a new trial vector [13].
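To make the trial-vector construction concrete, below is a minimal Python sketch of the classic DE/rand/1/bin scheme just described; the function name de_trial and the default F and CR values are illustrative assumptions, not taken from [13].

```python
import numpy as np

def de_trial(pop, i, F=0.5, CR=0.9, seed=None):
    """Build one DE trial vector: mutate three distinct random individuals,
    then apply binomial crossover with the target vector pop[i]."""
    rng = np.random.default_rng(seed)
    n, d = pop.shape
    candidates = [k for k in range(n) if k != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    mutant = pop[r3] + F * (pop[r1] - pop[r2])   # differential mutation
    cross = rng.random(d) < CR                   # binomial crossover mask
    cross[rng.integers(d)] = True                # keep at least one mutant gene
    return np.where(cross, mutant, pop[i])
```

The trial vector then competes with pop[i], and the fitter of the two survives into the next generation.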

A swarm-based meta-heuristic search method called Elephant Herding Optimization (EHO) [14], which mimics the behavior of elephant groups, has been drawing research attention in the community during the past decade. Elephants in nature live in groups under the leadership of a female matriarch, while male elephants may leave these groups and live separately when growing up. Table 1 shows the jargon used and its naming in the solution space.

The above elephant behaviors can be implemented so as to deal with optimization problems. As a matter of fact, the EHO algorithm is capable of finding much better solutions than the DE, GA, and biogeography-based optimization methods on most benchmark problems [14]. The main features of the state-of-the-art nature-inspired optimization methods are provided in Table 2.

Table 1. EHO jargon and its counterparts in the solution space.

In nature | In the problem domain
Elephant | Decision variable (solution)
Clan of elephants | Recommended solutions
Matriarch | Best solution
Male which leaves the clan | Worst solution
Elephant position | How good the solution is (fitness)

Table 2. Advantages and weaknesses of four optimization algorithms.

Algorithm | Advantages | Weaknesses
GA | Supports multi-objective optimization; good for "noisy" or stochastic environments. | Easily stuck in a local minimum; long convergence time.
PSO | Quick convergence; works with non-differentiable functions and large dimensions. | Diversity of the population is not sufficient; easily stuck in a local minimum.
DE | Good convergence properties; keeps the diversity of the population. | Convergence is unstable; easily stuck in a local minimum.
EHO | Hard to get stuck in a local minimum; fewer control parameters. | Replacement of the worst member is random; parameters are fixed over all generations; stochastic initialization.

Investigating the strengths and weaknesses of the EHO is the main concern of the work proposed in this paper. We also inspect different ways to improve the performance of the EHO on recent benchmarks and real-world problems. The main contributions of our work are summarized as follows.

• Study the parameters of the regular EHO algorithm.

• Apply the standard EHO to well-known benchmark optimization problems.

• Analyze how the algorithm performance changes with regard to the algorithm parameter settings.

• Propose three different variants of the EHO algorithm.

• Verify and validate the modified EHOs to demonstrate that they supersede their counterpart optimization algorithms.

The rest of this paper is organized as follows: Section 2 presents the underlying principle of the original EHO algorithm, Section 3 reviews the related work, and Section 4 proposes three methods to enhance the EHO algorithm. The simulation results are illustrated in Section 5, and conclusions are finally drawn thereafter.

II. SIMPLE ELEPHANT HERDING

In nature, elephants live in social groups. These groups often consist of a number of clans, and each clan lives under a female leader, the matriarch. In addition, male elephants live separately from the groups, and they may leave their clans when growing up. To mimic and implement these behaviors so as to deal with challenging optimization problems, the principle of the EHO is simplified into the following three main rules [14]:

(1) The population has a number of clans, and each clan has a fixed number of male and female elephants.

(2) A fixed number of male elephants leave their clans and live alone.

(3) A matriarch female elephant is the leader of each clan.

As described in the pseudo code in Figure 1, an initial population of elephants is randomly created; this population is divided into a fixed number of clans and sorted according to fitness. Each clan is then updated individually. Clan updating operator: for each elephant j in clan ci, the matriarch determines its next position. Elephant j in clan ci is updated by:

$$x_{new,ci,j} = x_{ci,j} + \alpha \cdot (x_{best,ci} - x_{ci,j}) \cdot r \qquad (1)$$

where $x_{new,ci,j}$ is the newly updated position and $x_{ci,j}$ is the old position of elephant $j$ in clan $ci$. $\alpha \in [0,1]$ is a scale factor that determines the effect of the matriarch on $x_{ci,j}$; $x_{best,ci}$ represents the matriarch, i.e., the fittest elephant individual in clan $ci$; and $r \in [0,1]$ is a uniform random number.

However, the fittest elephant in each clan cannot be updated by (1). It is updated instead as:

$$x_{new,ci,j} = \beta \cdot x_{center,ci} \qquad (2)$$

where $\beta \in [0,1]$ is a scale factor and $x_{center,ci}$ is the centre of clan $ci$, whose $d$-th component is calculated as:

$$x_{center,ci,d} = \frac{1}{n_{ci}} \sum_{j=1}^{n_{ci}} x_{ci,j,d} \qquad (3)$$

where $1 \le d \le D$ indexes the dimension, $D$ is the total dimension, $n_{ci}$ is the number of elephants in clan $ci$, and $x_{ci,j,d}$ is the $d$-th dimension of the elephant individual $x_{ci,j}$.

Separating operator: as aforementioned, in elephant groups the male elephants leave their clans and live alone when they reach adulthood. This separating process is modelled, in each generation, by replacing the elephant individuals with the worst fitness, which is implemented as follows:



$$x_{worst,ci} = x_{min} + (x_{max} - x_{min} + 1) \cdot r \qquad (4)$$

where $x_{max}$ and $x_{min}$ are the upper and lower bounds of the positions of elephant individuals, respectively; $x_{worst,ci}$ is the worst elephant individual in clan $ci$; and $r$ is a random number in $[0, 1]$.

The EHO algorithm is competitive with other population-based algorithms and has the unique advantage of employing fewer control parameters. However, the randomly generated replacement is as likely to be a bad solution as a good one, so the new candidate solution is not guaranteed to be better than the previous one.

Because of these random quantities, the search operator does not exploit the information of the best solution, or of other solutions, that could help guide the EHO towards more promising areas of the search space.

The EHO thus has some weaknesses that affect its performance. The separating operator in (4) generates a random value for the worst member, so a change to a better value is not guaranteed. Another drawback of the EHO is that α in (1) is usually kept at a fixed value in [0, 1] during all steps of the algorithm and for all optimization problems; making this parameter adaptive during the evolution of the EHO could be a better approach.

III. RELATED WORK

Many researchers have used the EHO to cope with their mathematical or physical problems. The EHO was originally proposed by Gai-Ge Wang [14] in 2015 and examined against a total of 15 test-bench problems; it was shown to effectively find competitive solutions in most of the tested problems. Wang and his colleagues [15] also present a modified EHO that uses an elitism strategy, preventing the fittest elephant individuals from being ruined by the position-updating operators.

A statistical analysis using the t-test has been carried out to verify the effectiveness of this altered EHO against the BBO, DE, and GA, benchmarked on 20 standard test problems. In particular, the continuous stirred tank reactor (CSTR) and economic dispatch problems are the two main real-world case studies investigated.

Gupta has used the standard EHO algorithm [16] for fine-tuning a PID controller to minimize the integral square error.

The tuned controller is further applied to control the level of a three-tank system. The EHO-based tuning method shows enhanced performance in terms of both qualitative and quantitative results. Multilevel image thresholding [17] has also been chosen as a challenging application for the standard EHO. It is examined on some testbed images and also compared against a few well-known swarm intelligence methods, and the EHO has been shown to outperform these counterparts.

Figure 1. Pseudo code of the EHO algorithm.

Step 1: Initialization. Set the generation counter t = 1; initialize the population; set the maximum generation MaxGen.
Step 2: While t < MaxGen do
    Sort all the elephants according to their fitness.
    Clan updating:
    for c = 1 to nClan (for all clans in the elephant population) do
        for j = 1 to n_ci (for all elephants in clan c) do
            if x_ci,j = x_best,ci then
                Update x_ci,j (old elephant) and generate x_new,ci,j (new elephant) by Eq. (2).
            else
                Update x_ci,j (old elephant) and generate x_new,ci,j (new elephant) by Eq. (1).
            end if
        end for j
    end for c
    Separating operator:
    for c = 1 to nClan (for all clans in the elephant population) do
        Replace the worst elephant in clan c by Eq. (4).
    end for c
    Evaluate the population by the newly updated positions.
    t = t + 1.
Step 3: end while
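To make the loop in Figure 1 concrete, the following minimal Python sketch implements Eqs. (1) to (4) for a box-bounded minimization problem. The function name eho, the parameter defaults, and the clipping of the Eq. (4) replacement back into the bounds are illustrative assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def eho(f, dim, bounds, n_clans=5, clan_size=10, alpha=0.5, beta=0.1,
        max_gen=50, seed=None):
    """Minimal EHO loop for minimizing f over the box [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    clans = [rng.uniform(lo, hi, (clan_size, dim)) for _ in range(n_clans)]
    for _ in range(max_gen):
        for c in clans:
            fit = np.apply_along_axis(f, 1, c)
            c[:] = c[np.argsort(fit)]             # sort: c[0] is the matriarch
            center = c.mean(axis=0)               # Eq. (3): clan centre
            r = rng.random((clan_size, dim))
            new = c + alpha * (c[0] - c) * r      # Eq. (1): follow the matriarch
            new[0] = beta * center                # Eq. (2): update the matriarch
            new[-1] = lo + (hi - lo + 1) * rng.random(dim)  # Eq. (4): separate worst
            c[:] = np.clip(new, lo, hi)           # keep positions inside the box
    best = min((x for c in clans for x in c), key=f)
    return best, f(best)
```

For instance, eho(lambda x: float(np.sum(x**2)), dim=15, bounds=(-100.0, 100.0)) minimizes the sphere function under these assumptions.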


Nand K. Meena et al. [18] propose the first multi-objective optimization EHO algorithm. A complex real-life multi-objective Distributed Energy Resources (DERs) planning problem is formulated and handled using the proposed method, in which the aim is to determine the optimal sites and sizes of DERs so as to maximize the overall benefits of the utility and consumers.

The authors of [18] focus on some challenges and limitations of the standard EHO. In addition, they suggest different improvements to the EHO. Instead of updating the matriarch in the traditional way, they present a new method in which the matriarch is affected only by the mean average of the clan, not the beta parameter. The impact also extends to the separating process: rather than simply removing the worst solution, the authors observe that elephants keep their babies near the stronger females in order to protect them from hungry predators; therefore, it is suggested that the newly generated babies be allocated positions near the stronger female.

Parashar and their colleagues [19] propose a new variant of the EHO, namely MEHO. The update policy of the matriarch and the parametric study of alpha and beta are the two main issues raised in this work. They also consider the influence of the center of the clan on the previous position of the fittest elephant to find its new position, instead of directly replacing the position of the fittest elephant with the influence of the center of the clan. On top of that, they choose to use α = 0.8 and β = 0.1 merely based on trial and error. Moreover, the DER problem is used again as the primary case study, and the MEHO is found to be better than the existing swarm-based techniques. The EHO has also been used in the scheduling of appliances in a home energy management system [20]; compared with the conventional methods, it performs better in reducing both the cost and the waiting time.

In reference to the above relevant work, there are novel ideas to improve the performance of the EHO, since it still lacks some promising features that may enable the algorithm to converge fast and obtain the optimal solutions effectively. Moreover, the performance of the EHO should be verified against complex testbeds and real-world problems, and the optimal settings of the EHO parameters need to be investigated through a careful parametric study. Fortunately, the modified EHOs are developed in a simple way to preserve the advantage of simplicity as much as possible. Although the main modification made is the update equation as discussed above, the optimization results and performance comparison show the importance and potential of such a modification.

IV. PARAMETRIC STUDY OF EHO

To further improve the performance of the EHO algorithm, three new approaches are proposed, as shown in Figure 2, namely, (a) alpha tuning, (b) the cultural-based algorithm, and (c) biased initialization.

Figure 2. EHO improvement methods.

(a) Alpha tuning: The updating scale factor α of the EHO algorithm can be changed with the number of generations in a linear way. In the original EHO algorithm, α is considered to be a constant value between zero and 1 [14]. However, it can be changed over the generations on the basis of the simple linear function:

$$\alpha(t) = \alpha_{max} - (\alpha_{max} - \alpha_{min})\,\frac{t}{MaxGen} \qquad (5)$$

where $\alpha_{min}$ and $\alpha_{max}$ are the lower and upper bounds of the permissible range of α.

(b) Cultural algorithm: Another way to enhance the performance of the original EHO algorithm is to apply changes in the algorithm's rules. The culturally based algorithms [21, 22] can help to improve the algorithm by taking advantage of the history of the best individuals. In general, the cultural algorithm has a belief space consisting of the part of the population that is selected by the acceptance function to create an improved population, as shown in Figure 3. This population is used to replace a new or bad member of the original population with an improved individual [21, 22]. In our scheme, we deploy a cultural algorithm to replace the worst solution with a new solution randomly generated between the lower and upper ranges of the belief space.

Figure 3. Belief space in the cultural algorithm.

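A minimal sketch of this replacement step is given below, assuming the belief space is approximated by the per-dimension range spanned by an elite fraction of the clan; elite_frac and the function name cultural_replace_worst are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cultural_replace_worst(clan, fitness, elite_frac=0.2, seed=None):
    """Replace the worst elephant with a random sample drawn inside the
    belief space, here taken as the box spanned by the best individuals."""
    rng = np.random.default_rng(seed)
    order = np.argsort(fitness)                  # ascending cost
    n_elite = max(1, int(elite_frac * len(clan)))
    elite = clan[order[:n_elite]]                # accepted best individuals
    lower, upper = elite.min(axis=0), elite.max(axis=0)
    clan[order[-1]] = rng.uniform(lower, upper)  # new individual in belief space
    return clan
```

The key design choice is that the replacement is no longer uniform over the whole search range, as in Eq. (4), but restricted to the region the best individuals have historically occupied.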

(c) Biased algorithm:

The basic idea of the biased algorithm is to initiate the first population with good-fitness elephants. The evolution and the subsequent steps will not start until the population fitness has reached or exceeded a predetermined threshold [23].

Implementing the biased algorithm on the EHO can be based on adding a rule, i.e., a threshold, in the initialization step. This rule forces the first generation's population to have a minimum acceptable fitness or cost.
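As a concrete illustration, the sketch below rejection-samples the initial population until every elephant already meets the fitness threshold; the function name biased_init, the minimization convention, and the max_tries guard are assumptions of this example rather than the paper's implementation.

```python
import numpy as np

def biased_init(f, dim, bounds, pop_size, threshold, seed=None, max_tries=100000):
    """Draw random elephants but only accept those whose cost already
    meets the predetermined threshold, so generation 1 starts 'good'."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = []
    for _ in range(max_tries):
        x = rng.uniform(lo, hi, dim)
        if f(x) <= threshold:                # the added initialization rule
            pop.append(x)
            if len(pop) == pop_size:
                return np.asarray(pop)
    raise RuntimeError("threshold too strict for plain rejection sampling")
```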

V. SIMULATIONS

The EHO variants and the CSA have been examined against many difficult benchmark functions: the 15 CEC 2016 benchmark functions, 21 well-known functions from CEC 2005 to CEC 2014, three challenging engineering problems with many constraints, and two real-world problems. Several performance metrics are used, e.g., the best, worst, and mean costs, the standard deviation, and t-test values for statistical analysis.

a. BENCHMARK FUNCTIONS

The three enhanced algorithms are tested against 21 benchmark functions in comparison with the original EHO algorithm and the Crow Search Algorithm (CSA) [24, 25]. The functions, shown in Table 3, are selected from the IEEE CEC 2005, 2010, and 2014 suites [26, 27, 28].

• In this study, the best, mean, and worst costs are the performance metrics used to compare the different algorithms.

• The standard deviation (Std.) is employed to characterize the distribution of the results; it measures the variation of each data point relative to the mean.

• To investigate the statistical significance of the difference between two independent samples of equal size, a hypothesis test called the two-sample t-test [29] is used (see the sketch after this list).

• The success rate is the number of successful runs out of the total runs, where a successful run is one in which the algorithm finds the best solution within the predetermined accuracy level of the problem.
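The sketch below shows how such a two-sample t-test and a success rate can be computed with SciPy; the cost samples here are random stand-ins for illustration only, not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-ins for the 50 best costs per algorithm (illustrative only).
eho_costs = rng.normal(1.76e-3, 2.8e-4, size=50)
cultural_costs = rng.normal(1.8e-7, 9e-9, size=50)

# Two-sample t-test between two independent, equal-size samples.
t_stat, p_value = stats.ttest_ind(eho_costs, cultural_costs, equal_var=True)

# Success rate: share of runs that reached the accuracy level (here 0 +/- 0.1).
success_rate = 100.0 * np.mean(np.abs(eho_costs) <= 0.1)
print(t_stat, p_value, success_rate)
```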

In these tests, each function is tested over 50 runs, with the first population generated randomly in every run, and the above performance metrics (best, mean, worst, standard deviation, and t-test) are recorded. Each run uses 50 iterations, a population of 50, and 15 dimensions.

The results in Table 4 show that the cultural-based EHO outperforms the other algorithms with regard to the best cost, worst cost, mean cost, standard deviation, and t-test in 12 problems (f1, f3, f4, f7, f8, f9, f11, f12, f15, f17, f18, f20). The original EHO has the best cost and standard deviation in three problems (f10, f14, f19). The alpha-EHO has the best cost in two problems (f2, f5), the biased EHO produces the best results in two problems (f6, f13), and the CSA obtains the best costs in two problems (f4, f21). According to the t-test values, the cultural-based EHO surpasses the standard EHO on 19 of the 21 benchmark functions.

In summary, the cultural-based EHO has the best results and convergence for most of the test benches. Figure 4 shows the convergence of the five algorithms on the Ackley problem as an example, where the cultural-based EHO converges best. Figure 5 shows the best cost over all 50 runs for the four EHO algorithms, and the cultural-based variant again has the lowest costs in all runs. Figures 6, 7, and 8 show the differences between the costs at different generations (1, 10, 30, 50) in one run of the cultural algorithm. Figures 9, 10, and 11 demonstrate the costs of all the population members at generation 20 for the alpha-tuning, biased, and standard EHO, respectively.

Table 3. Benchmark functions.

Test function | Formula | Dim | Search range | Optimum ± accuracy
Ackley | $f(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{D}\sum_{i=1}^{D}x_i^2}\right)-\exp\left(\tfrac{1}{D}\sum_{i=1}^{D}\cos(2\pi x_i)\right)+20+e$ | 20 | [-32, 32] | 0±0.1
Dixon & Price | $f(x)=(x_1-1)^2+\sum_{i=2}^{D}i\,(2x_i^2-x_{i-1})^2$ | 20 | [-10, 10] | 0±100
Griewank | $f(x)=\sum_{i=1}^{D}\tfrac{x_i^2}{4000}-\prod_{i=1}^{D}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 20 | [-600, 600] | 0±0.1
Levy | $f(x)=\sin^2(\pi w_1)+\sum_{i=1}^{D-1}(w_i-1)^2\left[1+10\sin^2(\pi w_i+1)\right]+(w_D-1)^2\left[1+\sin^2(2\pi w_D)\right]$, $w_i=1+\tfrac{x_i-1}{4}$ | 20 | [-10, 10] | 0±100
Perm 0,D,β | $f(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{D}(j+\beta)\left(x_j^{\,i}-\tfrac{1}{j^{\,i}}\right)\right)^2$ | 20 | [-20, 20] | 0±10
Perm D,β | $f(x)=\sum_{i=1}^{D}\left(\sum_{j=1}^{D}(j^{\,i}+\beta)\left(\left(\tfrac{x_j}{j}\right)^{i}-1\right)\right)^2$ | 20 | [-20, 20] | 0±10
Powell | $f(x)=\sum_{i=1}^{D/4}\left[(x_{4i-3}+10x_{4i-2})^2+5(x_{4i-1}-x_{4i})^2+(x_{4i-2}-2x_{4i-1})^4+10(x_{4i-3}-x_{4i})^4\right]$ | 20 | [-4, 5] | 0±0.1
Rastrigin | $f(x)=10D+\sum_{i=1}^{D}\left[x_i^2-10\cos(2\pi x_i)\right]$ | 20 | [-5.12, 5.12] | 0±100
Sphere | $f(x)=\sum_{i=1}^{D}x_i^2$ | 20 | [-100, 100] | 0±0.1
Rosenbrock | $f(x)=\sum_{i=1}^{D-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 20 | [-10, 10] | 0±100
Alpine | $f(x)=\sum_{i=1}^{D}\left|x_i\sin(x_i)+0.1x_i\right|$ | 20 | [0, 10] | 0±0.1
Periodic | $f(x)=1+\sum_{i=1}^{D}\sin^2(x_i)-0.1\exp\left(-\sum_{i=1}^{D}x_i^2\right)$ | 20 | [-10, 10] | 0±0.1
Qing | $f(x)=\sum_{i=1}^{D}(x_i^2-i)^2$ | 20 | [-500, 500] | 0±100
Quartic | $f(x)=\sum_{i=1}^{D}i\,x_i^4+\mathrm{rand}[0,1)$ | 20 | [-1.28, 1.28] | 0±0.1
Salomon | $f(x)=1-\cos\left(2\pi\sqrt{\sum_{i=1}^{D}x_i^2}\right)+0.1\sqrt{\sum_{i=1}^{D}x_i^2}$ | 20 | [-100, 100] | 0±0.1
Styblinski-Tang | $f(x)=\tfrac{1}{2}\sum_{i=1}^{D}\left(x_i^4-16x_i^2+5x_i\right)$ | 20 | [-5, 5] | 0±100
Sum of different powers | $f(x)=\sum_{i=1}^{D}|x_i|^{\,i+1}$ | 20 | [-1, 1] | 0±0.1
Sum squares | $f(x)=\sum_{i=1}^{D}i\,x_i^2$ | 20 | [-10, 10] | 0±0.1
Trid | $f(x)=\sum_{i=1}^{D}(x_i-1)^2-\sum_{i=2}^{D}x_i x_{i-1}$ | 20 | [-400, 400] | 0±100
Zakharov | $f(x)=\sum_{i=1}^{D}x_i^2+\left(\sum_{i=1}^{D}0.5\,i\,x_i\right)^2+\left(\sum_{i=1}^{D}0.5\,i\,x_i\right)^4$ | 20 | [-5, 10] | 0±0.1
Schwefel | $f(x)=418.9829D-\sum_{i=1}^{D}x_i\sin\left(\sqrt{|x_i|}\right)$ | 20 | [-500, 500] | 0±100

Table 4. Results obtained by the five algorithms on benchmark functions f1–f21, in the order of Table 3 (columns: EHO | Alpha-EHO | Cultural-based EHO | Biased-EHO | CSA).

f1:
Best: 1.18e-3 | 1.1362e-003 | 162.0570e-009 | 1.960115e-004 | 2.79
Worst: 2.34E-03 | 2.5611e-003 | 200.9038e-009 | 7.6533e-04 | 6.7663
Mean: 1.76E-03 | 1.8388e-003 | 182.9120e-009 | 4.3766e-04 | 4.3995
Std.: 283.4368e-006 | 2.59e-4 | 8.9207e-009 | 174.9501e-006 | 830.9771e-003
Success rate: 100 | 100 | 100 | 100 | 0
Function evaluations: 2500 | 2500 | 2500 | 197.6900e+003 | -
t-test: - | 1.4512 | -48.0216 | - | -

f2:
Best: 787.9677e-003 | 721.2090e-003 | 751.6218e-003 | 741.4798e-003 | 2.21E+01
Worst: 0.9417608 | 915.6748e-003 | 797.1412e-003 | 938.7157e-003 | 9.36E+02
Mean: 0.9006011 | 865.2964e-003 | 793.6545e-003 | 889.2522e-003 | 2.10E+02
Std.: 0.0270178 | 37.3145e-003 | 7.6577e-003 | 45.2834e-003 | 165.8166
Success rate: 100 | 100 | 100 | 100 | -
Function evaluations: 2500 | 2500 | 2500 | 184.0440e+003 | -
t-test: - | -5.4189 | -1.8464 | - | -

f3:
Best: 5.49E-05 | 41.1627e-06 | 1.5455e-012 | 50.3490e-06 | 2.8968
Worst: 2.42E-04 | 156.3767e-06 | 2.3610e-012 | 128.3107e-06 | 10.5429
Mean: 9.92E-05 | 86.3872e-06 | 1.9667e-012 | 88.8416e-006 | 6.072
Std.: 3.23E-05 | 25.8627e-006 | 190.9415e-015 | 17.1894e-06 | 1.8255
Success rate: 100 | 100 | 100 | 100 | 0
Function evaluations: 2500 | 2500 | 2500 | 694.8120e+003 | -
t-test: - | -2.1896 | -21.7167 | - | -

f4:
Best: 1.1882 | 1.4078 | 1.1062 | 1.2477 | 0.9597862
Worst: 1.9773 | 1.9366 | 1.2701 | 1.9780 | 5.0404
Mean: 1.7261 | 1.6509 | 1.2556 | 1.6274 | 2.3153
Std.: 1.37E-01 | 137.6775e-003 | 28.5703e-003 | 159.3046e-003 | 0.796673
Success rate: 100 | 100 | 100 | 100 | 100
Function evaluations: 2500 | 2500 | 2500 | 228.6860e+003 | -
t-test: - | -2.7377 | -23.77 | - | -

f5:
Best: 520.4241 | 130.1511 | 284.8803 | 629.5494 | 8311463.3904
Worst: 2.8048e+003 | 2.1313e+003 | 532.5667 | 2.1009e+003 | 3.419098e+22
Mean: 1.6488e+003 | 1.1605e+003 | 402.3765 | 1.5733e+003 | 7.485552e+20
Std.: 478.0793 | 542.9126 | 61.0312 | 376.0461 | 4.835696e+21
Success rate: - | - | - | - | 0
Function evaluations: 2500 | 2500 | 2500 | 246.9170e+003 | -
t-test: - | -4.772 | -18.286 | - | -

f6:
Best: 5.9493e+045 | 5.3693e+045 | 5.8018e+045 | 22.4994e+039 | 1.499867E+47
Worst: 19.7937e+048 | 34.6479e+048 | 145.1404e+048 | 24.0315e+045 | 2.64236E+49
Mean: 4.2570e+048 | 5.0527e+048 | 27.6148e+048 | 4.5413e+045 | 6.6276E+48
Std.: 4.6916e+048 | 7.9282e+048 | 37.0331e+048 | 5.7673e+045 | 6.2185E+48
Success rate: - | - | - | - | -
Function evaluations: 2500 | 2500 | 2500 | 47.2120e+003 | -
t-test: - | 0.6107 | 4.4245 | - | -

f7:
Best: 9.34E-07 | 68.2033e-009 | 40.0674e-015 | 1.8146e-006 | 4.6368
Worst: 1.20E-05 | 2.1085e-006 | 70.7967e-015 | 11.8838e-006 | 83.0815
Mean: 5.65E-06 | 716.8296e-09 | 56.6443e-015 | 5.2037e-006 | 29.3331
Std.: 2.39E-06 | 399.1951e-009 | 7.3109e-015 | 1.9691e-006 | 14.9122
Success rate: 100 | 100 | 100 | 100 | 0
Function evaluations: 2500 | 2500 | 2500 | 313.8570e+003 | -
t-test: - | -12.6974 | -135.06 | - | -

f8:
Best: 1.89E-05 | 13.9509e-06 | 568.4342e-015 | 17.1929e-06 | 54.1176
Worst: 6.00E-05 | 55.9127e-06 | 795.8079e-015 | 50.7350e-06 | 117.4514
Mean: 3.75E-05 | 32.8959e-06 | 677.5736e-015 | 31.9998e-06 | 97.1837
Std.: 9.06E-06 | 10.4060e-06 | 52.4259e-015 | 8.4215e-006 | 14.238
Success rate: 100 | 100 | 100 | 100 | -
Function evaluations: 2500 | 2500 | 2500 | 687.1450e+003 | -
t-test: - | -2.3596 | -29.267 | - | -

f9:
Best: 7.11E-08 | 21.0886e-06 | 791.5914e-015 | 16.7049e-06 | 0.5642772
Worst: 2.79E-07 | 62.6962e-06 | 1.1973e-012 | 71.8044e-06 | 3.3775
Mean: 1.60E-07 | 40.1746e-06 | 995.8725e-015 | 35.5740e-06 | 1.6871
Std.: 4.38E-08 | 10.8875e-006 | 87.8833e-015 | 10.5710e-006 | 0.6098371
Success rate: 100 | 100 | 100 | 100 | -
Function evaluations: 2500 | 2500 | 2500 | 41.2140e+003 | -
t-test: - | 25.9879 | -25.830 | - | -

f10:
Best: 18.7035 | 18.7865 | 18.7489 | 18.7913 | 992.2255
Worst: 18.8044 | 18.8932 | 18.7712 | 18.9066 | 21278.3004
Mean: 18.7818 | 18.8456 | 18.7680 | 18.8460 | 7812.3128
Std.: 1.91E-02 | 25.2688e-003 | 4.6502e-003 | 21.1188e-003 | 4107.2245
Success rate: 100 | 100 | 100 | 100 | -
Function evaluations: 2500 | 2500 | 2500 | 214.7410e+003 | -
t-test: - | 14.2425 | -4.963 | - | -

f11:
Best: 1.2735e-003 | 1.0977e-003 | 220.4753e-009 | 1.3513e-003 | 15.7037
Worst: 1.6016e-003 | 1.3019e-003 | 241.5231e-009 | 1.6035e-003 | 30.7874
Mean: 1.4709e-003 | 1.2153e-003 | 232.4065e-009 | 1.4626e-003 | 23.8597
Std.: 72.7031e-006 | 51.9191e-006 | 4.4326e-009 | 55.2276e-006 | 3.3591
Success rate: 100 | 100 | 100 | 100 | -
Function evaluations: 2500 | 2500 | 2500 | 241.8220e+003 | -
t-test: - | -20.2306 | -142.98 | - | -

f12:
Best: 900.0003e-003 | 900.0002e-003 | 0.9 | 3.1192 | 3.4363
Worst: 900.0012e-003 | 900.0010e-003 | 0.9 | 5.0862 | 6.0638
Mean: 900.0012e-003 | 900.0005e-003 | 0.9 | 4.3676 | 5.0752
Std.: 202.7760e-009 | 155.0651e-009 | 1.3312e-015 | 0.43458 | 0.53818
Success rate: 100 | 100 | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -19.390 | -41.8455 | - | -

f13:
Best: 587.4021 | 484.7694 | 551.6289 | 134 | 2930900.2814
Worst: 1.0837e+003 | 1.0605e+003 | 873.7168 | 396 | 66390354.2396
Mean: 854.5547 | 789.1110 | 757.5113 | 268.6 | 25958506.2459
Std.: 124.1995 | 125.4644 | 86.8255 | 67.1459 | 15766147.1417
Success rate: - | - | - | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -2.621 | -4.528 | - | -

f14:
Best: 9.6533e-006 | 18.3942e-006 | 31.7537e-006 | 1.2858 | 0.015031
Worst: 2.8384e-003 | 2.2010e-003 | 2.1679e-003 | 28.6086 | 0.14094
Mean: 578.7191e-006 | 530.7382e-006 | 424.2425e-006 | 15.1107 | 0.062138
Std.: 564.4367e-006 | 529.8290e-006 | 497.3141e-006 | 6.5657 | 0.030451
Success rate: 100 | 100 | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | 0.438 | -1.452 | - | -

f15:
Best: 837.0900e-006 | 477.6948e-006 | 113.2627e-009 | 0.2 | 1.7011
Worst: 3.1305e-003 | 1.9302e-003 | 134.8178e-009 | 0.3 | 4.4315
Mean: 1.6716e-003 | 868.6047e-006 | 124.5356e-009 | 0.282 | 2.9958
Std.: 483.3371e-006 | 292.9177e-006 | 5.1651e-009 | 0.038809 | 0.53991
Success rate: 100 | 100 | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -10.046 | -24.453 | - | -

f16:
Best: -520.7107 | -563.6670 | -498.0462 | -780 | -62.9384
Worst: -399.1790 | -392.7707 | -455.2663 | -720 | -47.8004
Mean: -447.1662 | -464.9557 | -473.9140 | -760.64 | -53.5499
Std.: 29.2783 | 30.9563 | 7.4942 | 15.896 | 3.1230
Success rate: - | - | - | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -2.952 | -6.258 | - | -

f17:
Best: 622.9521e-015 | 357.7808e-018 | 1.9276e-024 | 0 | 2.2863e-08
Worst: 22.3080e-012 | 12.2539e-012 | 867.7847e-024 | 2 | 2.216e-05
Mean: 6.0716e-012 | 1.2878e-012 | 321.2879e-024 | 0.98 | 2.6207e-06
Std.: 5.3387e-012 | 2.5296e-012 | 217.4749e-024 | 0.62237 | 3.5495e-06
Success rate: 100 | 100 | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -5.725 | -8.041 | - | -

f18:
Best: 2.2385e-006 | 1.5237e-006 | 51.2725e-015 | 49 | 4.3617
Worst: 7.1856e-006 | 5.2608e-006 | 84.2107e-015 | 114 | 23.093
Mean: 4.7144e-006 | 2.9064e-006 | 63.8312e-015 | 75.84 | 12.6643
Std.: 1.1618e-006 | 958.0762e-009 | 5.9263e-015 | 15.5279 | 4.1332
Success rate: 100 | 100 | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -8.489 | -28.693 | - | -

f19:
Best: -30.1433 | -27.4804 | -17.4525 | -14 | 1489.853
Worst: 5.1025 | 4.5037 | -4.3215 | 50 | 13888.6157
Mean: -6.6518 | -10.1116 | -7.1428 | 28.52 | 6551.7862
Std.: 7.5964 | 6.7243 | 2.6867 | 16.1198 | 2605.2398
Success rate: 2 | - | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -2.411 | -0.430 | - | -

f20:
Best: 35.7857e-006 | 6.3069e-006 | 638.1831e-015 | 71 | 19.7235
Worst: 1.7726e-003 | 448.0683e-006 | 2.0048e-012 | 208.3125 | 146.0828
Mean: 939.9226e-006 | 31.0391e-006 | 1.2104e-012 | 156.2438 | 74.793
Std.: 469.8498e-006 | 62.1967e-006 | 332.3379e-015 | 30.3803 | 33.6455
Success rate: 100 | 100 | 100 | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -13.560 | -14.145 | - | -

f21:
Best: 4.7589e+003 | 5.1955e+003 | 5.5711e+003 | 3412.2221 | 4.0952e+003
Worst: 6.4497e+003 | 6.5117e+003 | 6.5927e+003 | 4984.5693 | 6.1471e+003
Mean: 5.9121e+003 | 5.8858e+003 | 6.2104e+003 | 4586.1723 | 5.4106e+003
Std.: 320.5722 | 290.2473 | 288.1345 | 312.2525 | 408.8361
Success rate: - | - | - | - | -
Function evaluations: 2500 | 2500 | 2500 | - | -
t-test: - | -0.430 | 4.893 | - | -


Figure 4. Convergence of the five algorithms for f1.

Figure 5. Best costs for the five algorithms in 50 runs for f1.

Figure 6. Generations (1, 10) for the cultural-based EHO for f1.

Figure 7. Generations (10, 30) for the cultural-based EHO for f1.

Figure 8. Generations (30, 50) for the cultural-based EHO for f1.

Figure 9. Generation 20 for the alpha-tuning EHO for f1.

Figure 10. Generation 20 for the biased EHO for f1.

Figure 11. Generation 20 for the EHO for f1.


b. IEEE-CEC 2016 PROBLEMS

The demanding problems of IEEE-CEC 2016 [30] are applied here to test the four EHO algorithms, and their results are compared with the CSA. The 15 problems used are given in Table 5 [30]. The running parameters of the five optimization algorithms are:

• Number of runs: 50

• Population size: 100 members

• Problem dimension: {10, 30}

• Search range: [-100, 100]

In the simulations, the cultural-based EHO shows obvious superiority over the other four algorithms in most of the IEEE-CEC 2016 problems. Table 6 presents the results for the CEC 2016 problems in 10 dimensions, where the cultural-based algorithm outperforms the other four algorithms in 11 problems. Table 7 gives the results for 30 dimensions, and the cultural-based algorithm again outperforms the other four algorithms in 11 problems.

Table 5. IEEE-CEC 2016 expensive problems.

Type | No. | Function name
Unimodal functions | F1 | Rotated Bent Cigar Function
 | F2 | Rotated Discus Function
Simple multimodal functions | F3 | Shifted and Rotated Weierstrass Function
 | F4 | Shifted and Rotated Schwefel's Function
 | F5 | Shifted and Rotated Katsuura Function
 | F6 | Shifted and Rotated HappyCat Function
 | F7 | Shifted and Rotated HGBat Function
 | F8 | Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function
 | F9 | Shifted and Rotated Expanded Scaffer's F6 Function
Hybrid functions | F10 | Hybrid Function 1 (N=3)
 | F11 | Hybrid Function 2 (N=4)
 | F12 | Hybrid Function 3 (N=5)
Composition functions | F13 | Composition Function 1 (N=5)
 | F14 | Composition Function 2 (N=3)
 | F15 | Composition Function 3 (N=5)

Table 6. Results of the 10D IEEE-CEC 2016 problems (columns: EHO | Cultural-based EHO | Alpha-EHO | Biased-EHO | CSA).

F1:
Best: 476.6334e+006 | 26930.6034 | 543709413.3418 | 543885476.2903 | 78373464.6307
Worst: 1.6013e+009 | 29433.6311 | 3286989419.865 | 3039136007.9487 | 605031870.6001
Mean: 679.1173e+006 | 27307.3598 | 1544262620.6233 | 1400347176.632 | 207707303.6967
Std. deviation: 679.1173e+006 | 532.6941 | 722121533.1397 | 558579938.9056 | 113141141.0649

F2:
Best: 8.6585e+003 | 7560.641 | 5621.2136 | 10896.2612 | 2519.2961
Worst: 21.7472e+003 | 18260.9905 | 19610.8295 | 120105.8651 | 17341.0829
Mean: 14.0750e+003 | 14685.9056 | 13729.3504 | 39067.5674 | 8301.8423
Std. deviation: 3.4412e+003 | 2323.0534 | 3434.2188 | 24468.3507 | 3287.5858

F3:
Best: 307.4925 | 303.5014 | 306.4127 | 308.1323 | 304.4735
Worst: 310.9731 | 303.5594 | 310.8039 | 311.1062 | 308.9268
Mean: 309.4205 | 303.5227 | 309.2074 | 309.9108 | 306.6765
Std. deviation: 908.0998e-003 | 0.017034 | 1.0318 | 0.69365 | 0.84903

F4:
Best: 1.0756e+003 | 679.0465 | 1119.3182 | 1817.4554 | 1321.1737
Worst: 2.4996e+003 | 679.0465 | 2416.5572 | 2394.3987 | 2183.3466
Mean: 1.9182e+003 | 679.0465 | 1692.2138 | 2144.0573 | 1843.2074
Std. deviation: 357.6202 | 2.7799e-09 | 280.8037 | 139.2615 | 174.5322

F5:
Best: 501.0260 | 500.3148 | 500.7144 | 500.6482 | 500.8605
Worst: 502.4002 | 502.53 | 502.498 | 501.8891 | 502.8656
Mean: 501.8609 | 501.7351 | 501.8083 | 501.3726 | 501.9438
Std. deviation: 346.7572e-003 | 0.46481 | 0.40741 | 0.21849 | 0.42897

F6:
Best: 601.1271 | 600.1689 | 601.5018 | 600.9856 | 600.3217
Worst: 602.9563 | 600.3201 | 603.5601 | 602.9936 | 601.9129
Mean: 602.2387 | 600.2365 | 602.4673 | 602.2365 | 600.5905
Std. deviation: 336.8449e-003 | 0.037162 | 0.51941 | 0.39766 | 0.28748

F7:
Best: 700.4629 | 700.2531 | 703.8297 | 703.0693 | 700.3881
Worst: 727.4164 | 700.4397 | 726.8067 | 732.3817 | 707.1389
Mean: 709.766 | 700.3661 | 711.9324 | 712.8848 | 701.8247
Std. deviation: 5.2147 | 0.037641 | 5.1721 | 4.9724 | 1.6827

F8:
Best: 815.4744 | 801.1476 | 816.5529 | 809.9271 | 803.8898
Worst: 2.0079e+003 | 801.1678 | 1398.4834 | 3423.8554 | 815.8568
Mean: 924.6131 | 801.1554 | 921.1423 | 931.3237 | 808.0091
Std. deviation: 173.1187 | 0.0043426 | 127.332 | 378.2212 | 3.0306

F9:
Best: 903.0809 | 902.8396 | 903.0931 | 903.6344 | 903.1924
Worst: 904.1328 | 903.7779 | 903.9837 | 904.3532 | 903.8987
Mean: 903.6689 | 903.5118 | 903.5921 | 904.1257 | 903.6083
Std. deviation: 198.4233e-003 | 0.19851 | 0.18208 | 0.14553 | 0.1815

F10:
Best: 7.8921e+003 | 2768.9436 | 4465.7025 | 5866.2838 | 1683.0238
Worst: 333.9811e+003 | 4989.4997 | 579924.011 | 201520.5216 | 8021.2105
Mean: 145.1549e+003 | 3594.4825 | 195469.9555 | 69079.1342 | 68661.8212
Std. deviation: 85.0110e+003 | 456.4824 | 118282.866 | 52688.0301 | 13049.0298

F11:
Best: 1.1049e+003 | 1102.7656 | 1104.7478 | 1105.9125 | 1103.685
Worst: 1.1193e+003 | 1102.8038 | 1120.033 | 1134.5742 | 1112.7676
Mean: 1.1104e+003 | 1102.7828 | 1110.9685 | 1113.8367 | 1106.8876
Std. deviation: 3.3488 | 0.0087617 | 3.0512 | 5.251 | 1.8597

F12:
Best: 1.2602e+003 | 1243.1786 | 1232.1499 | 1246.8621 | 1236.5261
Worst: 1.4153e+003 | 1243.3425 | 1406.5574 | 1516.3017 | 1377.9581
Mean: 1.3608e+003 | 1243.2427 | 1353.7297 | 1397.4109 | 1336.9823
Std. deviation: 27.1754 | 0.044518 | 20.0835 | 57.3768 | 38.0628

F13:
Best: 1.6300e+003 | 1614.938 | 1625.3437 | 1625.6502 | 1618.4759
Worst: 1.7123e+003 | 1614.9383 | 1682.8461 | 1930.7268 | 1658.8019
Mean: 1.6502e+003 | 1614.938 | 1642.168 | 1732.5269 | 1630.9939
Std. deviation: 16.1831 | 5.2379e-05 | 12.3927 | 76.8228 | 7.4752

F14:
Best: 1.5950e+003 | 1599.2116 | 1595.7564 | 1598.1421 | 1592.2272
Worst: 1.6033e+003 | 1606.2617 | 1611.6879 | 1615.5692 | 1604.3916
Mean: 1.6033e+003 | 1603.4929 | 1603.9688 | 1607.3778 | 1597.9797
Std. deviation: 3.3120 | 1.6497 | 3.7437 | 3.5151 | 2.9001

F15:
Best: 1.5644e+003 | 1901.4794 | 1520.4835 | 2025.6828 | 1513.4857
Worst: 1.9715e+003 | 1902.3574 | 1983.7441 | 2092.0996 | 1907.7301
Mean: 1.8046e+003 | 1901.9172 | 1844.3663 | 2060.3489 | 1626.152
Std. deviation: 119.9243 | 0.18939 | 126.8466 | 20.2324 | 140.8064

Table 7. Results for the 30D IEEE-CEC 2016 expensive problems (columns: EHO | Cultural-based EHO | Alpha-EHO | Biased-EHO | CSA).

F1:
Best: 25071990425.438 | 450559.6425 | 29528811961.872 | 24928916015.582 | 13171184633.255
Worst: 53378995299.747 | 1987089.0971 | 53329652661.756 | 45863858068.055 | 31358938546.896
Mean: 42345648875.289 | 1247391.2453 | 41701382589.146 | 33756846863.704 | 21806882037.949
Std. deviation: 5844081921.496 | 426531.6689 | 5471364154.9996 | 4687926931.753 | 4438865859.4255

F2:
Best: 45.1055e+003 | 52525.2148 | 48711.5476 | 50070.3571 | 32985.3746
Worst: 69.0070e+003 | 166254.0303 | 69524.3185 | 62583.001 | 70028.1814
Mean: 57.9996e+003 | 73848.7125 | 58604.5283 | 58290.145 | 51287.9624
Std. deviation: 5.1858e+003 | 18790.2975 | 4840.6249 | 3063.5928 | 6803.6584

F3:
Best: 334.8906 | 319.8749 | 331.0212 | 337.2931 | 329.5939
Worst: 343.0315 | 321.2588 | 342.4594 | 346.1315 | 337.8899
Mean: 339.6997 | 320.759 | 338.9993 | 343.432 | 334.696
Std. deviation: 1.5400 | 0.24358 | 2.3114 | 1.7032 | 1.897

F4:
Best: 6.4189e+003 | 2794.2691 | 5377.4983 | 7734.5446 | 5762.6152
Worst: 8.6481e+003 | 3412.2772 | 7841.71 | 9050.8971 | 8303.0006
Mean: 7.5487e+003 | 3324.7306 | 6855.4943 | 8460.183 | 7574.2023
Std. deviation: 534.1249 | 142.803 | 594.8491 | 327.1361 | 459.0583

F5:
Best: 502.3455 | 502.6233 | 501.6432 | 502.19 | 502.6151
Worst: 504.5754 | 504.5586 | 504.6376 | 503.7745 | 504.6175
Mean: 503.6954 | 503.6871 | 503.5726 | 503.0151 | 503.6286
Std. deviation: 519.5096e-003 | 0.44841 | 0.61645 | 0.40723 | 0.50067

F6:
Best: 604.3295 | 600.4878 | 604.2996 | 603.8698 | 602.7829
Worst: 605.7908 | 600.7756 | 605.8607 | 604.9776 | 604.3559
Mean: 605.1283 | 600.6586 | 605.1713 | 604.4266 | 603.5618
Std. deviation: 0.33622 | 0.059595 | 0.34085 | 0.26932 | 0.33779

F7:
Best: 769.6439 | 700.2236 | 766.5284 | 739.9443 | 721.7425
Worst: 818.1561 | 700.457 | 815.1206 | 788.1292 | 754.7635
Mean: 793.1768 | 700.3238 | 790.1725 | 768.6317 | 740.7332
Std. deviation: 11.0899 | 0.06625 | 9.8791 | 10.3283 | 7.1868

F8:
Best: 840453.2023 | 810.0151 | 600341.7633 | 223954.9416 | 9512.6266
Worst: 13052171.6604 | 826.8067 | 14676994.6806 | 6307667.4086 | 1166745.6455
Mean: 3757928.6722 | 819.6547 | 3973603.5048 | 1960671.1923 | 394377.2891
Std. deviation: 2303748.0601 | 3.7477 | 2496502.1451 | 1233570.5022 | 260588.0084

F9:
Best: 912.7482 | 912.5223 | 912.4824 | 913.5332 | 912.6644
Worst: 913.8757 | 913.7441 | 913.8618 | 914.3087 | 913.7561
Mean: 913.4211 | 913.195 | 913.3761 | 913.9741 | 913.4442
Std. deviation: 0.25517 | 0.23765 | 0.30178 | 0.16821 | 0.2135

F10:
Best: 14733527.9993 | 333758.2462 | 14852209.7799 | 7870601.3752 | 2561960.8737
Worst: 60995850.8279 | 556395.9502 | 69513576.3205 | 59110513.5707 | 37664545.6757
Mean: 34554946.0192 | 457975.8516 | 36469663.7262 | 27855118.0018 | 14580371.7978
Std. deviation: 11672408.3196 | 51246.5933 | 10630272.8786 | 11028805.7446 | 8944725.9387

F11:
Best: 1231.9082 | 1120.2693 | 1229.9736 | 1272.0741 | 1161.5425
Worst: 1352.134 | 1121.7015 | 1317.5854 | 1599.2919 | 1256.5026
Mean: 1289.2464 | 1121.1726 | 1279.2539 | 1390.1148 | 1212.614
Std. deviation: 28.9066 | 0.2816 | 22.5904 | 67.9763 | 26.6938

F12:
Best: 1913.4527 | 1548.1814 | 1741.5517 | 2034.352 | 1402.3816
Worst: 3964.2655 | 1577.5493 | 3280.7889 | 4348.6174 | 2431.2859
Mean: 2592.3358 | 1560.0705 | 2362.038 | 2896.8465 | 1988.6405
Std. deviation: 408.1982 | 6.2269 | 374.4833 | 354.13 | 223.1858

F13:
Best: 1906.6354 | 1642.3662 | 1876.9697 | 2066.2302 | 1795.9864
Worst: 2446.3503 | 1642.6249 | 2370.336 | 2975.9324 | 2050.8323
Mean: 2131.5282 | 1642.4427 | 2075.9421 | 2522.1663 | 1905.6973
Std. deviation: 120.5532 | 0.054944 | 117.2619 | 231.5621 | 62.0819

F14:
Best: 1701.7921 | 1620.722 | 1708.5882 | 1697.4862 | 1665.568
Worst: 1828.681 | 1632.2196 | 1831.0356 | 1854.2502 | 1738.8439
Mean: 1759.1441 | 1625.6032 | 1748.956 | 1763.696 | 1706.1746
Std. deviation: 25.3019 | 2.3815 | 23.1385 | 32.5125 | 16.8615

F15:
Best: 2665.814 | 2302.6711 | 2527.6599 | 2642.421 | 2405.1922
Worst: 3055.8917 | 2338.2189 | 3134.5143 | 2993.7823 | 31358938546.896
Mean: 2934.7886 | 2321.1019 | 2875.8328 | 2866.9204 | 2724.683
Std. deviation: 91.2718 | 8.7932 | 120.9519 | 88.5989 | 107.3513

c. ENGINEERING OPTIMIZATION PROBLEMS

Some engineering problems, including the gear train design problem, the three-bar truss design problem, and the welded beam design problem, are used here to evaluate the performance of the proposed algorithms. Table 8 shows that the cultural-based, standard, and biased EHO achieve the best costs in the three-bar truss problem. In Table 9, the biased algorithm has the best cost in the welded beam problem, outperforming the other four algorithms. The cultural-based algorithm has the best cost in the gear train problem in Table 10, better than the other four algorithms.

1. Three-bar truss design problem


In the three-bar truss design problem [24, 31] shown in Figure 12, the main objective is to minimize the volume of the statically loaded three-bar truss subject to the stress (σ) constraint on each of the truss members. The objective and the three constraints are as follows:

Minimize:
$$f(x) = (2\sqrt{2}\,x_1 + x_2)\,l$$

Constraints:
$$g_1 = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0$$
$$g_2 = \frac{x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0$$
$$g_3 = \frac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0$$

with $0 \le x_1, x_2 \le 1$, $l = 100\ \mathrm{cm}$, $P = 2\ \mathrm{kN/cm^2}$, and $\sigma = 2\ \mathrm{kN/cm^2}$.

Figure 12. Three-bar truss design problem.

Table 8. Three-bar truss design problem optimization results.

Algorithm | Best | Worst | Mean | Std.
EHO | 263.8958 | 264.13371 | 263.91332 | 0.038785
Cultural-based EHO | 263.8958 | 264.27791 | 263.91994 | 0.05964
A-EHO | 263.8964 | 264.02215 | 263.92006 | 0.025482
Biased-EHO | 263.8958 | 263.90048 | 263.89775 | 0.002261
CSA | 263.8964 | 263.92287 | 263.90664 | 0.007577

2. Welded beam design problem

The welded beam problem shown in Figure 13 has seven inequality constraints [32], where τ is the shear stress, σ is the bending stress in the beam, $P_c$ is the buckling load on the bar, and δ is the deflection of the beam.

Minimize:
$$f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14 + x_2)$$

Constraints:
$$g_1 = \tau(x) - \tau_{max} \le 0, \quad g_2 = \sigma(x) - \sigma_{max} \le 0, \quad g_3 = x_1 - x_4 \le 0$$
$$g_4 = 0.10471\,x_1^2 + 0.04811\,x_3 x_4 (14 + x_2) - 5 \le 0$$
$$g_5 = 0.125 - x_1 \le 0, \quad g_6 = \delta(x) - \delta_{max} \le 0, \quad g_7 = P - P_c(x) \le 0$$

where
$$\tau = \sqrt{(\tau')^2 + 2\tau'\tau''\,\frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\,x_1 x_2}, \quad \tau'' = \frac{MR}{J}$$
$$M = P\left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}$$
$$J = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}$$
$$\sigma = \frac{6PL}{x_4 x_3^2}, \quad \delta = \frac{4PL^3}{E x_3^3 x_4}, \quad P_c = \frac{4.013\,E\sqrt{x_3^2 x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)$$

with $P = 6000\ \mathrm{lb}$, $L = 14\ \mathrm{in}$, $\delta_{max} = 0.25\ \mathrm{in}$, $E = 30 \times 10^6\ \mathrm{psi}$, $G = 12 \times 10^6\ \mathrm{psi}$, $\tau_{max} = 13{,}600\ \mathrm{psi}$, $\sigma_{max} = 30{,}000\ \mathrm{psi}$, $0.1 \le x_1, x_4 \le 2$, and $0.1 \le x_2, x_3 \le 10$.

Figure 13. Welded beam design problem.

Table 9. Welded beam design problem results.

Algorithm | Best | Worst | Mean | Std.
EHO | 1.743440 | 2.48354662 | 2.00419997 | 0.16505118
Cultural-based EHO | 1.742669 | 2.29888005 | 1.9954664 | 0.1390882
A-EHO | 1.7444 | 2.57474660 | 2.22442996 | 0.19193990
Biased-EHO | 1.731664 | 1.88950666 | 1.80445660 | 0.04000228
CSA | 1.732000 | 2.448470092 | 1.888939935 | 0.151918815

3. Gear train design problem
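To illustrate how constrained designs such as the three-bar truss above can be handed to the EHO variants, the sketch below folds the stress constraints into a single cost via a static penalty; the penalty weight and the function name three_bar_truss_cost are assumptions of this example rather than the paper's implementation.

```python
import numpy as np

def three_bar_truss_cost(x, penalty=1e6, l=100.0, P=2.0, sigma=2.0):
    """Three-bar truss objective plus a static penalty for the three
    stress constraints (l, P, sigma follow the formulation above)."""
    x1, x2 = x
    volume = (2 * np.sqrt(2) * x1 + x2) * l
    g = [
        (np.sqrt(2) * x1 + x2) / (np.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma,
        x2 / (np.sqrt(2) * x1**2 + 2 * x1 * x2) * P - sigma,
        1.0 / (np.sqrt(2) * x2 + x1) * P - sigma,
    ]
    # Only violated constraints (g_i > 0) contribute to the penalty term.
    return volume + penalty * sum(max(0.0, gi) ** 2 for gi in g)
```

Any of the EHO variants sketched earlier can then minimize this penalized cost over the box $0 \le x_1, x_2 \le 1$.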
