SIMPLE POPULATION-BASED ALGORITHMS FOR SOLVING OPTIMIZATION PROBLEMS

Heuristic algorithms are simple yet powerful tools that are capable of yielding acceptable results in a reasonable execution time. Hence, they are extensively used by researchers for solving optimization problems nowadays. Owing to the computing power and hardware available today, a large number of dimensions and objectives can be considered and analyzed effectively. This paper proposes new population-based metaheuristic algorithms that are capable of combining different strategies. The new strategies help in fast convergence as well as in avoiding local optima. The proposed algorithms can be used as single-phase as well as two-phase algorithms with different combinations and tuning parameters. The "Best", "Mean" and "Standard Deviation" are computed over thirty trials in each case. The results are compared with many efficient optimization algorithms available in the literature. Sixty-one popular unconstrained benchmark problems with dimensions varying from two to one thousand and fifteen constrained real-world engineering problems are used for the analyses. The results show that the new algorithms perform better for several test cases. The suitability of the new algorithms for solving multi-objective optimization problems is also studied using five two-objective ZDT problems. Pure Diversity, Spacing, Spread and Hypervolume are the metrics used for the evaluation.


Introduction
Optimization aims to achieve the most favourable outcome under the prevailing conditions. It is also sometimes called "Mathematical Programming", which means the application of mathematical methods and principles to solve quantitative problems. For smaller and simpler problems, exact methods can be used effectively. However, if the problem is large and complex with more variables (dimensions), exact methods may not converge in a reasonable time. As a result, researchers started using heuristic and metaheuristic methods for solving optimization problems in important domains of Operational Research, Computer Science, Industrial Engineering and Artificial Intelligence [12]. These methods are capable of producing results with acceptable accuracy in a reasonable time. Most optimization problems use continuous functions with well-defined bounds [48].
Applications of Operations Research (OR) can be found in all areas of science, engineering and medicine. Operations Research is effectively used in developing different production models [31], estimating optimal values and forecasting future events. Chakraborty et al. [14] estimated the misfortunes of COVID-19 in China, Italy, and India in a given time frame in terms of goodness-of-fit statistics and support vector machine-based regression (SVR). Optimization techniques were effectively used by Das et al. in studying multi-objective location problems that include variable carbon emission in inventory management and transportation-p-facility location problems under a neutrosophic environment [15, 17].
Primarily, any optimization algorithm can be either a local search or a population-based method [11]. The search ability (exploitation) is good for local search methods, whereas their main drawback is the focus on local search rather than on global search (exploration). As a result, the possibility of getting stuck in a local optimum is high for this category of algorithms [2]. The local search method can be effective in the case of a smaller number of data points and finds applications in several areas of optimization [8].
Classical algorithms use the derivatives of the objective function to arrive at a solution and, generally, they are fast and efficient. Two modified conjugate gradient methods, MCB1 and MCB2, of Mehamdia et al. [32] proved to be effective and robust in minimising some unconstrained optimization problems, and each of these methods outperforms four well-known conjugate gradient methods. Direct-search, stochastic and population-based algorithms are employed where the derivatives are not available. These algorithms are sometimes referred to as "Black-Box Optimization Algorithms" [21]. Direct-search algorithms do not require any information about the gradient of the cost function [30]. Powell's method, the Hooke-Jeeves method and the Nelder-Mead simplex method are a few popular direct-search algorithms. On the other hand, stochastic optimization refers to a method for optimizing an objective function where randomness exists [22]. Stochastic algorithms include Simulated Annealing, Cross-Entropy and Evolution-Strategy algorithms. These stochastic algorithms initially generate random solutions which are evaluated for the cost function [35]. Since they are stochastic, several trials are required to reach the global optimum or come near it.
Population-based algorithms are also stochastic; they generate, evaluate and maintain a pool of candidate solutions to move towards the optimal solution. The candidate solutions are spread over the entire domain, which increases the probability of avoiding local minima/maxima. Many efficient algorithms like the Genetic Algorithm [26], Particle Swarm Optimization [29], Sine Cosine Algorithm [33], Arithmetic Optimization Algorithm [3] and Sine (B) algorithm [7] fall under this category. Adil and Lakhbab [4] modified the Bat Algorithm for solving single-objective global optimization problems and statistically analysed the results.
Optimization methods are widely used not only in the engineering domain but also in other areas like economics and finance, making use of exact methods, heuristics and fuzzy logic methods. Rahaman et al. [37] developed two Marxian economic production quantity models aiming at the minimization of the exploitation rate and the reduction of social surplus. Similarly, Ghorui et al. [24] used the PIFN (pentagonal intuitionistic fuzzy number) with the MCDM tools analytic hierarchy process (AHP) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) to rank Cloud Service Providers (CSPs). Using a fuzzy-modelled ε-constraint approach, Das et al. [18] simultaneously optimized three conflicting objective functions, namely, "minimizing total financial costs, maximizing customers' satisfaction levels, and ensuring sustainable and effective conveyances". For solving the case of a multi-modal transportation routing problem with stochastic transportation time, data-driven approaches were applied by Peng et al. [36] in multi-objective optimization.
Several science-concept-based optimization algorithms have been proposed over the years. Archimedes' optimization algorithm is a physics-based algorithm proposed by Hashim et al. [25]. Most of these algorithms are also used for solving real-world engineering problems (e.g. [3, 6, 7, 25, 49]). The case may be a single-objective or a multi-objective one.
A multi-objective optimization procedure, namely the global criterion method, was introduced by Das et al. [16] to (i) minimize the total financial costs along with the carbon emissions cost, (ii) maximize the customers' satisfaction level simultaneously, and (iii) maximize the sustainable effectiveness of conveyances in a green framework. Abdelghany et al. [1] proposed a two-stage variable neighbourhood search algorithm for the nurse rostering problem. 24 benchmark instances of Curtois and Qu were used for the analyses, and the test results revealed that their proposed algorithm can compete with a recent heuristic approach for most of the tested instances.
The quantum of computing power available today greatly influences the size and complexity of the problems that can be effectively solved. Optimization techniques and algorithms advance day by day in almost every engineering domain.

Cost function
An optimization problem typically contains three elements: the objective function (cost function) that has to be optimized, a cluster of decision variables that define the objective function and a set of constraints, which may or may not be present.
The objective function for a "finite-dimensional single-objective optimization" could be:

Minimize f(x), x = (x1, x2, ..., xn),
subject to g_j(x) ≤ 0 and h_k(x) = 0,

where g(x) denotes the non-equality (inequality) constraints, h(x) the equality constraints and n the number of dimensions.
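Constrained formulations of this kind are often handled in population-based codes by folding constraint violations into the cost. The following is a minimal sketch, assuming a static penalty approach; the function names and the penalty weight are illustrative, not from the paper:

```python
# Hedged sketch: evaluating a constrained objective with a static penalty.
# The penalty weight w and all names below are illustrative assumptions.

def penalized_cost(x, f, g_list, h_list, w=1e6):
    """Return f(x) plus a penalty for violated constraints.

    g_list: inequality constraints, feasible when g(x) <= 0.
    h_list: equality constraints, feasible when h(x) == 0.
    """
    penalty = sum(max(0.0, g(x)) ** 2 for g in g_list)
    penalty += sum(h(x) ** 2 for h in h_list)
    return f(x) + w * penalty

# Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1 (i.e. 1 - x0 - x1 <= 0).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
print(penalized_cost([0.5, 0.5], f, [g], []))  # feasible point -> 0.5
```

A feasible point incurs no penalty, while an infeasible one is pushed away by the large weighted violation term.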
In practical applications, the case may be a multi-objective or many-objective optimization with or without constraints.

Benchmark functions and real-world engineering problems used
For evaluating and comparing the performance of different algorithms, benchmarks with varying complexity and dimensions are available in the literature. They are generally classified as unimodal, multimodal, constrained and unconstrained benchmark functions. Also, several real-world engineering optimization problems have been proposed and solved by researchers using their proposed algorithms.
In this work, for analysing the proposed algorithms, forty-eight unconstrained single-objective benchmark functions [43, 45] are used with dimensions varying from two to thirty (Tab. 1). The optimal values, dimensions and search ranges are also presented in Table 1. The set is a mix of unimodal and multimodal functions with optimal values of zero, negative and positive numbers to allow a fair assessment. Examples of unimodal and multimodal functions are depicted in Figure 1.
For assessing their algorithm, Bayzidi et al. [10] made use of several design-related constrained real-world engineering problems, out of which a few are considered in this work (Tab. 2).
To further assess the suitability of the algorithms in solving multi-objective optimization problems, five two-objective problems (Tab. 3) proposed by Zitzler et al. [50] are used. The ZDT5 problem is omitted as it is a binary-encoded one.
For running the codes, the following parameters are considered:

Strategies used
The general approach used by many population-based optimization algorithms (e.g. [7, 38]) consists of the following steps:
Step 1: Initialization. Generate the initial random population (random candidate solutions).
Step 2: New Solutions. Using a well-designed logic, generate another solution set from the initial one.
Step 3: Bounding. Bring any out-of-bound solutions back within the search bounds.
Step 4: Selection. Compare each new solution with its counterpart and retain the better one.
Step 5: Iteration. Continue until all pairs are compared. This completes the iteration.
Step 6: Termination. Repeat till the required accuracy of the cost function is obtained or for a fixed number of iterations.
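The six steps above can be sketched as a generic loop; the perturbation logic in Step 2 is a placeholder, since the paper's own expressions are introduced later:

```python
import random

def population_search(cost, lb, ub, dim, pop_size=30, iters=500):
    """Generic population-based loop following Steps 1-6 above (illustrative)."""
    # Step 1: random initial population within [lb, ub].
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(iters):                      # Step 6: fixed iteration budget
        for i in range(pop_size):               # Step 5: visit every member
            # Step 2: a new candidate (placeholder perturbation logic).
            cand = [v + random.uniform(-1, 1) * v for v in pop[i]]
            # Step 3: bounding (corner bounding shown here).
            cand = [min(max(v, lb), ub) for v in cand]
            # Step 4: greedy pairwise selection.
            c = cost(cand)
            if c < costs[i]:
                pop[i], costs[i] = cand, c
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# Example: sphere function in 2 dimensions.
x, fx = population_search(lambda x: sum(v * v for v in x), -5.0, 5.0, 2)
```

With a typical budget (PS = 30, 500 iterations) this skeleton performs 15 000 function evaluations, matching the single-phase setting used later in the paper.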
Apart from this usual procedure, a few new strategies are applied and tried in this paper, which are summarized below:
Step 1: Initialization. The random-initialization process followed by most researchers is used here also.
Step 2: New Solutions. For generating the new solutions, this work considers seven indigenous expressions in the first phase, based on the concept of adding or subtracting a fraction of a solution S to the present solution (S may be the present solution X, the "Best" solution or the "Worst" solution). In the second phase, the expression used involves a tuning parameter t1 that changes from "t" to 0 as the iterations progress [tuning parameter, t = 1/1.5/2].
Step 3: Bounding.Instead of bringing the out-of-bound solutions to the nearest bounds, regenerate (Strategy: "RG") the new solution till it falls within the bounds.This strategy is used in the first phase only and in the second phase, the usual "corner bounding" is applied.
Step 4: Selection. The different selection techniques used by optimization algorithms include (mu + lambda), (mu, lambda) and greedy selections. In this work, (mu + lambda) is applied in the first phase and greedy selection in the second phase. Since different selection strategies have their own advantages, different strategies are used in different phases.
Steps 5 and 6 remain the same.
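The "RG" bounding of Step 3 and the two selection schemes of Step 4 can be sketched as follows; the helper names and the fallback after `max_tries` are assumptions, not the authors' exact implementation:

```python
import random

def generate_within_bounds(propose, lb, ub, max_tries=1000):
    """"RG" strategy sketch: re-draw a candidate until it lies inside [lb, ub]
    instead of clamping it to the nearest bound."""
    for _ in range(max_tries):
        cand = propose()
        if all(lb <= v <= ub for v in cand):
            return cand
    # Fallback (an assumption): corner-bound if regeneration keeps failing.
    return [min(max(v, lb), ub) for v in propose()]

def mu_plus_lambda(parents, offspring, cost, mu):
    """(mu + lambda) selection: parents and offspring compete in one pool."""
    return sorted(parents + offspring, key=cost)[:mu]

def greedy(parents, offspring, cost):
    """Greedy selection: an offspring replaces its own parent only if better."""
    return [o if cost(o) < cost(p) else p for p, o in zip(parents, offspring)]

# RG example: proposals centred at 4.9 often leave [-5, 5]; RG keeps valid ones.
cand = generate_within_bounds(
    lambda: [4.9 + random.uniform(-1, 1) for _ in range(3)], -5.0, 5.0)

# Scalar toy example of the two selection schemes with cost(x) = |x|.
print(mu_plus_lambda([3, -1], [2, -4], abs, 2))  # -> [-1, 2]
print(greedy([3, -1], [2, -4], abs))             # -> [2, -1]
```

The toy example shows the difference: (mu + lambda) keeps the globally best individuals from the merged pool, whereas greedy selection only compares each parent-offspring pair.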
During the optimization process, three more strategies are applied whenever required as additional features. They are briefed below:
(i) Opposite Number (Strategy: "ON"): the concept of "Opposition-based learning" was proposed by Tizhoosh [44], wherein the "opposite number" of a variable x in [LB, UB] is defined as LB + UB − x. This concept was applied to the GA and the results analyzed. Since the initial solutions are highly stochastic, using the "anti-chromosomes" generated by taking the "opposite numbers" of a few weakest solutions improved the results during early iterations. The same concept is applied here in the first phase, where 5% (ON: 5%) or 10% (ON: 10%) of the worst solutions are converted to their "opposite numbers".
(ii) Symmetric Span (Strategy: "SS"): symmetrical bounds can be used while generating the solutions and, before estimating the cost, the variables can be shifted back to restore the original bounds. This strategy is used and the results are evaluated for a few problems.
(iii) Tuning Parameter (Strategy: "TP"): a tuning parameter whose value reduces from "t to 0" as the iteration progresses is used in the second phase while generating a new improved solution (Fig. 2, Pseudocode).
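A minimal sketch of the "ON" strategy, assuming the worst solutions are ranked by cost; the helper names are illustrative:

```python
def opposite(x, lb, ub):
    """Opposition-based learning [44]: the opposite number of each variable."""
    return [lb + ub - v for v in x]

def apply_on_strategy(pop, costs, lb, ub, fraction=0.10):
    """"ON" strategy sketch: replace the worst `fraction` of solutions with
    their opposite numbers (illustrative helper, not the authors' exact code)."""
    k = max(1, int(round(fraction * len(pop))))
    worst = sorted(range(len(pop)), key=costs.__getitem__, reverse=True)[:k]
    for i in worst:
        pop[i] = opposite(pop[i], lb, ub)
    return pop

print(opposite([2.0, -3.0], -5.0, 5.0))  # -> [-2.0, 3.0]
```

With symmetric bounds (LB = −UB), the opposite number reduces to a sign flip, which is what makes the "SS" strategy convenient to combine with "ON".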
The strategies combined are declared in every part of the analysis.

Single-phase algorithms
The basic concept considered is that, to obtain a new approximate solution, a fraction of the present solution/"Best" solution/"Worst" solution is added to or subtracted from the present solution.
X_{i+1} = X_i ± fraction × S (S may be X_i, the "Best" or the "Worst" solution), where X_{i+1} denotes the new variables obtained during the (i + 1)-th iteration. Seven methods of computing the fraction are considered and named ABN-1 to ABN-7, as presented in Table 4. They are designed such that the resulting fraction takes both positive and negative values depending on the random number generated.
Here, "rand" is any random value between 0 and 1, and "Best" and "Worst" are the values of the variables corresponding to the "minimum" and "maximum" cost function values obtained during the previous iteration.
Only one random number is generated during the process. However, for the same random number, the resultant fraction value differs within [−1, +1] in each case of ABN-1 to ABN-7. For ABN-1, ABN-3 and ABN-5, the result is straightforward: the resultant term varies as [−1 to +1] × (Solution), where the "Solution" is either X_i, the "Best" or the "Worst" solution. In the case of ABN-2, ABN-4 and ABN-6, the resultant term varies as [0 to 1 to 0 to −1 to 0] × (Solution) and, for ABN-7, as [1 to 0 to −1 to 0 to 1] × (Solution), as shown in Table 5. That is, the new solution differs in each case even for the same random number value, resulting in varying exploitation.
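Of the seven expressions, only ABN-2, Sin(2 * pi * rand), is stated explicitly in the text; the sketch below pairs it with hypothetical stand-ins matching the described ranges of the other two groups:

```python
import math, random

# Only the ABN-2 expression, sin(2*pi*rand), is stated explicitly in the text;
# "ABN-1-like" and "ABN-7-like" are hypothetical stand-ins matching the ranges
# described above, not the actual Table 4 entries.
fractions = {
    "ABN-1-like": lambda r: 2.0 * r - 1.0,              # straight [-1, +1]
    "ABN-2":      lambda r: math.sin(2 * math.pi * r),  # 0 -> 1 -> 0 -> -1 -> 0
    "ABN-7-like": lambda r: math.cos(2 * math.pi * r),  # 1 -> 0 -> -1 -> 0 -> 1
}

r = random.random()  # a single random number per update, as described above
for name, f in fractions.items():
    frac = f(r)      # same r, yet a different resulting fraction each time
    assert -1.0 <= frac <= 1.0
```

The point illustrated is the one made in the text: a single draw of "rand" maps to different fractions in [−1, +1] under each expression, producing varying exploitation behaviour.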
Equation (6) in combination with Table 4 can be effectively used as stand-alone algorithms that employ any of the above expressions for analyses. The new algorithms are compared with one of the recent better-performing algorithms, the "Arithmetic Optimization Algorithm (AOA)" [3].

Two-phase algorithm, T-Cos
The single-phase algorithms discussed in the earlier section can be converted to two-phase algorithms to improve the solutions further. In the case of a two-phase algorithm, the number of function evaluations is two per iteration. Any of the expressions listed in Table 4 could be used for generating the solutions in the first phase. However, to keep the analysis simpler, only the "ABN-2" expression is used in the two-phase algorithm.
In the second phase, the expression used for improving the solutions obtained during the first phase is X + Sin(2 * pi * rand) * Step. The pseudo-code and flow chart depicting the two-phase algorithm are presented in Figures 2 and 3. Lines 6 to 12 represent the first phase and lines 5 and 13 to 25 represent the second phase of the algorithm in the pseudo-code (Fig. 2). It is important to note that, in this work, results are presented only for the first phase and for the combined two-phase algorithm. Analyses have not been carried out for the second phase separately.
Figure 3 represents the flow chart for the two-phase algorithm. Whenever the tuning parameter "t" is not mentioned, it is to be taken as "t = 1".
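One possible reading of the "TP" strategy is a linear decay of the tuning parameter from t down to 0 over the run; the paper's exact schedule is in the pseudo-code, so linear decay here is an assumption:

```python
def tuning_parameter(t, iteration, max_iter):
    """Sketch of the "TP" strategy: a value decaying from t to 0 over the run.
    Linear decay is an assumption; the paper defines the exact schedule in
    its pseudo-code (Fig. 2)."""
    return t * (1.0 - iteration / max_iter)

# t = 1 is the default; t = 1.5 and t = 2 are the other settings studied.
print(tuning_parameter(2.0, 0, 500))    # -> 2.0 at the start
print(tuning_parameter(2.0, 500, 500))  # -> 0.0 at the end
```

A decaying multiplier of this kind shifts the search from exploration (large steps early) towards exploitation (small steps late).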

Motivation and novelties
Population-based algorithms are efficient in solving different variations of optimization problems. Researchers apply specific concepts and strategies to evolve new algorithms or modify popular algorithms available in the literature to further improve their efficacies. A few researchers have developed hybrid algorithms which prove to be efficient in many cases. It is widely accepted that "no single algorithm is capable of solving all optimization problems", according to the "No Free Lunch Theorem" [47]. That is, the scope is always open for new entrants in this domain. The "solution generating expressions" used in phase 1 (ABN-1 to ABN-7) can be used as stand-alone single-phase algorithms as well as combined with other expressions to form different two-phase algorithms. In experiment 1 of this work, these single-phase versions are demonstrated on single-objective unconstrained, constrained real-world and two-objective problems. In experiment 2 of this paper, one expression, "Sin (2 * pi * rand) * X" (ABN-2), which is a member of the family of seven, is combined with another indigenous expression, "X + Sin (2 * pi * rand) * Step", proposed by the author to form a two-phase algorithm, and its performance is analysed using the same sets of benchmarks, keeping the number of function evaluations the same. It is observed that the two-phase algorithm performs better than the single-phase one.
During the process, different sets of strategies are combined which are discussed briefly earlier which can further change the final solution quality.That is, several options are available for a researcher to solve an optimization problem.
The strategies discussed earlier are reproduced here for reference:
RG: Regeneration of the approximate solution in the first phase.
RCount: Number of regenerations effected in thirty trials.
ON: Fraction of worst solutions converted to their "Opposite Numbers", 5% or 10%, in the first phase.
SS: Symmetric Span, representing LB and UB symmetrical about zero in a benchmark function.
TP: Tuning Parameter whose value reduces from "t to 0" as the iteration progresses (t = 1/1.5/2.0) in the second phase.
A slightly modified version of the two-phase algorithm is capable of solving facility location problems, as preliminary analyses show.

Single objective unconstrained benchmark
For analyzing the performances of the different single-phase algorithms, some of the popular benchmark problems available in the literature are used (Tabs. 1-3). ABN-1 to ABN-6 are used for the analyses. The trigonometric algorithm Sine (AB) [6] is considered initially to evaluate the impact of the new strategy. The Sine (AB) algorithm is modified using the new strategy [RG], which is represented as "Sine (AB)-M1" hereafter.
Uniformly, a population size (PS) of 30 and 500 iterations (IT) are considered for the evaluation. 30 trials are conducted and the mean values are compared with the original Sine (AB) algorithm. Also, 10% [ON: 10%] of the population (i.e. 3 solutions), chosen randomly, is symmetrically modified with "opposite solutions" and the cost function values are computed [Sine (AB)-M2]. The results are compared with the results of Sine (AB) for the number of dimensions and search range considered by the author in the original work (Tab. 6). Better results are marked in bold throughout the paper.
When the new strategy [RG] is applied, Sine (AB)-M1 outperforms the original Sine (AB) algorithm in all 40 benchmark functions. When 10% of the population [RG+10% ON] is symmetrically modified [Sine (AB)-M2], the results improve further. In 34 of the 40 problems, it reports better results, compared with 24 for Sine (AB)-M1. In 18 cases, both algorithms report the best results. In 6 cases, the performance of Sine (AB)-M1 is better than Sine (AB)-M2, whereas Sine (AB)-M2 betters Sine (AB)-M1 in 16 instances.
This proves the efficacy of the new strategies. A total of 48 benchmark instances (Tab. 1) are used for the analyses using ABN-1 to ABN-6, considering them as single-phase algorithms, and the summary of results is presented in Table 7, followed by the complete results in Table 8.
Here also, the population size is 30 for all algorithms, which are run for 500 iterations. The "mean" and "standard deviation" are computed for 30 trials.
The best "Mean" results are obtained for ABN-5 which outperforms others in 19 problems closely followed by ABN-1 which is a better performer in 18 cases and ABN-2 performs well in 17 cases.
ABN-4 and ABN-6 are the poor performers that account for better results in 6 problems each and hence, complete results are not presented in Table 8.
AOA and ABN-3 account for the best results in 16 problems each.

Single objective constrained real-world engineering problems
To assess the performances of the single-phase algorithms in real-world constrained problems, four problems are considered, as listed in Table 9. For this analysis, ABN-7 is also considered. Here also, the performance of ABN-5 is better than the others, reporting the best results in three of the four cases. ABN-1 reports better results in two problems.
The results are also compared with a few other algorithms (Tab. 10), including the Arithmetic Optimization Algorithm (AOA), for the same population size (30), iterations (500) and number of trials (30). The other results are taken from the AOA paper.
The design and optimal values for the welded beam and pressure vessel vary from those presented by Bayzidi et al. [10] in their paper.It is shown that the performances of the new algorithms are comparable with other popular and efficient algorithms.ABN-5 outperforms all other algorithms in the "Welded Beam" problem and performs reasonably well in the "Pressure Vessel" case.In the cases of "Helical Spring" and "3 Bar Truss" problems, ABN-1 with cost values of 0.012686 and 260.90 is ahead of many algorithms and very close to the best performers AOA and TSA respectively.

Two objective unconstrained ZDT problems
The suitability of using these single-phase algorithms for two-objective optimization is evaluated in this section. The NSGA-III programs are used for the analysis. Instead of cross-over and mutation, the expressions are used for building the population. To verify the working of the modified codes, two single-objective benchmarks, one unconstrained (Hartmann4) and the other constrained (three-bar truss), are tried using ABN-1. The second objective function is taken to be the same as the first; that is, f2(x) = f1(x), and the codes are run for two-objective optimization. A single run is conducted with a population size of 30 and 500 iterations.
For the Hartmann4 function (dimensions: 4, bounds: [0, 1], optimal: −3.1345), the algorithm reported a cost value of −3.1345, which is the optimal one. Similarly, for the three-bar truss problem (dimensions: 2, bounds: [0, 1], number of constraints: 3), 263.897 is the reported cost, which is better than many recent algorithms. Though ABN-5 is a better performer, it is not considered for two-objective optimization since it has to be slightly modified for each objective. The other algorithms, ABN-1, ABN-2 and ABN-7, can be used as-is for any number of objectives. To have more options, 5% and 10% of the population are modified [Strategy: ON] in each algorithm using the expression and the results recorded. This changes those particular elements to their mirror images.
Five two-objective problems (Tab. 3) proposed by Zitzler et al. [50] are considered. ZDT5 is omitted here as it is a binary-encoded problem.

Metrics used
For assessing the performance of any algorithm, convergence, uniform distribution and extensiveness need to be measured. This work considers a few such metrics, which are presented along with their significance in Table 11.
Hypervolume estimates the closeness of the estimated data points to the "Pareto Front" and is an indication of the convergence and distribution of a non-dominated set. During the evaluation process, the true Pareto Front (PF) is not known. Hence, the hypervolume metric considered here estimates the hypervolume between the estimated approximate Pareto front and a reference point [27]. The reference point here is the maximum values of the cost functions in the obtained front. Since exact estimation is time-consuming, the Monte Carlo approach is used to estimate the hypervolume [13] by calculating the percentage of a set of random points in the performance space that are dominated by the Pareto front. 1000 uniformly distributed random points are chosen within the bounded hyper-cuboid for this evaluation.
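The Monte Carlo estimate described above can be sketched for two minimization objectives as follows; the sampling box and helper names are illustrative assumptions:

```python
import random

def mc_hypervolume(front, ref, n_samples=1000):
    """Monte Carlo hypervolume sketch for two minimization objectives [13]:
    the fraction of random points inside the box spanned by the front's ideal
    corner and the reference point that are dominated by the front, scaled by
    the box area (illustrative code, not the paper's implementation)."""
    lo0 = min(p[0] for p in front)
    lo1 = min(p[1] for p in front)
    area = (ref[0] - lo0) * (ref[1] - lo1)
    hits = 0
    for _ in range(n_samples):
        s = (random.uniform(lo0, ref[0]), random.uniform(lo1, ref[1]))
        # s is dominated if some front point is <= s in both objectives.
        if any(p[0] <= s[0] and p[1] <= s[1] for p in front):
            hits += 1
    return area * hits / n_samples

# Example: a three-point front with the reference point at (1, 1).
hv = mc_hypervolume([(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)], (1.0, 1.0))
```

With 1000 samples, as used in the paper, the estimate carries a small sampling error, which is the accepted trade-off against exact but slower hypervolume computation.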

Table 11. Metrics used and their significance.

Metric | Reference | Significance | Remark
Pure Diversity | Wang et al. [46] | Diversity of solutions | Larger the better
Spacing | Jariyatantiwait and Yen [27]; Kaew [28] | Degree of uniform distribution | Minimum is better
Maximum Spread | Zitzler et al. [50]; Jariyatantiwait and Yen [27] | Separation of extreme solutions | Larger the better
Hypervolume | Jariyatantiwait and Yen [27]; Cao [13] | Convergence and distribution | Larger the better

Since only two-objective optimization is considered in this work, the hypervolume refers to the rectangular area bounded by the reference point and the points of the obtained front. Spacing (S) is the degree of uniform distribution [27, 28, 39]. It is a metric measuring how evenly the obtained non-dominated solutions are distributed; "S" is an indication of the "degree of uniform distribution" of the solutions on the obtained non-dominated front.
S = sqrt( (1/(n − 1)) * Σ_{i=1..n} (d̄ − d_i)² ),

where "d_i" is the Euclidean distance between individual "i" and the nearest solution in the obtained non-dominated front, "d̄" is the mean of these Euclidean distances and "n" is the number of solutions in the obtained front. If the spacing "S" is zero, it indicates that all solutions in the obtained front are equally spaced. Maximum Spread (MS) [27, 28, 39] measures the length of the diagonal of the hyperbox formed by the extreme solutions observed in the non-dominated sets. However, the distribution of solutions cannot be assessed by it. A normalized version of MS could be defined as

MS = sqrt( (1/M) * Σ_{k=1..M} [ (max_i f_{i,k} − min_i f_{i,k}) / (f_k^max − f_k^min) ]² ),

where M is the number of objective functions, f_k^max and f_k^min are the maximum and minimum values of objective "k" in the selected Pareto set and f_{i,k} is the value of the "k"-th objective function of the "i"-th member of the non-dominated solutions set. Diversity is a measure of the uniformity of the spread of the obtained solutions. "Pure Diversity (PD)" was proposed by Wang et al. [46] and demonstrated using the DTLZ and WFG problems. Though originally used for many-objective optimization problems, it is used here for two-objective problems for analysis purposes.
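The spacing metric S can be computed directly from its definition above; a minimal sketch for a two-objective front:

```python
import math

def spacing(front):
    """Spacing S [27, 28]: standard-deviation-like measure of nearest-neighbour
    distances on the obtained non-dominated front; S = 0 means the solutions
    are equally spaced (sketch, assuming at least two points in the front)."""
    n = len(front)
    d = []
    for i, p in enumerate(front):
        # Euclidean distance to the nearest other solution in the front.
        d.append(min(math.dist(p, q) for j, q in enumerate(front) if j != i))
    d_bar = sum(d) / n
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))

# Equally spaced points on a line give a spacing of exactly 0.
front = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(spacing(front))  # -> 0.0
```

Because every nearest-neighbour distance in the example equals sqrt(2), the deviations from the mean distance vanish and S is zero, matching the interpretation given above.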
In addition to the above metrics; the "Minimum", "Mean", "Maximum" and "Standard Deviation" of the obtained final Pareto Front values after a single run are computed.This helps in analysing the capability of the algorithms to yield uniform results within the optimal cost function values.

Results and discussion
The results of the analyses are presented in Tables 12-16 and obtained Pareto Fronts in Figure 4 (later in this paper).
The algorithms are compared with the "Trigonometric Algorithms" of Baskar [6]. The codes are run for the algorithms without any modification and with 5% and 10% [Strategy: ON] population modifications. The population size (PS) is taken as 80 and the number of iterations is limited to 100. Only one run is conducted for all 5 problems. For the objectives, the minimum values, values nearer to 0.5 and values less than and nearer to 1 are considered the better values. For the second objective of ZDT3, a mean value closer to zero is the better one. ABN-1 (ON: 10%) performs better for the ZDT1 problem, whereas ABN-1 (ON: 5%) performs well in the case of ZDT2. The Trigonometric Algorithms Sine (AB) and Cosine (AB) and ABN-2 (ON: 10%) report values of the second cost function above 1 (shown as underlined values in Tab. 13) and are hence ignored for this problem.
For the ZDT3 problem, ABN-7 is ahead of all other algorithms. In this case, the maximum value of the second objective exceeds 1 for the Cosine (AB) algorithm, which is hence ignored. ABN-1 performs well for the ZDT6 instance. Finally, for the ZDT4 problem, ABN-7 (ON: 5%) performs extremely well compared to the other algorithms. In the cases of the ZDT6 and ZDT4 problems, the trigonometric algorithms report higher costs for the second objective and are hence ignored.
The obtained Pareto Front is a concentrated one in ZDT4.This is evident from the "Mean" values of the solutions for both objectives.For objective-1, the best "Mean" value is reported by ABN-2 (ON: 10%) which is just 0.13009.Similarly, for the second objective; ABN-7 reports a "Mean" value of 0.7522.Both the "Mean" values are far away from 0.5.
The results demonstrate the better performances of the proposed single-phase algorithms against the two "Trigonometric Algorithms" even for a smaller number of iterations.However, as mentioned earlier, this is a preliminary evaluation only.
To assess the real power, more analyses are required using more problems, and more objectives and algorithms.

Experiment 2: Impact of strategies -two-phase algorithm, T-Cos
In this section, the single-phase algorithm is converted to a two-phase algorithm termed T-Cos.In the first phase, only ABN-2 is considered as shown in the pseudo-code (Fig. 2).
Other expressions could also be used in the first phase.
In the second phase, an indigenous expression is devised to improve the solution which is also presented in the pseudo-code.

Single objective unconstrained benchmark
In this section, the results of the same set of forty-eight single-objective unconstrained benchmark problems are analysed using the two-phase algorithm.In line with experiment 1, the computing parameters and environment are kept the same and the results are presented in Table 17.
Only ABN-2 is considered in the first phase throughout this experiment 2. The results of ABN-2: single-phase with (RG+ON: 10%), two-phase with (RG+SS+ON: 10%) and, two-phase without any strategy applied are compared.
The "Rcount" (number of regenerations in 30 trials) is also reported in the two-phase with "RG" case.
The simulation shows that the two-phase algorithm with (RG+SS+ON: 10%) is the better performer, accounting for 34 best "Mean" results and the minimum "Standard Deviation" in 30 instances, followed by the two-phase algorithm without any strategy with 27 and 24 better results respectively. That is, in both cases, they perform better than the single-phase algorithm. The "Rcount" value varies from a minimum of 547 for a two-variable case (Three Hump Camel) up to a whopping 3 037 298 (Vincent) for a thirty-variable benchmark.
The "SS" strategy is applied for problem numbers 5, 18, 19, 20, 22, 26 and 37 while running single-phase and two-phase with strategies algorithms.

Single objective constrained real-world engineering problems and sensitivity analyses
The same set of four real-world problems used in experiment 1 is analyzed here also using the two-phase algorithm embedded with a few strategies. In one study (Tab. 18), the population size is varied in steps of 5, 10, 20 and 30, and the number of iterations is increased to 1500, 750, 375 and 250 respectively to keep the number of function evaluations the same.
"Best" results are reported in 3 of the 4 cases when the PS is taken as 5, and the fourth best result is credited to PS 20. However, when the "Mean" and "Standard Deviation" values are considered, the case is different. A PS of 30 produces better "Mean" and "SD" values in two instances each. PS 5 and PS 10 report better "Mean" values in one case each, and PS 10 accounts for better "SD" in the two remaining problems. Though no clear conclusion can be drawn from this, a smaller PS may yield better "Best" results in most cases. To summarize, PS 5 and PS 30 performed well in four of the 12 metrics each ("Best", "Mean" and "SD" for 4 problems, totalling 12), followed by 3 better performances by PS 10.
In another trial (Tab. 19), to study the impact of the tuning parameter, three cases of "t = 1/1.5/2" are considered and simulated. When t = 1, the "Best" results are obtained in 3 problems. However, when all 12 metrics are considered, "t = 2.0" accounts for better results in 7 cases, followed by "t = 1" in five. "t = 1.5" could not report any better results for this set of problems.
Table 20 summarises and shows that the two-phase algorithm performs better than its single-phase counterpart and accounts for the better "Best" values in all four problem instances.In all four cases, the population size, PS = 5.

Two objective unconstrained ZDT problems
The results of the ZDT two-objective problems analyzed using the proposed two-phase algorithm are presented in Table 21. The results of the single-phase ABN-2 algorithm (RG: 5%) are reproduced in the same table. Similar to experiment 1, a single run is conducted and the same metrics are used.
For the "Pure Diversity", "Hypervolume" and "Spread" metrics, the two-phase ABN-2 algorithm outperforms the corresponding single-phase algorithm for the same number of function evaluations. Only for "Spacing" does the single-phase algorithm perform moderately better, producing better results on the ZDT3 and ZDT6 problems.
When the objective function cost values are considered, the two-phase algorithm is ahead in 4 of the 5 cases in both objectives when the "Mean" values, a measure of central tendency, are compared. The single-phase algorithm is better in "Objective 1" of ZDT3 and "Objective 2" of ZDT1. In the case of "Standard Deviation", the two-phase algorithm reports larger values (except for "Objective 1" of ZDT2), confirming that the data points are more spread out in the obtained Pareto fronts.
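As an illustration of two of the metrics used, common textbook definitions of "Spacing" (Schott's metric) and the 2-objective "Hypervolume" can be computed as below. The paper does not reproduce its metric formulas in this section, so these exact definitions are an assumption:

```python
import numpy as np

def spacing(front):
    """Schott's Spacing: standard deviation of the nearest-neighbour
    L1 distances; lower means a more evenly spaced front."""
    front = np.asarray(front, dtype=float)
    d = np.abs(front[:, None, :] - front[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)               # ignore self-distances
    nearest = d.min(axis=1)
    return float(np.sqrt(((nearest - nearest.mean()) ** 2).sum()
                         / (len(front) - 1)))

def hypervolume_2d(front, ref):
    """Area dominated by a non-dominated 2-objective minimisation front,
    bounded by the reference point `ref`; higher is better."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(map(tuple, front)):  # ascending in f1, descending in f2
        hv += (ref[0] - f1) * (prev_f2 - f2)  # add the rectangle this point adds
        prev_f2 = f2
    return hv
```

A perfectly uniform front has a Spacing of zero, which is why the two versions of the algorithm can be ranked directly on this value.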
The obtained Pareto fronts are shown in Figure 4 for both the single-phase and two-phase ABN-2 algorithms. These can be correlated with the "Performance Metrics" values presented in Table 21.
The cost values of both objectives vary from zero to one, except for the second objective of ZDT3. Hence, the mean cost values should be close to 0.5 and, for the second objective of ZDT3, closer to zero. The best "Mean" values reported for the first objective are 0.49105 and 0.56837, obtained by the single-phase version for ZDT3 and ZDT6 respectively.
For the second objective, 0.49185 is reported for ZDT1 by the single-phase version and 0.54032 for ZDT6 by the two-phase version.
The obtained Pareto fronts show clear offsets for ZDT4 (Fig. 4). The worst "Mean" values of 0.088376 and 0.84716 are reported for the two objectives by the single-phase algorithm, both far from 0.5. The corresponding values are 0.20128 and 0.71336 for the two-phase algorithm, which is significantly better. For ZDT2, the "Mean" cost values reported for the two objectives are 0.26515 and 0.85231 by the single-phase version and 0.69012 and 0.451 by the two-phase version. For this benchmark also, the deviations from 0.5 are significant but comparatively smaller for the two-phase algorithm. However, as stated earlier, this is a preliminary evaluation carried out with PS = 80 and FEs = 8000 for single trials in each case; more trials and higher FEs are required before drawing firm conclusions.

Comparison with a few popular algorithms
In this section, the performance of the new two-phase algorithm, T-Cos, is analysed and compared with a few popular optimization algorithms available in the literature.
The test bench consists of two sets, unconstrained and constrained. Thirteen single-objective, unconstrained scalable problems, each of dimensions 30, 100, 500 and 1000, are taken from the AOA paper (F1 to F13). The reported values of T-Cos, the strategies applied and the best results extracted from the AOA paper [3] are presented in Tables 22 and 23. The performance of T-Cos is very good in all dimensions: 30, 100, 500 and 1000. T-Cos outperforms the other algorithms in 6 problems of dimension 30 and in 8 cases each of dimensions 100, 500 and 1000 (Tab. 23). AOA is the next best performer, reporting better results in 6, 3, 3 and 3 problems respectively in each dimension. T-Cos reports the best "Mean" results in 30 cases, followed by AOA in 15 cases. The Grey Wolf Optimizer (GWO) [34], Genetic Algorithm (GA) [26], Firefly Algorithm (FA) [9] and Cuckoo Search (CS) Algorithm [23] performed well in 2 problems each, whereas Biogeography-Based Optimization (BBO) [40] yields a better result in only one case.
The second set consists of single-objective, real-world constrained engineering problems. The optimal values, number of dimensions and constraints are listed in Table 2; more details can be found in [24]. Several popular algorithms such as Atomic Orbital Search (AOS) [5], Chaos Game Optimization (CGO) [42], Cuckoo Search (CS) Algorithm [23], Genetic Algorithm (GA) [26] and Firefly Algorithm (FA) [9] were considered for the comparison by the authors.
In the first simulation (Tab. 24), the two-phase algorithm T-Cos is run for different iteration counts and tuning parameter values, keeping the population size at 5 for all problems (RG + SS + TP, with the tuning parameter set to 1, 1.5 and 2.0). The number of function evaluations (FEs) varies for each problem. The "Best" results and corresponding "Rcount" values are provided for each problem instance, and the "Best" values are reproduced in Table 25 for comparison.
For the "Stepped Cantilever (Continuous)" problem, the "Rcount" is the highest at 1.8129e+07 regenerations, and the minimum is 22 663 for the "Reinforced Concrete Beam".
In the second simulation, only the tuning parameter (set to 1) is used, without any other strategy, while running the two-phase algorithm. The PS is again taken as 5; the "Best" values are presented in Table 25 and the complete results in Table 26. The results of the first 9 problems are taken from [10] for a few well-known algorithms and compared with the results obtained in this study. Himmelblau's Function [19], Tension/Compression Spring (Discrete) [20] and Cantilever Stepped Beam (Continuous) [41] are three more benchmarks considered in this analysis.
In five instances (Tab. 25), the T-Cos algorithm with the tuning parameter set to 1 reports the "Best" values, followed by the other version (RG + SS + TP, tuning parameter 1/1.5/2.0) and SNS with four cases each. In several cases, the results of T-Cos are very close to the best results reported by other algorithms.

Limitations, future work and scope
The proposed algorithms can be used as single-phase as well as two-phase algorithms for solving different optimization problems, including real-world design problems. For any algorithm, solving real-world problems is important, as this demonstrates its real capability; this paper addressed that requirement through several experiments. The main limitation of these algorithms is that regenerating solutions that fall outside the bounds increases the execution time significantly. This paper applied several strategies for solving the optimization problems, and the analyses were carried out by combining two or more of them. Consequently, the impact of each individual strategy could not be assessed, and further investigations are required before clear conclusions can be drawn. Similarly, more benchmarks in each category need to be analysed and compared with other recent, better-performing algorithms.

Conclusion
This paper proposes a set of simple population-based algorithms for solving single-objective, multi-objective, unconstrained and constrained problem instances. They are simple expressions representing a "fraction of an approximate solution" that is added to the original approximate solution. Initially, they are used as single-phase algorithms for solving any kind of optimization problem under discussion. In the later stage of the analysis, one of the seven expressions (ABN-2) is combined with another expression to form a two-phase algorithm, whose capability is demonstrated with the same benchmark datasets; the results show significant improvements. The strategies used here are: (i) regenerating new approximate solutions if they do not fall within the bounds, without corner bounding; (ii) using a (mu + lambda) selection strategy in the first phase and greedy selection in the second phase; (iii) using "opposite" solutions for a fraction of the worst solutions in the first phase; (iv) applying "symmetrical bounds" wherever required; and (v) using a "Tuning Parameter" if required. Sixty-one single-objective unconstrained benchmarks, fifteen single-objective constrained real-world engineering problems and five two-objective unconstrained ZDT problems are used for analysing both the single-phase and two-phase heuristic algorithms, and the obtained results are encouraging. The expressions used in both phases are originally proposed by the author; however, the strategies can be readily adopted by any heuristic.
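Strategy (iii), using "opposite" solutions for a fraction of the worst solutions, is commonly realised with opposition-based learning, where a candidate is reflected across the centre of the search bounds. A minimal sketch under that assumption (the 10% default mirrors the "ON: 10%" setting used earlier):

```python
import numpy as np

def oppose_worst(population, costs, lb, ub, fraction=0.10):
    """Replace the worst `fraction` of the population (highest cost, for
    minimisation) with their opposite solutions, x_opp = lb + ub - x."""
    n_worst = max(1, int(round(fraction * len(population))))
    worst = np.argsort(costs)[-n_worst:]   # indices of the worst members
    population[worst] = lb + ub - population[worst]
    return population
```

Reflecting only the worst members injects diversity far from regions that have proven unpromising, while leaving the better part of the population untouched for the selection step.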

Figure 1 .
Figure 1.Examples of single objective, unconstrained unimodal and multimodal functions.

Figure 2 .
Figure 2. Pseudo code of the proposed two-phase algorithm (T-Cos).

Figure 3 .
Figure 3. Flow chart of the proposed two-phase algorithm, T-Cos.

Figure 4 .
Figure 4. Obtained Pareto Fronts of the ZDT two-objective problems.

Table 3 .
Set 3: Two-objective unconstrained ZDT problems. Selection: compare the first random solution with its corresponding new solution based on their cost function values; retain the better solution and move to the next solution.

Table 5 .
Value of the fractions in different algorithms.

Table 7 .
Summary of performance of algorithms.

Table 23 .
Best "Mean" values by different algorithms.
Notes. Two function evaluations per iteration.