
[Workshop scheduling] Solving the workshop scheduling problem with a MATLAB particle swarm optimization algorithm [including MATLAB source code, issue 013]

2021-08-25 07:42:57 Buddha anger lotus

I. Introduction to workshop scheduling

1 Definition of job shop scheduling
Job shop scheduling means arranging the processing order in the shop according to the requirements of product manufacturing, so as to make rational use of manufacturing resources and improve the economic benefit of the enterprise. Mathematically, the job shop scheduling problem can be described as follows: n parts are to be machined on m machines. The constraints to be satisfied include that each operation of each part uses each machine at most once, and the operations of each part are processed in a fixed order.

2 Traditional job shop scheduling
An example of traditional job shop scheduling:
(Figure: data table of the traditional job shop example — workpieces, operations, assigned machines, and processing times)
There are several workpieces, each workpiece has several operations, and there are multiple processing machines, but each operation can only be processed on one specified machine. In the example corresponding to the table above, there are two workpieces; workpiece J1 has three operations, and operation O11 can only be processed on M3, with a processing time of 5 hours.
The constraint is that, for a given workpiece, the relative order of its operations cannot be changed: O11 -> O12 -> O13. At any moment, each workpiece can be processed on only one machine, and each machine can process only one workpiece.
The task of scheduling is to arrange the processing order of the operations. Once the operation order is determined, the processing machines are also determined, because each operation has only one available machine.
The objective of scheduling is to minimize the total completion time (other objectives are also possible). For example, once the processing order O21 -> O22 -> O11 -> O23 -> O12 -> O13 is determined, we can calculate the total processing time according to the machine constraints:
M2 processes O21 for 6 hours, so the elapsed time of workpiece J2 is 6 hours.
M1 processes O22 for 9 hours, so the elapsed time of workpiece J2 is 6 + 9 = 15 hours.
M3 processes O11 for 5 hours, so the elapsed time of workpiece J1 is 5 hours.
M4 processes O23 for 7 hours, so the elapsed time of workpiece J2 is 15 + 7 = 22 hours.
M1 processes O12 for 11 hours, but O12 has to wait until M1 finishes O22 at hour 15, so the elapsed time of workpiece J1 is max(5, 15) + 11 = 26 hours.
M5 processes O13 for 8 hours, so the elapsed time of workpiece J1 is 26 + 8 = 34 hours.
The total completion time (makespan) is max(22, 34) = 34 hours.
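The calculation above can be reproduced with a few lines of MATLAB. This is a minimal sketch assuming the durations and machine assignments listed in the example: it walks through the operation sequence and starts each operation as soon as both its job and its machine are free.

ops     = {'O21','O22','O11','O23','O12','O13'};  % fixed processing order
job     = [2 2 1 2 1 1];                          % job index of each operation
machine = [2 1 3 4 1 5];                          % machine assigned to each operation
dur     = [6 9 5 7 11 8];                         % processing time in hours
jobReady = zeros(1,2);    % earliest time each job can start its next operation
machFree = zeros(1,5);    % earliest time each machine becomes free
for k = 1:numel(ops)
    s = max(jobReady(job(k)), machFree(machine(k)));  % earliest feasible start
    f = s + dur(k);                                   % finish time
    jobReady(job(k))     = f;
    machFree(machine(k)) = f;
    fprintf('%s on M%d: start %2d, finish %2d\n', ops{k}, machine(k), s, f);
end
fprintf('Makespan = %d hours\n', max(jobReady));      % prints 34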

3 Flexible job shop scheduling
An example of flexible job shop scheduling (refer to Gao Liang's paper
"An improved genetic algorithm for the flexible job shop scheduling problem", Journal of Mechanical Engineering):
(Figure: data table of the flexible job shop example — operations, candidate machines, and processing times)
Compared with traditional job shop scheduling, flexible job shop scheduling relaxes the constraint on processing machines and is closer to actual production: each operation has several candidate machines and can be processed by any one of them. For example, in the table above, operation O12 of J1 can choose either M2 or M4, with processing times of 8 hours and 4 hours respectively. However, choosing M4 does not necessarily make the final total completion time shorter, so a scheduling algorithm is needed to solve this optimization problem.

Compared with the traditional job shop, the scheduling task of flexible job shop scheduling must determine not only the processing order of the operations but also the machine assigned to each operation. For example, after the processing order O21 -> O22 -> O11 -> O23 -> O12 -> O13 is determined, the processing machine of each operation is still unknown, so a machine combination must also be chosen from the candidate sets [M1, M3, M5] -> [M1, M2, M3] -> [M1, M2, M3, M4, M5] -> [M2, M3, M4, M5] -> [M2, M4] -> [M1, M3, M4, M5]. The objective of scheduling is to minimize the total completion time (other objectives are also possible, such as minimizing the maximum machine load or the total machine load). A possible encoding of such a solution is sketched below.
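As a rough illustration, a candidate solution of the flexible problem can be encoded as two vectors, one operation sequence and one machine assignment (here picking one machine from each candidate set listed above); this is essentially how the source code below builds each particle via chrom = [X_process, X_machine].

X_process = [2 2 1 2 1 1];          % job index of each scheduled operation, in processing order
X_machine = [1 1 3 4 2 5];          % machine chosen for each of those operations
chrom     = [X_process, X_machine]; % one particle, i.e. one candidate schedule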

II. Introduction to particle swarm optimization

1 Introduction

The group behavior of birds and fish in nature has long been a research interest of scientists. In 1987, the biologist Craig Reynolds proposed a very influential flocking model. In his simulation, every individual follows three rules: avoid colliding with neighboring individuals; match the velocity of neighboring individuals; fly toward the center of the flock, while the whole flock flies toward the target. Using only these three simple rules, the simulation reproduces the flight of a bird flock very closely. In 1990, the biologist Frank Heppner also proposed a bird model; the difference is that the birds are attracted to fly toward a roost. In his simulation, each bird initially has no specific flight target and only uses simple rules to determine its direction and speed; once one bird flies to the roost, the birds around it follow, and eventually the whole flock lands at the roost.
In 1995, the American social psychologist James Kennedy and the electrical engineer Russell Eberhart proposed the particle swarm optimization algorithm (Particle Swarm Optimization, PSO), inspired by the research on modeling and simulating bird flock behavior. Their model and simulation algorithm mainly modified Frank Heppner's model, so that the particles fly into the solution space and land on the optimal solution. Once proposed, particle swarm optimization immediately attracted wide attention from scholars in the field of evolutionary computation because the algorithm is simple and easy to implement, and it became a research hotspot. The book Swarm Intelligence, co-authored by J. Kennedy and R. Eberhart and published in 2001, further expanded the influence of swarm intelligence; a large number of research reports and results on particle swarm optimization subsequently emerged, setting off a research upsurge at home and abroad [2-7].
Particle swarm optimization originates from the regularity of bird flock activities, from which a simplified model is built using swarm intelligence. It simulates the foraging behavior of birds: the search space of the problem is compared to the flight space of the birds, and each bird is abstracted as a particle without mass or volume that represents a candidate solution of the problem; the process of finding the optimal solution is regarded as the birds' search for food, and complex optimization problems can thus be solved. Like other evolutionary algorithms, particle swarm optimization is based on the concepts of "population" and "evolution" and searches for the optimal solution in a complex space through cooperation and competition among individuals. At the same time, unlike other evolutionary algorithms, it does not apply evolutionary operators such as crossover, mutation, and selection to individuals; instead, it regards the individuals of the population as particles without mass or volume in an l-dimensional search space. Each particle moves in the solution space at a certain velocity and is drawn toward its own historical best position p_best and the historical best position of its neighborhood g_best, thereby realizing the evolution of candidate solutions. Particle swarm optimization has a sound biological and social background, is easy to understand, has few parameters and is easy to implement, and has strong global search ability for nonlinear, multimodal problems, so it has received wide attention in scientific research and engineering practice. At present, the algorithm has been widely used in function optimization, neural network training, pattern classification, fuzzy control, and other fields.

2 Particle swarm optimization theory
2.1 Particle swarm optimization algorithm description
During predation, members of a bird flock can obtain the discoveries and flight experience of other members through information exchange and sharing among individuals. When food sources are scattered and unpredictable, the advantage of this cooperation mechanism is decisive, far outweighing the disadvantage caused by competition for food. Inspired by and imitating the predation behavior of birds, particle swarm optimization compares the search space of the optimization problem to the flight space of the birds and abstracts each bird as a particle without mass or volume that represents a feasible solution of the problem; the optimal solution of the optimization problem corresponds to the food source the birds are looking for. Particle swarm optimization formulates, for each particle, simple behavior rules similar to bird motion, so that the motion of the whole swarm shows characteristics similar to bird predation and complex optimization problems can thereby be solved.
The information sharing mechanism of particle swarm optimization can be interpreted as a symbiotic, cooperative behavior: every particle keeps searching, and its search behavior is affected to varying degrees by the other individuals in the swarm [8]; at the same time, the particles can remember the best positions they have experienced, so their search behavior is influenced both by other individuals and by their own experience. Based on this unique search mechanism, particle swarm optimization first generates an initial population, i.e., randomly initializes the velocities and positions of the particles in the feasible solution space and the velocity space, with the position of a particle representing a feasible solution of the problem, and then solves the optimization problem through cooperation and competition among the particles.
2.2 Particle swarm optimization modeling
Particle swarm optimization comes from the study of bird predation behavior: a flock of birds searches an area randomly for food, and every bird only knows how far it is from the food, so the simplest and most effective strategy is to search the area around the bird nearest to the food. Particle swarm optimization takes its inspiration from this model and applies it to solving optimization problems. In particle swarm optimization, each potential solution of the optimization problem is a bird in the search space, called a particle. All particles have a fitness value determined by the function being optimized, and each particle also has a velocity that determines the direction and distance it flies. The particles then follow the current best particle and search in the solution space [9].

Particle swarm optimization first initializes the particle swarm randomly in the given solution space, whose dimension is determined by the number of variables of the problem to be optimized. Each particle has an initial position and an initial velocity, and the swarm is then improved iteratively. In each iteration, every particle tracks two "extrema" to update its position and velocity in the solution space: one extremum is the best solution found so far by the particle itself, called the individual extremum; the other is the best solution found so far by all particles of the population, which is the global extremum. The method described above is the global-version particle swarm optimization. If, instead of the whole population, only a subset of particles is used as the neighborhood of a particle, the extremum among the neighbor particles is the local extremum, and the method is called the local-version particle swarm optimization.

2.3 Characteristics of particle swarm optimization
Particle swarm optimization is essentially a random search algorithm and a new intelligent optimization technique. The algorithm can converge to the global optimal solution with high probability. Practice has shown that it is suitable for optimization in dynamic and multi-objective environments; compared with traditional optimization algorithms, it has a faster computation speed and better global search ability.
(1) Particle swarm optimization is an optimization algorithm based on swarm intelligence theory: the swarm intelligence produced by cooperation and competition among the particles guides the search for the optimum. Compared with other algorithms, particle swarm optimization is an efficient parallel search algorithm.
(2) Both particle swarm optimization and the genetic algorithm randomly initialize the population, use a fitness value to evaluate individuals, and perform a certain amount of random search. However, particle swarm optimization determines its search by the particles' own velocities and has no crossover or mutation as in the genetic algorithm. Compared with evolutionary algorithms, particle swarm optimization retains the population-based global search strategy, but its velocity-displacement model is simple to operate and avoids complex genetic operations.
(3) Because each particle still keeps its individual extremum at the end of the algorithm, particle swarm optimization can find not only the optimal solution of the problem but also some good suboptimal solutions; therefore, applying it to scheduling and decision-making problems can provide a variety of meaningful alternatives.
(4) The unique memory of particle swarm optimization enables it to dynamically track the current search situation and adjust its search strategy. In addition, particle swarm optimization is not sensitive to the population size; even when the population shrinks, the performance does not degrade much.

3 Particle swarm optimization algorithm
3.1 Basic particle swarm optimization
(Figure: velocity and position update formulas of the basic particle swarm optimization algorithm)
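The missing figure presumably shows the velocity and position updates of the basic algorithm; in their commonly cited form (reconstructed here, not copied from the figure) they are:

$$ v_{ij}^{k+1} = v_{ij}^{k} + c_1 r_1 \left(p_{ij} - x_{ij}^{k}\right) + c_2 r_2 \left(p_{gj} - x_{ij}^{k}\right), \qquad x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1} $$

where r_1 and r_2 are uniform random numbers in [0, 1], p_{ij} is the best position found so far by particle i (p_best), and p_{gj} is the best position found by the swarm (g_best). The standard algorithm of Section 3.2 simply multiplies the first term by the inertia weight w.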
3.2 Standard particle swarm optimization
Two concepts are often used in the study of particle swarm optimization. One is "exploration", which means that the particle leaves its original search trajectory to some extent and searches in a new direction, embodying the ability to explore unknown regions, similar to global search. The other is "exploitation", which means that the particle continues to search with a finer step along its original trajectory, mainly referring to further searching the regions already found during exploration. Exploration departs from the original optimization trajectory to look for a better solution and represents the global search ability of the algorithm; exploitation makes use of a good solution and continues along the original trajectory to search for a better one, representing the local search ability of the algorithm. How to balance local and global search ability is very important for the solution process. In 1998, Shi Yuhui et al. proposed an improved particle swarm optimization with an inertia weight [10]; because the algorithm guarantees a good convergence effect, it is taken as the standard particle swarm optimization by default. Its evolution process is:
(Figure: update formulas of the standard particle swarm optimization with inertia weight, Eq. (6.7))
In Eq. (6.7), the first part represents the previous velocity of the particle and is used to ensure the global convergence performance of the algorithm; the second and third parts give the algorithm its local convergence ability. It can be seen that the inertia weight w in Eq. (6.7) indicates to what extent the original velocity is retained: the larger w is, the stronger the global convergence ability and the weaker the local convergence ability; the smaller w is, the stronger the local convergence ability and the weaker the global convergence ability.
When w = 1, Eq. (6.7) is exactly the same as Eq. (6.5), which shows that the particle swarm optimization with inertia weight is an extension of the basic particle swarm optimization. Experimental results show that when w is between 0.8 and 1.2, the algorithm converges faster, while when w > 1.2, it easily falls into local extrema.
In addition, w can be adjusted dynamically during the search: at the beginning of the algorithm, w can be given a larger positive value, and as the search proceeds, w can be decreased gradually. This ensures that at the start each particle explores better regions globally with a large velocity step, while later in the search a smaller w lets the particles perform a fine search around the extreme points, so that the algorithm converges to the global optimum with high probability. Adjusting w thus trades off the global and local search abilities. At present, the most widely used dynamic inertia weight is the linearly decreasing weight strategy proposed by Shi, whose expression is as follows:
(Figure: linearly decreasing inertia weight formula)
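In the commonly cited form of Shi's strategy (reconstructed here, since the figure is missing), the inertia weight decreases linearly with the iteration count t:

$$ w(t) = w_{\max} - \frac{\left(w_{\max} - w_{\min}\right)\, t}{T_{\max}} $$

where T_max is the maximum number of iterations; w_max and w_min are often taken as 0.9 and 0.4.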
3.3 Constriction factor particle swarm optimization
Clerc et al. proposed using a constriction factor to control the final convergence of the system behavior [11]. This method can search different regions effectively and obtain high-quality solutions. The velocity update formula of the constriction factor method is:
(Figure: velocity update formula of the constriction factor method)
Experimental results show that, compared with the particle swarm optimization algorithm using an inertia weight, the particle swarm optimization algorithm with the constriction factor converges faster.
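For reference, Clerc's velocity update with the constriction factor, in its commonly cited form (reconstructed here, since the figure is missing), is:

$$ v_{ij}^{k+1} = \chi\left[v_{ij}^{k} + c_1 r_1 \left(p_{ij} - x_{ij}^{k}\right) + c_2 r_2 \left(p_{gj} - x_{ij}^{k}\right)\right], \qquad \chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^{2} - 4\varphi}\right|}, \quad \varphi = c_1 + c_2 > 4 $$

A typical choice is c_1 = c_2 = 2.05, which gives φ = 4.1 and χ ≈ 0.729.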
3.4 Discrete particle swarm optimization
The basic particle swarm optimization algorithm is a powerful tool for searching for function extrema in continuous domains. After the basic algorithm, Kennedy and Eberhart proposed a discrete binary version of particle swarm optimization [12]. In this discrete method, the discrete problem space is mapped onto a continuous particle motion space, the algorithm is modified accordingly, and the velocity-position update rules of the classical particle swarm optimization are retained. The value of a particle in each dimension of the state space is restricted to 0 or 1, and each component v_ij of the velocity represents the probability that the corresponding position component x_ij takes the value 1. Therefore, the velocity update formula of the continuous particle swarm optimization remains unchanged, but p_best and g_best take only the values 0 and 1. The position update equation is expressed as follows:
(Figure: position update equation of the discrete binary particle swarm optimization)
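The position update usually given for the binary version (reconstructed here, since the figure is missing) passes the velocity through a sigmoid:

$$ x_{ij}^{k+1} = \begin{cases} 1, & r < S\!\left(v_{ij}^{k+1}\right) \\ 0, & \text{otherwise} \end{cases} \qquad S(v) = \frac{1}{1 + e^{-v}} $$

where r is a uniform random number in [0, 1].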
4 Particle swarm optimization process
Particle swarm optimization is based on the concepts of "population" and "evolution" and searches for the optimal solution in a complex space through cooperation and competition among individuals [13]. The process is as follows:
(1) Initialize the particle swarm, including the swarm size N and the position x_i and velocity v_i of each particle.
(2) Calculate the fitness value fit[i] of each particle.
(3) For each particle, compare its fitness value fit[i] with its individual extremum p_best(i); if fit[i] < p_best(i), replace p_best(i) with fit[i].
(4) For each particle, compare its fitness value fit[i] with the global extremum g_best; if fit[i] < g_best, replace g_best with fit[i].
(5) Update the velocity v_i and position x_i of each particle.
(6) Apply the boundary condition handling.
(7) Check whether the termination condition of the algorithm is met: if so, end the algorithm and output the optimization result; otherwise return to step (2).
The flow of the particle swarm optimization algorithm is shown in Figure 6.1; a compact MATLAB sketch of the same flow follows the figure.
(Figure 6.1: flow chart of the particle swarm optimization algorithm)
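Below is a compact MATLAB sketch of the global-version flow above, using the sphere function as a stand-in objective. All variable names and values here are illustrative and not taken from the source code further down.

N = 30; D = 5; maxgen = 100;           % swarm size, dimension, iterations
w = 0.72; c1 = 1.5; c2 = 1.5;          % inertia weight and acceleration constants
xmin = -10; xmax = 10; vmax = 0.2*(xmax - xmin);
fobj = @(x) sum(x.^2, 2);              % objective to minimize (sphere function)

x = xmin + rand(N, D)*(xmax - xmin);   % (1) initialize positions and velocities
v = -vmax + 2*vmax*rand(N, D);
fit = fobj(x);                         % (2) evaluate fitness
pbest = x; pbestval = fit;             % individual extrema
[gbestval, idx] = min(fit); gbest = x(idx, :);   % global extremum

for gen = 1:maxgen
    % (5) update velocity and position
    v = w*v + c1*rand(N, D).*(pbest - x) + c2*rand(N, D).*(repmat(gbest, N, 1) - x);
    v = min(max(v, -vmax), vmax);      % (6) boundary handling: clamp velocities
    x = min(max(x + v, xmin), xmax);   %     and absorb positions at the boundary
    fit = fobj(x);                     % (2) re-evaluate fitness
    better = fit < pbestval;           % (3) update individual extrema
    pbest(better, :) = x(better, :); pbestval(better) = fit(better);
    [curbest, idx] = min(pbestval);    % (4) update global extremum
    if curbest < gbestval, gbestval = curbest; gbest = pbest(idx, :); end
end                                    % (7) stop after maxgen iterations
fprintf('Best objective value found: %g\n', gbestval);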
5 Key parameter description
In particle swarm optimization, the choice of control parameters affects the performance and efficiency of the algorithm; how to choose appropriate control parameters to optimize performance is itself a complex optimization problem. In practice, the control parameters are usually chosen according to the user's experience.
The control parameters of particle swarm optimization mainly include the swarm size N, the inertia weight w, the acceleration constants c1 and c2, the maximum velocity v_max, the stopping criterion, the neighborhood structure, and the boundary condition handling strategy [14].
Particle population size N
The choice of the swarm size depends on the specific problem, but the number of particles is generally set to 20-50. For most problems, 10 particles are already enough to achieve good results, but for difficult problems or specific classes of problems the number of particles can be 100 or 200. In addition, the larger the number of particles, the larger the space the algorithm searches and the easier it is to find the global optimum; of course, the algorithm then also runs longer.
Inertia weight w
The inertia weight w is a very important control parameter in the standard particle swarm optimization and can be used to control the exploitation and exploration abilities of the algorithm. Its size indicates how much of the particle's current velocity is inherited. When the inertia weight is large, the global optimization ability is strong and the local optimization ability is weak; when it is small, the global optimization ability is weak and the local optimization ability is strong. The inertia weight can be fixed or time-varying. A fixed weight chooses a constant as the inertia weight, kept unchanged during evolution, generally in [0.8, 1.2]; a time-varying weight sets a variation interval and gradually decreases the inertia weight in some way during evolution, its selection involving the variation range and the decrease rate. A fixed inertia weight gives the particles the same exploration and exploitation abilities throughout, while a time-varying weight gives them different exploration and exploitation abilities at different stages of evolution.
Acceleration constant c1 and c2
The acceleration constants c1 and c2 adjust the maximum step lengths of flight toward p_best and g_best respectively; they determine the influence of the particle's individual experience and the group experience on the particle trajectory and reflect the information exchange within the swarm. If c1 = c2 = 0, a particle flies at its current velocity until it reaches the boundary; it can then only search a limited region, so it is hard to find the optimal solution. If c1 = 0, we get the "social-only" model: particles lack cognitive ability and rely only on group experience; convergence is faster, but the algorithm easily falls into local optima. If c2 = 0, we get the "cognition-only" model: there is no social sharing of information and no interaction between individuals, so a swarm of size N is equivalent to N particles each searching on its own, and the probability of finding the optimal solution is small. Therefore one generally sets c1 = c2, usually c1 = c2 = 1.5; in this way, individual experience and group experience have equally important influence, making the final solution more accurate.
The maximum velocity of a particle vmax
The velocity of a particle has a maximum limit on each dimension of the space, used to clamp the particle velocity and keep it within the range [-v_max, +v_max]; it determines the intensity of the search in the problem space and is generally set by the user. v_max is an important parameter: if it is too large, particles may fly past excellent regions; if it is too small, particles may not be able to explore sufficiently beyond locally optimal regions, may get trapped in local optima, and may be unable to move far enough to jump out of them and reach better positions. Researchers have pointed out that setting v_max and adjusting the inertia weight play equivalent roles, so v_max is generally only used when initializing the population, i.e., it is set to the variation range of each dimensional variable, rather than being carefully selected and tuned.
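A minimal sketch of the clamping step, assuming v is the velocity matrix and vmax the limit:

v = min(max(v, -vmax), vmax);   % clamp every velocity component to [-vmax, +vmax]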
Stop criteria
The maximum number of iterations, a required computational accuracy, or the maximum number of stagnation steps Δt of the best solution (or an acceptable satisfactory solution) is usually taken as the stopping criterion, i.e., the termination condition of the algorithm. Depending on the specific optimization problem, the stopping criterion should balance the solution time, the optimization quality, the search efficiency, and other aspects of performance.
Setting of neighborhood structure
The global version of particle swarm optimization takes the whole population as the neighborhood of a particle; it converges fast but sometimes falls into local optima. The local version takes individuals with nearby positions as the neighborhood of a particle; it converges more slowly but does not easily fall into local optima. In practical applications, the global version can first be used to find the direction of the optimal solution, i.e., an approximate result, and the local version is then used to perform a fine search near the best point.
Boundary condition treatment
When the position or velocity of some dimension exceeds its set limit, a boundary condition handling strategy confines the particle position to the feasible search space. This avoids the expansion and divergence of the population as well as blind large-scale search, and thereby improves the search efficiency.
There are many specific methods, for example setting a maximum position limit x_max and a maximum velocity limit v_max; when a value exceeds the limit, either a random value within the range is generated to replace it, or it is set to the boundary value, i.e., boundary absorption.
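A minimal sketch of these two options for a position matrix x, assuming scalar bounds xmin and xmax:

out = x > xmax | x < xmin;                        % components outside the feasible range
x(out) = xmin + (xmax - xmin)*rand(nnz(out), 1);  % option 1: replace with random values in range
% x = min(max(x, xmin), xmax);                    % option 2: boundary absorption (clamp)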

III. Partial source code

function []=FJSP_PSO()
% Solve the flexible job shop scheduling problem (FJSP) with PSO.
[~, ~, alldata] = xlsread('data2.xlsx');       % read the scheduling instance from Excel
[row, columns]=size(alldata);
for i=2:row
    for j=1:columns
        if ischar(cell2mat(alldata(i,j)))      % convert cells stored as text into numeric vectors
            alldata(i,j)={str2num(cell2mat(alldata(i,j)))};
        end
    end
end
 
TaskAmo=alldata{row,1};                        % number of jobs
SubtaskAmo=row-1;                              % total number of operations
SubtaskNum=zeros(1,TaskAmo);                   % number of operations of each job
MachineAmo=0;                                  % number of machines
 
for i=1:SubtaskAmo
    MachineAmo=max(MachineAmo,max(alldata{i+1,5}));   % largest machine index used by any operation
end
 
count1=0;
for i=2:row
    if ~isnan(alldata{i,2})
        count1=count1+1;
        SubtaskNum(count1)=alldata{i,2};              % operation count of each job
    end
end
 
tic
nind=50;        % swarm size
maxgen=20;      % maximum number of iterations
w=0.72;         % inertia weight
c1=1.79;        % cognitive acceleration constant
c2=1.79;        % social acceleration constant
gen=0;
 
trace=zeros(2,maxgen);                     % best and mean objective per generation
chrom=zeros(nind,2*SubtaskAmo);            % particles: [operation sequence, machine assignment]
X_process=zeros(1,SubtaskAmo);
chrom_operating=zeros(nind,2*SubtaskAmo);
chrom_temp_P=zeros(nind,SubtaskAmo);
chrom_temp_M=zeros(nind,SubtaskAmo);
 
% individual_best=zeros(nind,3);
% population_best=zeros(1,3);
 
 
for r=1:nind
    for i=1:SubtaskAmo
        X_process(i)=alldata{i+1,1};       % job index of each operation
    end
    
    Queue=randperm(numel(X_process));      % random operation sequence
    X_process=X_process(Queue);
    
    X_machine=M_generate3(alldata,X_process,TaskAmo,SubtaskAmo,SubtaskNum);   % random feasible machine assignment
    chrom(r,:)=[X_process,X_machine];
end
 
[objective,chrom_X_P,chrom_X_M,minP,minM,minfit]=FJSP_objv3(alldata,TaskAmo,SubtaskAmo,SubtaskNum,nind,chrom);   % evaluate the initial swarm
 
individual_best_object=objective;          % individual extrema (p_best)
individual_best_P=chrom_X_P;
individual_best_M=chrom_X_M;
 
population_best_objective=minfit;          % global extremum (g_best)
population_best_P=minP;
population_best_M=minM;
 
 
while gen<maxgen
    for r=1:nind
        for i=1:SubtaskAmo
            % PSO velocity/position update applied to a priority key for each operation
            chrom_temp_P(r,i)=w*chrom(r,i)+c1*rand*(individual_best_P(r,i)-chrom(r,i))+c2*rand*(population_best_P(i)-chrom(r,i));
        end
        [~,temp_node]=sort(chrom_temp_P(r,:));   % ranking the keys gives the new operation sequence
        temp=chrom(r,1:SubtaskAmo);
        chrom(r,1:SubtaskAmo)=temp(temp_node);
        chrom(r,SubtaskAmo+1:2*SubtaskAmo)=M_generate3(alldata,chrom(r,1:SubtaskAmo),TaskAmo,SubtaskAmo,SubtaskNum);   % regenerate a feasible machine assignment
    end
    
    [objective,chrom_X_P,chrom_X_M,minP,minM,minfit]=FJSP_objv3(alldata,TaskAmo,SubtaskAmo,SubtaskNum,nind,chrom);   % evaluate the swarm
    
    for r=1:nind
        if objective(r)<individual_best_object(r)          % update individual extrema
            individual_best_object(r)=objective(r);
            individual_best_P(r,:)=chrom(r,1:SubtaskAmo);
            individual_best_M(r,:)=chrom(r,SubtaskAmo+1:2*SubtaskAmo);
        end
    end
    
    if minfit<population_best_objective                    % update global extremum
        population_best_objective=minfit;
        population_best_P=minP;
        population_best_M=minM;
    end
    
    gen=gen+1;
    
    trace(1,gen)=min(objective);                           % best objective of this generation
    trace(2,gen)=sum(objective)/length(objective);         % mean objective of this generation
end
toc
 
population_best_W=W_generate2(alldata,SubtaskAmo,TaskAmo,SubtaskNum,population_best_P,population_best_M);   % auxiliary assignment vector W for the best solution
[T,~,trace1,trace2,trace3,~]=FJSP_ADV_fitness4(population_best_P,population_best_M,population_best_W);      % start times and durations for the Gantt chart
 
minXulie=[population_best_P,population_best_M,population_best_W]   % best schedule found
minFitness=population_best_objective                               % its objective value
 
figure(1);                           % convergence curves
plot(trace(1,1:gen));                % best objective per generation
hold on;
plot(trace(2,1:gen),'-.');grid;      % mean objective per generation
 
 
figure(2);                           % Gantt chart of the best schedule
colorset1=rand(1,TaskAmo);
colorset2=rand(1,TaskAmo);
[Trow,Tcol]=size(T);
for i=1:Trow
    for j=1:Tcol
        if trace2(i,j)~=0
            fill([T(i,j),T(i,j)+trace1(i,j),T(i,j)+trace1(i,j),T(i,j)],[1*i-0.3,1*i-0.3,1*i+0.3,1*i+0.3],[trace2(i,j)/TaskAmo,0.9*colorset1(trace2(i,j)),0.8*colorset2(trace2(i,j))]);hold on   % one bar per operation on machine i
            text(T(i,j),i,strcat(num2str(trace2(i,j)),',',num2str(trace3(i,j))));   % label: job index, operation index
        end
    end
end
end
end
        
        
 

      

IV. Running results

(Figure: convergence curves and Gantt chart of the best schedule found)

V. MATLAB version and references

1 MATLAB version
2014a

2 References
[1] Bao Ziyang, Yu Jizhou, et al. Intelligent Optimization Algorithms and Their MATLAB Examples (2nd edition) [M]. Electronic Industry Press, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.

Copyright notice
Author: Buddha anger lotus. Please include the original link when reprinting, thank you.
https://caren.inotgo.com/2021/08/20210825074252941x.html
