$x_i^{L,(0)},\ x_i^{G,(0)} \sim U(x_{\min},\, x_{\max})$ (IX)

For individuals in the local subpopulation, it was also necessary to initialize the first- and second-order moment estimates of their gradients:

$m_0^{(i)} = 0,\quad v_0^{(i)} = 0$ (X)

This setting provides the initial conditions for the subsequent parameter updates of the Nadam optimizer.
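For concreteness, the following is a minimal NumPy sketch of this initialization step, not the authors' code; the search bounds, the dimension, and the subpopulation sizes are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: search bounds, dimension, and subpopulation sizes
x_min, x_max, dim = -5.0, 5.0, 10
n_L, n_G = 20, 20

# Eq. (IX): both subpopulations are sampled uniformly within the search bounds
P_L = rng.uniform(x_min, x_max, size=(n_L, dim))  # local (Nadam) subpopulation
P_G = rng.uniform(x_min, x_max, size=(n_G, dim))  # global (DE) subpopulation

# Eq. (X): zero first- and second-order moment estimates for each local individual
m0 = np.zeros((n_L, dim))  # first moment (momentum)
v0 = np.zeros((n_L, dim))  # second moment (variance)
```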
3.3.2. Local optimization stage
The evolution of the local subpopulation $P_L$ relies mainly on the gradient-driven mechanism of the Nadam optimizer. For each generation $t$, the current gradient was calculated as $g_t^{(i)} = \nabla f(x_t^{(i)})$, followed by multi-order moment estimation and parameter updates based on this gradient:

(a) First-order moment estimation (momentum):

$m_t^{(i)} = \beta_1 m_{t-1}^{(i)} + (1 - \beta_1)\, g_t^{(i)}$ (XI)

(b) Second-order moment estimation (variance):

$v_t^{(i)} = \beta_2 v_{t-1}^{(i)} + (1 - \beta_2)\, \big(g_t^{(i)}\big)^2$ (XII)

(c) First-order moment Nesterov correction:

$\hat{m}_t^{(i)} = \beta_1 m_t^{(i)} + (1 - \beta_1)\, g_t^{(i)}$ (XIII)

(d) Second-order moment bias correction:

$\hat{v}_t^{(i)} = \dfrac{v_t^{(i)}}{1 - \beta_2^{\,t}}$ (XIV)

(e) Parameter update formula:

$x_{t+1}^{(i)} = x_t^{(i)} - \eta \cdot \dfrac{\hat{m}_t^{(i)}}{\sqrt{\hat{v}_t^{(i)}} + \epsilon}$ (XV)

Here, $\beta_1$ and $\beta_2$ control the decay rates of the momentum and variance estimates, $\eta$ is the learning rate, and $\epsilon$ is a small constant added to avoid division by zero.

Through this gradient-dominated mechanism, the local subpopulation can converge rapidly toward the target region of the search space. The approach is especially suitable for problems where the shape of the objective function is known or where gradient information is available.
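The five steps (XI)-(XV) can be collected into a single per-individual routine. The sketch below is an illustration under assumptions, not the authors' implementation: the hyperparameter defaults ($\eta = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$), the callable name `grad_f`, and the sphere test function are all assumed for demonstration.

```python
import numpy as np

def nadam_step(x, m, v, grad_f, t, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Nadam generation for a single individual, following Eqs. (XI)-(XV).

    x, m, v : position and moment estimates of individual i (shape (dim,))
    grad_f  : callable returning the gradient of the fitness f at x
    t       : generation counter starting at 1 (used in the bias correction)
    """
    g = grad_f(x)                                  # g_t = grad f(x_t)
    m = beta1 * m + (1.0 - beta1) * g              # Eq. (XI): first moment
    v = beta2 * v + (1.0 - beta2) * g**2           # Eq. (XII): second moment
    m_hat = beta1 * m + (1.0 - beta1) * g          # Eq. (XIII): Nesterov correction
    v_hat = v / (1.0 - beta2**t)                   # Eq. (XIV): bias correction
    x = x - eta * m_hat / (np.sqrt(v_hat) + eps)   # Eq. (XV): parameter update
    return x, m, v

# Illustrative usage on the sphere function f(x) = ||x||^2 (an assumed test case)
x, m, v = np.ones(5), np.zeros(5), np.zeros(5)
for t in range(1, 201):
    x, m, v = nadam_step(x, m, v, lambda x: 2 * x, t, eta=0.05)
```

Note that Eq. (XIII) applies the Nesterov look-ahead by re-mixing the fresh gradient into the already-updated first moment, which is what distinguishes this update from plain Adam.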
3.3.3. Global search stage
The global subpopulation $P_G$ is iterated with the DE algorithm to ensure wide coverage of the search space. For each individual $x_j^G$, its evolutionary process includes the following three steps:

(a) Differential mutation
Randomly select three mutually distinct individuals $x_{r_1}^G, x_{r_2}^G, x_{r_3}^G \in P_G$ and construct the mutation vector:

$v_j = x_{r_1}^G + F \cdot (x_{r_2}^G - x_{r_3}^G)$ (XVI)

where $F \in [0, 2]$ is the scaling factor.

(b) Crossover operation
Construct the trial vector using the crossover probability $CR \in [0, 1]$:

$u_{j,k} = \begin{cases} v_{j,k}, & \text{if } \mathrm{rand}(0,1) \le CR \ \text{or}\ k = k_{\mathrm{rand}} \\ x_{j,k}^G, & \text{otherwise} \end{cases}$ (XVII)

Here, $k_{\mathrm{rand}}$ ensures that at least one dimension is taken from the mutation vector.

(c) Selection operation
Compare $u_j$ and $x_j^G$ based on the fitness function $f(\cdot)$:

$x_j^G(t+1) = \begin{cases} u_j, & \text{if } f(u_j) \le f(x_j^G) \\ x_j^G, & \text{otherwise} \end{cases}$ (XVIII)

The evolutionary operations of DE do not rely on gradients and are therefore suitable for optimization problems with complex search spaces and non-differentiable or discontinuous objective functions.
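Steps (a)-(c) correspond to the classic DE/rand/1/bin scheme. The following is a minimal sketch of one DE generation over $P_G$ under assumptions: the fitness function `f` and the parameter values $F = 0.5$ and $CR = 0.9$ are illustrative choices, not values reported in the paper.

```python
import numpy as np

def de_generation(P, f, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation over a subpopulation, Eqs. (XVI)-(XVIII).

    P : population array of shape (n, dim) with n >= 4; f : fitness to minimize.
    F in [0, 2] is the scaling factor, CR in [0, 1] the crossover probability.
    """
    if rng is None:
        rng = np.random.default_rng()
    n, dim = P.shape
    P_next = P.copy()
    for j in range(n):
        # Eq. (XVI): mutation from three mutually distinct individuals (all != j)
        r1, r2, r3 = rng.choice([i for i in range(n) if i != j], size=3, replace=False)
        v = P[r1] + F * (P[r2] - P[r3])
        # Eq. (XVII): binomial crossover; index k_rand guarantees that at least
        # one dimension is inherited from the mutation vector
        mask = rng.random(dim) <= CR
        mask[rng.integers(dim)] = True
        u = np.where(mask, v, P[j])
        # Eq. (XVIII): greedy selection on fitness
        if f(u) <= f(P[j]):
            P_next[j] = u
    return P_next
```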
the shape of the function is known or where gradient Replace x worst with x to make the DE subpopulation
*
G
L
information is available. inherit the local optimal information.
Similarly, extract the current global optimal solution
3.3.3. Global search stage from P :
The global subpopulation P adopts the DE algorithm * G (XXI)
G
for iteration to ensure the wide coverage of the search x argmin f x()
G
xP G
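A minimal sketch of one IEM exchange is given below, assuming the same NumPy population representation as in the earlier sketches; the helper name `iem_exchange` is hypothetical. In the main loop it would be invoked once every $T_{ex}$ generations, between the Nadam and DE updates.

```python
import numpy as np

def iem_exchange(P_L, P_G, f):
    """One IEM exchange following Eqs. (XIX)-(XXI): the best Nadam individual
    replaces the worst DE individual ("the superior replaces the inferior"),
    then the current global best of P_G is extracted."""
    f_L = np.array([f(x) for x in P_L])
    f_G = np.array([f(x) for x in P_G])
    x_L_star = P_L[np.argmin(f_L)].copy()   # Eq. (XIX): best individual of P_L
    j_worst = np.argmax(f_G)                # Eq. (XX): worst individual of P_G
    P_G[j_worst] = x_L_star                 # P_G inherits the local optimum
    f_G[j_worst] = f_L.min()
    x_G_star = P_G[np.argmin(f_G)].copy()   # Eq. (XXI): global best of P_G
    return P_G, x_G_star
```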