Page 113 - IJOCTA-15-4
Rolling bearing fault diagnosis method based on GJO–VMD, multiscale fuzzy entropy, and GSABO–BP...
formula utilized by the chosen golden sine operator is delineated as follows:

X_i(t + 1) = X_i(t)·|sin(R_1)| + R_2·sin(R_1)·|a·X_best − b·X_i(t)|        (10)

In this context, R_1 is a random number ranging from 0 to 2π, which determines the step size for updating the position of individual i in the next generation; R_2 is a random number ranging from 0 to π, which dictates the direction of the next update for individual i; and the parameters a and b are the golden section coefficients that influence the search space of the particles, facilitating a more optimal selection that guides the particles toward superior values. They are calculated as follows:

a = −π·(1 − τ) + π·τ
b = −π·τ + π·(1 − τ)        (11)

In this equation, τ represents the golden ratio, with a value of (√5 − 1)/2.

The GSABO algorithm is the result of combining the golden sine operator with SABO. At each iteration, the positions and velocities of all particles are updated, and the objective function values at their new locations are computed. Each particle's individual best position is updated by comparing these new values with the previously stored individual best values. Finally, the algorithm determines which particle occupies the globally optimal position.

2.5.3. Back-propagation neural network

The defining attributes of the BP neural network are the forward propagation of signals and the subsequent backward propagation of errors, which characterize it as a multilayer feed-forward neural network. The network generates an output signal through a weighted summation of the input data followed by a nonlinear activation function. When the actual output deviates from the expected output, the error back-propagation process is triggered: the algorithm adjusts the weights and thresholds to minimize the error, thereby improving the model's accuracy [23].

2.5.4. Golden sine subtraction-average-based optimizer–back-propagation neural network

The selection of an appropriate fitness function is crucial for optimizing the parameters of the BP neural network with artificial intelligence search methods. The fitness function guides parameter adjustments toward better overall performance by quantifying the BP neural network's performance under the current parameter configuration. To balance training and generalization and thereby improve model resilience, the fitness function in this study was defined as the sum of the recognition error rates on the training set and the test set:

F = TrE + TeE        (12)

In the equation, TrE and TeE represent the recognition error rates of the training and testing sets, respectively.

The steps for optimizing the BP neural network with the GSABO algorithm are as follows (Figure 2):

• Step 1: Initialize the weights and thresholds of the BP neural network and the parameters of the GSABO algorithm.
• Step 2: Calculate the fitness value of each particle.
• Step 3: Check whether the fitness has changed. If it has, update the particle's position immediately; if not, apply the golden sine operator.
• Step 4: Update the BP neural network's parameters.
• Step 5: Check whether the maximum-iteration stopping condition is met. If so, output the optimal parameters; otherwise, return to Step 2 and continue iterating until the termination criteria are satisfied.

3. Fault diagnosis method based on GJO–VMD, MFE, and GSABO–BP neural network

The steps of the proposed fault diagnosis method are as follows (Figure 3):

• Step 1: Use a rolling bearing fault simulation test bench to acquire vibration signals from rolling bearings under varying operating conditions.
• Step 2: Use envelope entropy as the fitness criterion to optimize the VMD parameters with the GJO algorithm, then decompose the fault signal into a series of IMFs by applying VMD with the optimized parameters.
• Step 3: Apply a fine-grained IMF separation algorithm built on a comprehensive judgment index in the time–frequency domain. This step identifies the IMF components that are sensitive to the signal's characteristic information, and these components are then used to reconstruct the signal.
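As an illustration, the golden sine position update of Eqs. (10)–(11) can be sketched in a few lines of Python. This is a minimal sketch; the function and variable names are ours, not from the paper.

```python
import math
import random

# Golden ratio tau = (sqrt(5) - 1) / 2, as used in Eq. (11).
TAU = (math.sqrt(5) - 1) / 2

# Golden section coefficients a and b from Eq. (11), obtained by
# sectioning the initial search interval [-pi, pi].
A = -math.pi * (1 - TAU) + math.pi * TAU
B = -math.pi * TAU + math.pi * (1 - TAU)

def golden_sine_update(x_i, x_best):
    """Update one particle's position with the golden sine operator, Eq. (10)."""
    r1 = random.uniform(0, 2 * math.pi)  # R1: controls the step size
    r2 = random.uniform(0, math.pi)      # R2: controls the update direction
    return [
        xi * abs(math.sin(r1)) + r2 * math.sin(r1) * abs(A * xb - B * xi)
        for xi, xb in zip(x_i, x_best)
    ]
```

Note that a and b are symmetric about zero (b = −a), so the operator contracts the search toward the current best solution X_best.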
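The forward-propagation/error-back-propagation mechanism of Section 2.5.3 can be illustrated with a minimal one-hidden-layer network in pure Python. This is only a sketch under simplifying assumptions (single sigmoid output, squared-error gradient); the paper's diagnosis network uses multi-class outputs over MFE features.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_bp(samples, n_hidden=4, lr=0.5, epochs=2000):
    """Minimal BP sketch: forward pass through one hidden layer, then
    back-propagation of the output error to adjust weights and thresholds."""
    n_in = len(samples[0][0])
    random.seed(42)  # fixed seed so the sketch is reproducible
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # forward propagation: weighted sums + nonlinear activation
            h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
                 for j in range(n_hidden)]
            out = sigmoid(sum(w * hj for w, hj in zip(w2, h)) + b2)
            # back-propagation: gradient of squared error through the sigmoid
            d_out = (out - y) * out * (1 - out)
            for j in range(n_hidden):
                d_h = d_out * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * d_out * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * d_h * x[i]
                b1[j] -= lr * d_h
            b2 -= lr * d_out

    def predict(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(n_hidden)]
        return sigmoid(sum(w * hj for w, hj in zip(w2, h)) + b2)
    return predict
```

The key point is that the same weights used in the forward pass determine how the output error is distributed backward over the hidden layer.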
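The fitness function of Eq. (12) amounts to summing two misclassification rates. A sketch, counting errors per sample:

```python
def fitness(train_true, train_pred, test_true, test_pred):
    """F = TrE + TeE, Eq. (12): sum of the recognition error rates
    on the training set and the test set."""
    tr_e = sum(t != p for t, p in zip(train_true, train_pred)) / len(train_true)
    te_e = sum(t != p for t, p in zip(test_true, test_pred)) / len(test_true)
    return tr_e + te_e
```

Including the training error alongside the test error penalizes parameter settings that overfit either set, which is the balancing role described above.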
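Steps 1–5 of the GSABO optimization can be sketched as a generic minimizer. This is a deliberately simplified illustration: SABO's subtraction-average move is reduced to a mean-attraction step, velocity bookkeeping is omitted, and the encoding of BP weights/thresholds into the particle vector is left out.

```python
import math
import random

def gsabo_minimize(f, dim, n_particles=20, iters=50, lb=-1.0, ub=1.0):
    """Hedged GSABO sketch (Steps 1-5): a mean-attraction stand-in for SABO's
    subtraction-average move, with the golden sine operator of Eq. (10)
    applied whenever a particle's fitness fails to improve (Step 3)."""
    tau = (math.sqrt(5) - 1) / 2                  # golden ratio, Eq. (11)
    a = -math.pi * (1 - tau) + math.pi * tau      # golden section coefficients
    b = -math.pi * tau + math.pi * (1 - tau)
    # Step 1: initialize the population and evaluate it (Step 2)
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    fit = [f(x) for x in pop]
    i_best = min(range(n_particles), key=fit.__getitem__)
    x_best, f_best = pop[i_best][:], fit[i_best]
    for _ in range(iters):
        # population mean at the start of the iteration (SABO-style reference)
        mean = [sum(p[d] for p in pop) / n_particles for d in range(dim)]
        for i, x in enumerate(pop):
            cand = [x[d] + random.random() * (mean[d] - x[d]) for d in range(dim)]
            fc = f(cand)
            if fc < fit[i]:                       # Step 3: improved -> accept move
                pop[i], fit[i] = cand, fc
            else:                                 # stalled -> golden sine, Eq. (10)
                r1 = random.uniform(0, 2 * math.pi)
                r2 = random.uniform(0, math.pi)
                gs = [x[d] * abs(math.sin(r1))
                      + r2 * math.sin(r1) * abs(a * x_best[d] - b * x[d])
                      for d in range(dim)]
                fg = f(gs)
                if fg < fit[i]:
                    pop[i], fit[i] = gs, fg
            if fit[i] < f_best:                   # Step 5: track the global best
                x_best, f_best = pop[i][:], fit[i]
    return x_best, f_best
```

In the GSABO–BP setting, f would be the fitness of Eq. (12) evaluated after decoding the particle into BP weights and thresholds (Step 4).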
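Step 2's envelope-entropy fitness can be sketched as follows. To keep the sketch dependency-free it uses a naive O(n²) DFT-based Hilbert transform; a real implementation would use an FFT-based analytic signal (e.g. scipy.signal.hilbert).

```python
import cmath
import math

def envelope_entropy(signal):
    """Envelope entropy sketch: Hilbert envelope via a naive DFT, then
    Shannon entropy of the normalized envelope amplitudes. Lower values
    indicate a more impulsive (fault-rich) component."""
    n = len(signal)
    # forward DFT (O(n^2); fine for a short illustrative signal)
    X = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    # analytic signal: double positive frequencies, zero negative ones
    for k in range(1, n):
        if k < (n + 1) // 2:
            X[k] *= 2
        elif k > n // 2:
            X[k] = 0
    analytic = [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]
    env = [abs(v) for v in analytic]          # Hilbert envelope
    total = sum(env)
    p = [e / total for e in env]              # normalize to a distribution
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

A pure tone has a flat envelope and hence maximal entropy log(n), while an impulsive signal concentrates its envelope and scores lower, which is why minimizing envelope entropy steers the GJO search toward VMD parameters that isolate fault impulses.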

