Page 201 - IJOCTA-15-4
FastLoader: Leveraging large language models to accelerate cargo loading optimization with numerous
include “Dry-ice must not share a sealed compartment with live animals” or “Lithium batteries may only occupy ULD positions equipped with fire-suppression liners”. Unlike the numeric constants, these clauses are deliberately retained in natural language so that an LLM can perform fast, semantics-level validation before a candidate plan is ever scored numerically. (3) The LLM engine binds the cargo dictionaries to the aircraft map, constructing an output that is processed by the cargo loading solver. The same output is serialized into a concise prompt for the LLM so that complex constraints are processed effectively.

3.3. Cargo loading solver

The cargo loading solver performs only the iterative searching of a classic heuristic search to solve the air cargo loading problem. Note that each (k-th) iteration has its own search space, which is defined as a matrix of solutions S^{(k)}:

\[
S^{(k)} =
\begin{bmatrix}
S^{(k)}_{1,1} & S^{(k)}_{1,2} & S^{(k)}_{1,3} & \cdots \\
S^{(k)}_{2,1} & S^{(k)}_{2,2} & S^{(k)}_{2,3} & \cdots \\
\vdots & \vdots & \vdots & \ddots \\
S^{(k)}_{N,1} & S^{(k)}_{N,2} & S^{(k)}_{N,3} & \cdots
\end{bmatrix}
\tag{1}
\]

where S^{(k)}_{i,j} represents one of the k-th iteration's solutions and N represents the length of the search space. Here, i and j identify the position of a candidate solution in the two-dimensional search space grid. In each iteration, the cargo loading solver updates the solutions in the search space to facilitate the search for the optimal solution.

In the search space, not all solutions satisfy all the complex constraints of the scenario. We define a solution that satisfies all constraints as a feasible solution and assign it a value of 1; similarly, an infeasible solution is assigned a value of 0:

\[
S^{(k)}_{i,j} \leftarrow
\begin{cases}
1 & \text{feasible solution} \\
0 & \text{infeasible solution.}
\end{cases}
\tag{2}
\]

A single-point mutation operator is applied after crossover, randomly flipping selected genes in the binary loading matrix to inject new cargo hold positions. Roulette wheel selection then preserves fitter individuals by drawing each candidate with probability p_i = cg_i / \sum_j cg_j, where cg_i represents its center-of-gravity fitness. Even after this evolutionary update, the expanded search space still contains a high density of infeasible solutions, so convergence typically demands thousands of iterations and incurs substantial computational time.

3.4. Search space reducer

We design the search space reducer to cut the substantial runtime incurred by the cargo loading solver. In each search iteration, the search space reducer accepts the search space used for crossover and mutation, together with the corresponding scoring matrix:

\[
S_{\mathrm{score}} =
\begin{bmatrix}
S^{(k)}_{1,1} & cg_{1,1} \\
S^{(k)}_{1,2} & cg_{1,2} \\
\vdots & \vdots \\
S^{(k)}_{N,1} & cg_{N,1} \\
\vdots & \vdots \\
S^{(k)}_{N,N} & cg_{N,N}
\end{bmatrix}
\tag{3}
\]

where cg_{i,j} represents the center-of-gravity fitness of solution S^{(k)}_{i,j}. Driven by the LLM, the search space reducer quickly classifies the current infeasible solutions and significantly simplifies the search space:

\[
S^{(k)}_{\mathrm{update}} =
\begin{bmatrix}
S^{(k)}_{1,1} & S^{(k)}_{1,2} & \cdots \\
\vdots & \vdots & \ddots \\
S^{(k)}_{n,1} & S^{(k)}_{n,2} & \cdots
\end{bmatrix}
\tag{4}
\]

where n, the length of the search space after simplification, is much smaller than N before simplification. The simplified search space S^{(k)}_{\mathrm{update}} is fed back into the cargo loading solver to participate in the next round of heuristic search iteration.

4. Evaluation

In this section, we evaluate the performance of FastLoader against state-of-the-art baselines. We conduct experiments from three perspectives: performance evaluation, runtime decomposition, and module ablation analysis. The experiments comprise seven baseline air cargo loading algorithms, five large language models (LLMs), four representative aircraft types, and an industrial cargo dataset containing 6.1 million loading records.

4.1. Basic setting

Testbed. We choose five representative LLMs to implement FastLoader: (1) DeepSeek-R1 [39] is a 671-billion-parameter transformer decoder trained on about two trillion multilingual tokens; (2) Llama-2-70B [40] is Meta's 70-billion-parameter open-source foundation model with an optimized transformer backbone for general text generation; (3) Phi-3 (3.8B) [41] is Microsoft's 3.8-billion-parameter compact model, tuned on curated and synthetic data to outperform larger peers on reasoning tasks; (4)
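As a concrete illustration of the feasibility labelling in Eq. (2), the sketch below maps each candidate solution in an iteration's search space to 1 (feasible) or 0 (infeasible) through a constraint checker. This is not the paper's implementation: the function names and the toy weight-limit rule are hypothetical stand-ins for the real constraint set.

```python
# Hypothetical sketch of Eq. (2): each candidate solution is a binary
# loading matrix (cargo x hold), labelled 1 if it passes the checker.
# The single weight-limit rule below stands in for the paper's full
# constraint set; all names are illustrative.

def weight_limit_ok(solution, max_items_per_hold=2):
    # Toy constraint: no cargo hold (column) may carry too many items.
    return all(sum(col) <= max_items_per_hold for col in zip(*solution))

def label_search_space(search_space, is_feasible):
    """Return the 0/1 labels of Eq. (2) for every candidate S_{i,j}."""
    return [[1 if is_feasible(sol) else 0 for sol in row]
            for row in search_space]

# A 1x2 search space holding two toy 2x2 loading matrices.
space = [[[[1, 0], [1, 0]],      # both items packed into hold 0
          [[1, 0], [0, 1]]]]     # items spread over two holds
labels = label_search_space(space, lambda s: weight_limit_ok(s, 1))
print(labels)  # [[0, 1]]
```

In this toy run only the second candidate respects the one-item-per-hold limit, so it alone receives the label 1.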
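The evolutionary update described in Section 3.3, single-point mutation followed by roulette wheel selection with p_i = cg_i / Σ_j cg_j, can be sketched as below. This is a generic genetic-algorithm sketch, not the authors' code; the population, fitness values, and helper names are illustrative.

```python
import random

def single_point_mutation(genes, rng):
    """Flip one randomly selected bit to inject a new hold position."""
    child = list(genes)
    point = rng.randrange(len(child))
    child[point] ^= 1
    return child

def roulette_wheel_select(population, cg, rng):
    """Draw one individual with probability p_i = cg_i / sum_j cg_j."""
    total = sum(cg)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for individual, fitness in zip(population, cg):
        acc += fitness
        if r <= acc:
            return individual
    return population[-1]  # guard against floating-point round-off

rng = random.Random(0)
pop = [[0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]
cg = [0.9, 0.3, 0.6]          # hypothetical centre-of-gravity fitness
parent = roulette_wheel_select(pop, cg, rng)
child = single_point_mutation(parent, rng)
print(parent, "->", child)
```

Selection is biased toward high-cg candidates but never deterministic, which is why, as the text notes, infeasible solutions keep accumulating and convergence takes thousands of iterations.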

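The reduction step of Eqs. (3)-(4) can be sketched as a filter over the scoring matrix: each (solution, cg) pair is passed to a feasibility classifier, and only accepted candidates survive into the shrunken space of length n << N. The LLM call is stubbed here with a plain threshold because the actual prompt and model interface are not given in the text; every name is an assumption.

```python
# Hedged sketch of the search space reducer (Eqs. (3)-(4)).
# score_matrix pairs each candidate with its centre-of-gravity
# fitness cg, mirroring S_score in Eq. (3).

def reduce_search_space(score_matrix, classify_feasible):
    """Keep only candidates the classifier accepts (Eq. (4))."""
    return [sol for sol, cg in score_matrix if classify_feasible(sol, cg)]

def llm_stub(solution, cg, threshold=0.5):
    # Placeholder for the LLM verdict: a real implementation would
    # serialize the candidate plan into a prompt and parse the
    # model's feasible/infeasible answer.
    return cg >= threshold

score_matrix = [("plan-A", 0.91), ("plan-B", 0.12),
                ("plan-C", 0.77), ("plan-D", 0.40)]
reduced = reduce_search_space(score_matrix, llm_stub)
print(reduced)  # ['plan-A', 'plan-C']  -> n = 2 out of N = 4
```

The reduced list then plays the role of S_update, re-entering the solver for the next heuristic iteration.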
