Abstract:
In smart warehouses that use automated guided vehicles (AGVs) for goods transportation, task allocation has a great impact on operational efficiency. Currently, warehouse task allocation is typically modeled as a pickup and delivery problem (PDP), which requires vehicles to start from and return to the same depot, forming several closed-loop routes. In high-throughput warehouses, this approach increases unloaded vehicle travel distance and wastes resources. We therefore re-model task allocation as an open-loop routing problem with heterogeneous starting points, which we name the capacitated multiagent open pickup and delivery problem (CMOPDP); it has a more complex solution space and constraints than the PDP. The solving speed of existing heuristic methods cannot meet the real-time processing demands of large-scale warehouses, and deep reinforcement learning (DRL)-based methods typically enforce constraints through the output mask of the decoder, which leads to unsatisfactory solution quality under complex constraints. To address these limitations, we design a DRL-based model with an encoder-decoder architecture to solve the CMOPDP. Specifically, first, an encoder with heterogeneous attention is designed to fully explore constraint relationships between nodes. Second, we utilize dual decoders with information sharing to maximize the matching between vehicles and customer nodes. Finally, entropy rewards are introduced to enhance exploration during reinforcement learning, preventing the model from getting stuck in local optima. Extensive experiments on random datasets and various warehouse maps demonstrate that our method improves solution quality by at least 1.76% over baselines, while maintaining competitive solving time and exhibiting good generalization performance.
Published in: IEEE Internet of Things Journal (Early Access)