Abstract:
Service robots across industrial, medical, and domestic environments face significant challenges in flattening garments, due to their nearly infinite degrees of freedom and complex dynamic properties. Traditional quasi-static methods for handling fabrics require extensive interactions and are inefficient. While high-velocity dynamic actions enable efficient unfolding, they cannot perform fine-grained adjustments. To address this, we propose a self-supervised learning framework that combines dynamic (fling) and static (pick-and-place and pick-and-drag) manipulation primitives on a dual-arm robot. This method unfolds crumpled garments efficiently without expert demonstrations or hand-labeled data. We introduce a novel factorized reward function based on garment coverage, shape matching, and smoothness. In addition, we employ a spatial action maps network, built on a Vision Transformer, to accurately determine grasping points. Our approach significantly reduces the number of interaction steps and improves fabric coverage. We conducted simulations and real-world experiments on 14 garments across six categories. Our approach achieved over 80% coverage within just two actions on long-sleeved garments, surpassing the state of the art by about 9%, and reached 88%-92.9% coverage across all categories within five actions. We successfully demonstrated zero-shot sim-to-real transfer on UR5 robots, confirming the model's effectiveness on various fabrics.
Published in: IEEE/ASME Transactions on Mechatronics (Early Access)
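
To make the factorized reward described in the abstract concrete, the sketch below shows one plausible decomposition into coverage, shape-matching, and smoothness terms. The specific formulas (area ratio, mask IoU, depth-variance penalty) and the weights are assumptions for illustration only; the paper's exact definitions may differ.

```python
import numpy as np

def factorized_reward(mask, canonical_mask, depth,
                      w_cov=1.0, w_shape=1.0, w_smooth=1.0):
    """Illustrative factorized reward for garment flattening.

    mask           : binary array, current top-down garment segmentation
    canonical_mask : binary array, mask of the fully flattened garment
    depth          : float array, top-down depth image (same shape as mask)

    All three terms and their weights are assumptions, not the paper's
    exact formulation.
    """
    mask = mask.astype(bool)
    canonical_mask = canonical_mask.astype(bool)

    # Coverage: visible garment area relative to the fully flattened area.
    coverage = mask.sum() / max(canonical_mask.sum(), 1)

    # Shape matching: IoU between the current mask and the flattened template.
    inter = np.logical_and(mask, canonical_mask).sum()
    union = np.logical_or(mask, canonical_mask).sum()
    shape_match = inter / max(union, 1)

    # Smoothness: penalize depth variation (wrinkles) inside the garment region.
    garment_depth = depth[mask]
    if garment_depth.size == 0:
        smoothness = 0.0
    else:
        smoothness = 1.0 - np.clip(garment_depth.std(), 0.0, 1.0)

    return w_cov * coverage + w_shape * shape_match + w_smooth * smoothness
```

A factorized form like this lets each manipulation primitive be credited separately, e.g., fling actions mainly improve coverage while pick-and-drag actions mainly improve shape matching and smoothness.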