AMDT Lab AI + Electric Motors for Robotics

Vision-Language-Action-Model-Based Compliant Control of Robotic Arms

The objectives of this project are as follows:

(1) Dynamic Inconsistency Between VLA-Guided Semantic Trajectories and Low-Level Physical Execution

For complex manipulation tasks, manipulator trajectory planning must not only satisfy end-effector precision requirements but also strictly adhere to constraints of dynamic feasibility and motion compliance. This study investigates the integration of trajectory optimization and physical constraint modeling within the Vision-Language-Action (VLA) framework to generate continuous, executable trajectories for embodied-intelligence tasks. The core of this problem lies in establishing a unified predictive optimization model that couples trajectory smoothness with physical constraints, thereby providing implementable reference trajectories for low-level compliant control.
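As a minimal sketch of coupling smoothness with physical constraints, the toy routine below smooths a waypoint sequence from a high-level planner by penalizing finite-difference acceleration while projecting each intermediate point back into joint position limits. The function name, limits, and single-joint setting are illustrative assumptions, not the project's actual formulation.

```python
def smooth_trajectory(waypoints, q_min, q_max, step=0.5, iters=200):
    """Reduce finite-difference acceleration of a 1-D joint trajectory.

    Endpoints stay fixed (they encode the task); every intermediate
    point is pulled toward the midpoint of its neighbors (which drives
    the acceleration q[i-1] - 2*q[i] + q[i+1] toward zero) and then
    clipped back into the joint limits [q_min, q_max].
    """
    q = list(waypoints)
    for _ in range(iters):
        for i in range(1, len(q) - 1):
            target = 0.5 * (q[i - 1] + q[i + 1])
            q[i] += step * (target - q[i])
            q[i] = min(max(q[i], q_min), q_max)  # joint-limit projection
    return q

# Example: a jerky waypoint list, as a VLA planner might emit.
raw = [0.0, 1.2, -0.4, 1.5, 0.8]
smoothed = smooth_trajectory(raw, q_min=-1.0, q_max=1.4)
```

In a full formulation this projected smoothing step would be replaced by a constrained optimization over positions, velocities, and torques, but the structure (smoothness cost plus feasibility projection) is the same.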

(2) Degradation of Control Performance Under Complex Disturbances and Dynamic Uncertainties

In real-world execution environments, factors such as friction, auxiliary payloads, joint flexibility, and communication/actuation delays exhibit time-varying, uncertain characteristics and may be strongly coupled with task objectives. Conventional controllers built on static models tend to fail under such conditions. This study focuses on designing a control paradigm that synergizes predictive and feedback mechanisms: the predictive model captures the system's time-varying properties in a timely manner within a limited computational budget, while the closed-loop framework suppresses unmodeled dynamics. This ensures stability, tracking accuracy, and compliant interaction performance even in the presence of substantial uncertainties.
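A minimal illustration of the predictive-plus-feedback split, under assumed numbers that are not the project's: a 1-DOF joint is driven by a model-based feedforward torque (the "predictive" part, using the nominal inertia) plus PD feedback (the "closed-loop" part), and the plant contains viscous friction that the nominal model does not know about.

```python
import math

def simulate(kp=100.0, kd=20.0, dt=0.001, steps=2000, inertia=1.0):
    """Track a sinusoidal joint reference; return the peak tracking error.

    tau = feedforward from the nominal model + PD feedback.
    The plant adds unmodeled viscous friction (coefficient 0.8),
    which only the feedback term can compensate.
    """
    q, dq, err_max = 0.0, 0.0, 0.0
    for k in range(steps):
        t = k * dt
        q_ref = 0.5 * math.sin(t)          # reference position
        dq_ref = 0.5 * math.cos(t)         # reference velocity
        ddq_ref = -0.5 * math.sin(t)       # reference acceleration
        tau = inertia * ddq_ref + kp * (q_ref - q) + kd * (dq_ref - dq)
        ddq = (tau - 0.8 * dq) / inertia   # unmodeled friction disturbance
        dq += ddq * dt
        q += dq * dt
        err_max = max(err_max, abs(q_ref - q))
    return err_max
```

With the default gains the peak error stays small despite the disturbance; weakening the feedback (e.g. `kp=5.0, kd=2.0`) lets the unmodeled friction dominate, which is the failure mode the objective above attributes to static model-based control.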

(3) Transfer Mismatch Between Simulation Training Strategies and Real-World System Deployment

The performance of algorithms trained in simulation often degrades significantly during physical deployment, owing to hardware discrepancies and environmental complexity. This study constructs a reinforcement learning fine-tuning mechanism that combines model priors with data-driven calibration. This mechanism enables the controller to rapidly adapt to diverse execution platforms and task scenarios using limited online interaction samples, while guaranteeing the safety and stability of the learning process. Ultimately, this improves the generalization and adaptive capabilities of embodied intelligence systems.
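The data-driven calibration half of this objective can be sketched very simply. The example below is a least-squares fit, not the project's RL fine-tuning method: from a handful of hypothetical hardware samples it estimates a friction coefficient that simulation omitted, so the simulation-trained model prior can be corrected before (or during) online learning.

```python
def calibrate_friction(samples):
    """Fit residual_torque = c * velocity by ordinary least squares.

    samples: (velocity, residual_torque) pairs, where the residual is
    the gap between torque commanded from the simulation-trained model
    and what the real joint actually required.
    """
    num = sum(v * r for v, r in samples)
    den = sum(v * v for v, _ in samples)
    return num / den if den > 0 else 0.0

# A few noisy observations, as might be logged during brief online
# interaction with the real platform (values are made up).
data = [(0.1, 0.081), (0.3, 0.24), (-0.2, -0.16), (0.5, 0.41)]
c_hat = calibrate_friction(data)
```

The estimated coefficient then updates the model prior, shrinking the sim-to-real gap; the RL fine-tuning stage only has to learn the residual that such low-dimensional calibration cannot capture, which is what makes limited online samples sufficient.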
