I am a first-year PhD student advised by Prof. Kostas Bekris. I am interested in robotic intelligence for solving arbitrary multi-step tasks in the real world. To this end, I focus on task and motion planning (TAMP), integrating natural-language task descriptions, and reinforcement learning (RL).
Joe H. Doerr – Google Scholar
Publications:
2025
Marougkas, I.; Ramesh, D.; Doerr, J.; Granados, E.; Sivaramakrishnan, A.; Boularias, A.; Bekris, K. "Integrating Model-based Control and RL for Sim2Real Transfer of Tight Insertion Policies." IEEE International Conference on Robotics and Automation (ICRA), 2025.
Abstract: Object insertion under tight tolerances (<1 mm) is an important but challenging assembly task, as even slight errors can result in undesirable contacts. Recent efforts have focused on using Reinforcement Learning (RL) and often depend on the careful definition of dense reward functions. This work proposes an effective strategy for such tasks that integrates traditional model-based control with RL to achieve improved accuracy, given training of the policy exclusively in simulation and zero-shot transfer to the real system. It employs a potential-field-based controller to acquire a model-based policy for inserting a plug into a socket, given full observability in simulation. This policy is then integrated with a residual RL one, which is trained in simulation given only a sparse, goal-reaching reward. A curriculum scheme over observation noise and action magnitude is used for training the residual RL policy. Both policy components use as input the SE(3) poses of both the plug and the socket and return the plug's SE(3) pose transform, which is executed by a robotic arm using a controller. The integrated policy is deployed on the real system without further training or fine-tuning, given a visual SE(3) object tracker. The proposed solution and alternatives are evaluated across a variety of objects and conditions in simulation and reality. The proposed approach outperforms recent RL methods in this domain and prior efforts for hybrid policies. Ablations highlight the impact of each component of the approach.
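The hybrid structure described in the abstract, a model-based base action combined with a learned residual over object poses, can be sketched compactly. The Python below is a hypothetical illustration under simplifying assumptions, not the authors' implementation: potential_field_action, ResidualPolicy, hybrid_step, and the reduction of SE(3) poses to 3D translations are all invented here for readability.

```python
# Hypothetical sketch of a hybrid (model-based + residual RL) insertion policy.
# Poses are simplified to 3D translations; the paper operates on full SE(3) poses.
import numpy as np

def potential_field_action(plug_pos, socket_pos, gain=0.5, max_step=0.01):
    """Model-based base action: attract the plug toward the socket."""
    step = gain * (socket_pos - plug_pos)
    norm = np.linalg.norm(step)
    if norm > max_step:                      # cap per-step motion magnitude
        step *= max_step / norm
    return step

class ResidualPolicy:
    """Stand-in for the residual RL policy trained with a sparse reward."""
    def __init__(self, action_scale=0.002):
        self.action_scale = action_scale     # a curriculum could anneal this

    def __call__(self, obs):
        # Placeholder: a trained network would map obs -> a small correction.
        return self.action_scale * np.tanh(np.zeros(3))

def hybrid_step(plug_pos, socket_pos, residual_policy, obs_noise_std=0.0):
    """One control step: noisy observation -> base action + learned residual."""
    obs = np.concatenate([plug_pos, socket_pos])
    obs += np.random.normal(0.0, obs_noise_std, obs.shape)  # curriculum noise
    base = potential_field_action(obs[:3], obs[3:])
    residual = residual_policy(obs)
    return base + residual                   # executed by the arm's controller

plug, socket = np.array([0.0, 0.0, 0.1]), np.zeros(3)
print(hybrid_step(plug, socket, ResidualPolicy(), obs_noise_std=1e-3))
```

Because the residual only corrects an already competent base controller, a sparse goal-reaching reward can suffice for training, which is the intuition the paper builds on.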
2024
Bekris, K.; Doerr, J.; Meng, P.; Tangirala, S. "The State of Robot Motion Generation." International Symposium of Robotics Research (ISRR), Long Beach, California, 2024. https://arxiv.org/abs/2410.12172 https://pracsys.cs.rutgers.edu/papers/the-state-of-robot-motion-generation/
Abstract: This paper first reviews the large spectrum of methods for generating robot motion proposed over the 50 years of robotics research, culminating in recent developments. It crosses the boundaries of methodologies, which are typically not surveyed together, from those that operate over explicit models to those that learn implicit ones. The paper concludes with a discussion of the current state of the art and the properties of the varying methodologies, highlighting opportunities for integration.