Lab for Learning and Planning in Robotics

@ Interdisciplinary Science and Engineering Complex (ISEC), Northeastern University

As AI and robotics become more prevalent, autonomous systems are appearing in all aspects of life. Real-world autonomous systems must cope with noisy and limited sensors, termed partial observability, as well as with other agents acting in the same environment (e.g., other robots or autonomous cars), termed multi-agent systems. We work on planning and reinforcement learning methods for these realistic partially observable and/or multi-agent settings. The resulting methods allow agents to reason, coordinate, and learn to act even in settings with limited sensing and communication.
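As a concrete illustration of partial observability, the sketch below is a toy version of the classic tiger problem, not lab code: the agent never sees the hidden state directly, it maintains a belief from noisy observations and only acts once that belief is confident enough. The state names, observation accuracy, and confidence threshold are illustrative assumptions.

    import random

    # Toy POMDP sketch: the true state is hidden and the agent only
    # receives noisy observations of it.

    STATES = ["tiger-left", "tiger-right"]

    def observe(state, accuracy=0.85):
        """Return a noisy observation of the hidden state."""
        if random.random() < accuracy:
            return state
        return next(s for s in STATES if s != state)

    def update_belief(belief, obs, accuracy=0.85):
        """Bayesian belief update over the hidden state after listening."""
        new_belief = {}
        for s in STATES:
            likelihood = accuracy if obs == s else 1.0 - accuracy
            new_belief[s] = likelihood * belief[s]
        total = sum(new_belief.values())
        return {s: p / total for s, p in new_belief.items()}

    def run_episode(threshold=0.9):
        """Listen until the belief is confident, then open the safer door."""
        state = random.choice(STATES)
        belief = {s: 1.0 / len(STATES) for s in STATES}
        while max(belief.values()) < threshold:
            belief = update_belief(belief, observe(state))
        guess = max(belief, key=belief.get)  # most likely tiger location
        action = "open-right" if guess == "tiger-left" else "open-left"
        return action, state, belief

    if __name__ == "__main__":
        action, state, belief = run_episode()
        print(f"true state: {state}, belief: {belief}, action: {action}")

The point of the sketch is that the agent's decision is driven by its belief over hidden states rather than by direct state access, which is the core difficulty the lab's planning and learning methods address.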


Team

Christopher Amato - Professor

tags: multi-agent, robotics
sites: Personal, NEU

Roi Yehoshua - Postdoc

tags:
sites: NEU

Sammie Katt - PhD Student

tags: Bayesian-rl, model-based, MCTS
sites: NEU

Yuchen Xiao - PhD Student

tags: multi-robot, semi-Markov decision processes
sites: Personal, NEU

Andrea Baisero - PhD Student

tags: policy-gradient, state-representations
sites: Personal, NEU

David Slayback - PhD Student

tags:
sites: NEU

Shuo Jiang - PhD Student

tags:
sites: Personal

Piyush Shrivastava - MSc Student

tags:
sites:

Aditya Narayanaswamy - MSc Student

tags:
sites:

Joshua Hoffman - Undergraduate

tags:
sites:


Alumni

Tian Xia - Undergraduate

tags: robotics, navigation
sites:

Lu Xueguang - Undergraduate

tags:
sites:

Shengjian Chen - Undergraduate

tags: perception, object-recognition
sites:

Brett Daley - Undergraduate

tags: model-free, deep-rl
sites:

Kevin Luo - Undergraduate

tags: SLAM
sites: