Our research is interdisciplinary and focuses on sensing, planning, reasoning, and control of single- and multi-agent systems, including both manipulation and mobile robots. We strive to develop algorithms and methods for factory automation, smart-building, and transportation applications using machine learning, computer vision, RF/optical sensing, wireless communications, control theory, and signal processing. Key research themes include bin picking and object manipulation, sensing and mapping of indoor areas, coordinated control of robot swarms, and robot learning and simulation.
Where: 3rd IAVSD Workshop on Dynamics of Road Vehicles: Connected and Automated Vehicles
MERL Contact: Stefano Di Cairano
Research Areas: Control, Optimization, Robotics
Date: April 28, 2019
- Stefano Di Cairano, Distinguished Scientist and Senior Team Leader in the Control and Dynamical Systems Group, will give an invited talk, "Modularity, integration and synergy in architectures for autonomous driving," covering recent work in the lab on building a modular, robust control framework for autonomous driving.
Where: CEATEC'18, Makuhari Messe, Tokyo
MERL Contacts: Devesh Jha; Daniel Nikovski; Diego Romeres; Alan Sullivan; Jeroen van Baar; William Yerazunis
Research Areas: Artificial Intelligence, Computer Vision, Data Analytics, Robotics
Date: October 15, 2018 - October 19, 2018
- MERL's work on robot learning algorithms was demonstrated at CEATEC'18, Japan's largest IT and electronics exhibition and conference, held annually at Makuhari Messe near Tokyo. A team of researchers from the Data Analytics Group at MERL and the Artificial Intelligence Department of the Information Technology Center (ITC) of MELCO presented an interactive demonstration of a model-based artificial intelligence algorithm that learns to control equipment autonomously. The algorithm, developed at MERL, constructs models of mechanical equipment through repeated trial and error, and then learns control policies based on those models. The demonstration used a circular maze in which the objective is to drive a ball to the center by tipping and tilting the maze, a task that is difficult even for humans: roughly half of the CEATEC'18 visitors who tried to steer the ball with a joystick could not bring it to the center within one minute. In contrast, MERL's algorithm learned to drive the ball to the goal within ten seconds, without any human programming. The demo, placed at the entrance of MELCO's booth, invited visitors to learn more about MELCO's many other AI technologies on display, and was seen by an estimated 50,000-plus visitors over the five days of the expo.
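The two-stage loop described above (learn a dynamics model from trial-and-error data, then derive a control policy from the learned model) can be sketched in a few lines. This is a minimal, hypothetical illustration on a toy 1-D plant, not MERL's actual maze system or algorithm: the plant, its coefficients, and the goal are all assumptions made up for the example.

```python
import random

# Hypothetical 1-D plant with unknown linear dynamics x' = 0.8*x + 0.5*u.
# The learner never sees these coefficients, only sampled transitions.
def plant(x, u):
    return 0.8 * x + 0.5 * u

# 1) Trial and error: collect (state, action, next state) transitions
#    by applying random actions to the plant.
random.seed(0)
data = []
x = random.uniform(-1, 1)
for _ in range(200):
    u = random.uniform(-1, 1)
    x_next = plant(x, u)
    data.append((x, u, x_next))
    x = x_next

# 2) Model learning: fit x' ~ a*x + b*u by least squares
#    (solving the 2x2 normal equations directly).
Sxx = sum(s * s for s, act, y in data)
Sxu = sum(s * act for s, act, y in data)
Suu = sum(act * act for s, act, y in data)
Sxy = sum(s * y for s, act, y in data)
Suy = sum(act * y for s, act, y in data)
det = Sxx * Suu - Sxu * Sxu
a = (Sxy * Suu - Suy * Sxu) / det
b = (Sxx * Suy - Sxu * Sxy) / det

# 3) Model-based policy: pick the action whose *predicted* next state
#    is closest to the goal (here, the origin) over a discrete action grid.
ACTIONS = [i / 10 for i in range(-10, 11)]
def policy(state):
    return min(ACTIONS, key=lambda u: abs(a * state + b * u))

# Closed loop: the policy learned from the model drives the real plant
# toward the goal without any hand-written controller.
x = 2.0
for _ in range(20):
    x = plant(x, policy(x))
print(round(abs(x), 3))  # final distance to the goal
```

The same structure scales up in the real setting: the linear fit is replaced by a richer model (e.g., a Gaussian process, as in the publications listed below), and the one-step greedy action choice by a planner that optimizes over the model.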
See All News & Events for Robotics
CD1257: Autonomous Vehicles Planning and Control
The Control and Dynamical Systems (CD) group at MERL is seeking highly motivated interns at varying expertise levels to conduct research on planning and control for autonomous vehicles. The research domain includes algorithms for path planning, vehicle control, high-level decision making, sensor-based navigation, and driver-vehicle interaction. PhD students will be considered for algorithm development, analysis, and proving of properties. Master's students will be considered for development and implementation on a scaled robotic test bench for autonomous vehicles. For algorithm development and analysis, a strong background in one or more of the following is highly desirable: sampling-based planning methods, particle filtering, model predictive control, reachability methods, formal methods and abstractions of dynamical systems, along with experience implementing them in Matlab, Python, or C++. For algorithm implementation, working knowledge of Matlab, C++, and ROS is required, and a background in some of the above-mentioned methods is a plus. The expected duration of the internship is 3-6 months, with a flexible start date.
DA1344: Learning from Demonstration (LfD) for Robotics
MERL is looking for a highly motivated intern to work on developing algorithms for robot learning using learning from demonstration, imitation learning, and/or deep reinforcement learning. The successful candidate will collaborate with MERL researchers to design, analyze, and implement new algorithms, conduct experiments, and prepare results for publication. The candidate should have a strong background in (deep) reinforcement learning, imitation learning (or learning from demonstration, LfD), machine learning, and robotics. Prior experience working with robotic systems is required. The candidate should be comfortable implementing the developed algorithms in Python and should have prior experience working with ROS. Prior exposure to deep learning and hands-on experience with packages such as PyTorch and/or TensorFlow is expected. The candidate is expected to be a PhD student in Computer Science, Electrical Engineering, Operations Research, Statistics, Applied Mathematics, or a related field, with a relevant publication record. The expected duration of the internship is at least 3 months, with the position expected to be available starting late August or early September. Interested candidates are encouraged to apply with a recent CV, a list of related publications, and links to GitHub repositories (if any).
See All Internships for Robotics
- "Learning Heuristic Functions for Mobile Robot Path Planning Using Deep Neural Networks", International Conference on Automated Planning and Scheduling (ICAPS), July 2019. ,
- "Motion Planning of Autonomous Road Vehicles by Particle Filtering: Implementation and Validation", American Control Conference (ACC), July 2019. ,
- "Anomaly Detection for Insertion Tasks in Robotic Assembly Using Gaussian Process Models", European Control Conference (ECC), June 2019. ,
- "Semiparametrical Gaussian Processes Learning of Forward Dynamical Models for Navigating in a Circular Maze", IEEE International Conference on Robotics and Automation (ICRA), May 2019. ,
- "Learning Tasks in a Complex Circular Maze Environment", Modeling the Physical World: Perception, Learning, and Control, NIPS Workshop, December 2018. ,
- "Trajectory-based Learning for Ball-in-Maze Games", Imitation Learning and its Challenges in Robotics - NIPS, December 2018. ,
- "Derivative-Free Semiparametric Bayesian Models for Robot Learning", Advances in Neural Information Processing Systems (NIPS), December 2018. ,
- "Learning to Regulate Rolling Ball Motion", IEEE Symposium on Computational Intelligence in Engineering Solutions, DOI: 10.1109/SSCI.2017.8285376, November 2017. ,