Yasemin Bekiroglu: Learning and multi-modal sensing for robotic grasping and manipulation

Abstract

Robots are envisioned as capable machines that can easily navigate and interact in a world built for humans. Yet looking around us, we see robots mainly confined to factories, performing repetitive tasks in environments engineered to circumvent their limitations. The central question of my research is how we can create robots capable of adapting so that they can co-inhabit our world. This means designing systems that function in unstructured, continuously changing environments with unlimited combinations of object shapes, sizes, appearances, and positions; that can understand, adapt to, and learn from humans; and, importantly, that can do so from small amounts of data. Specifically, my work focuses on grasping and manipulation, which are fundamental to enabling a robot to interact with humans in our environment, along with dexterity (e.g. to use objects and tools successfully) and high-level reasoning (e.g. to decide which object or tool to use). Despite decades of research, robust autonomous grasping and manipulation approaching human skill remains an elusive goal. One main difficulty lies in dealing with the inevitable uncertainties in how a robot perceives the world: our environment is dynamic and has a complex structure, and sensory measurements are noisy and carry a large degree of uncertainty, which makes failures hard to avoid. In my research I have developed methodologies that enable a robot to interact with natural objects and learn about object properties and the relations between tasks and sensory streams, as well as tools that allow a robot to use multiple streams of sensory data in a complementary fashion. In this talk I will specifically address how a robot can use vision and touch to answer grasp-related questions, e.g. estimating unknown object properties such as shape, estimating grasp stability before manipulating an object (which can be used to trigger plan corrections), and grasp adaptation/correction.

Date
Mar 10, 2020, 2:00 PM
Event
SML Seminar
Location
AI Centre, UCL
James Wilson
PhD Student