In the real world, a robot may face unanticipated situations such as damaged motors, unfamiliar terrain, or sensor faults. Instead of aborting its mission when such situations occur, a robot could use reinforcement learning to adapt and continue. However, current reinforcement learning algorithms require prohibitively long interaction times (from several hours to days) for a complex physical robot to learn a new skill. In this talk, we present how to accelerate the learning process on a real physical robot by leveraging prior knowledge derived from a simulator of the robot. More precisely, we combine model-based robot learning with simulator-derived priors to allow relatively complex robots, such as a hexapod, to adapt to broken legs and sensor faults within a minute of interaction and accomplish their task.
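One common way to exploit a simulator prior in this setting (a sketch, not the exact method of the talk) is to model the real robot's performance with a Gaussian process whose prior mean is the simulator's prediction: the GP then only has to learn the *deviation* between simulation and reality from a handful of real-world trials. The `simulator_prior` function and the 1-D behavior space below are hypothetical placeholders for illustration.

```python
import numpy as np

def simulator_prior(x):
    # Hypothetical stand-in: predicted performance of behavior x in simulation.
    return np.sin(x)

def rbf(a, b, length=0.5):
    # Squared-exponential kernel between two arrays of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_query, x_obs, y_obs, noise=1e-3):
    """GP posterior mean using the simulator as the prior mean function.

    The GP corrects the simulator's predictions from the few observations
    gathered on the real (possibly damaged) robot, so only the sim-to-real
    residual has to be learned, not the whole performance landscape.
    """
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    k_star = rbf(x_query, x_obs)
    residual = y_obs - simulator_prior(x_obs)  # real-world deviation from sim
    return simulator_prior(x_query) + k_star @ np.linalg.solve(K, residual)

# Toy "damaged robot": real performance is the simulated one shifted down.
x_obs = np.array([0.0, 1.0, 2.0])
y_obs = simulator_prior(x_obs) - 0.5
pred = gp_posterior_mean(np.array([1.0]), x_obs, y_obs)
```

After only three real trials, the posterior mean at an observed behavior already tracks the damaged robot's true performance rather than the simulator's optimistic estimate, which is what makes adaptation in about a minute plausible.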