Robots should be able to act as flexibly as possible in different environments and contexts of action. To realise this goal, learning algorithms are being developed that enable learning modelled on nature's example. If a machine that learns in this way causes damage, the question arises as to who is responsible for it. A grey area between manufacturer and owner responsibility may emerge. Starting from criteria of replaceability, a concrete suggestion is made as to how this grey area could be handled.