Asimov's three laws of robotics have been inculcated so successfully into our culture that they now appear to shape expectations of how robots should act around humans. However, there has been little serious discussion of whether the laws really do provide a workable framework for human-robot interaction. Asimov himself used the laws as a literary device to explore the lack of resilience in the interplay between people and robots across a range of situations. This paper briefly reviews the practical shortcomings of each of Asimov's laws for framing the relationships between people and robots, including reminders about what robots cannot do. The main focus of the paper is to propose an alternative, parallel set of laws of responsible robotics as a means to stimulate debate about the accountability relationships for robots whose actions can result in harm to people or human interests. The alternative laws emphasize (1) system safety in terms of the responsibilities of those who develop and deploy robotic systems, (2) robots' responsiveness as they participate in dynamic social and cognitive relationships, and (3) smooth transfer of control as a robot encounters and initially responds to disruptions, impasses, or opportunities in context.