The US military currently deploys more than 4,000 ground robots in Iraq. Most are used for bomb disposal, but some are armed. In addition to the ground robots, semi-autonomous unmanned air vehicles come equipped with Hellfire missiles. For now, all of these systems keep a human in the loop for the application of lethal force, but this is set to change: in the near- to mid-term future, the military may allow autonomous unmanned systems to make their own lethality decisions. This article probes published US military plans and raises questions about the application of AI to discriminate between innocents and combatants in modern warfare. It identifies the main ethical issues in terms of the laws of war and discusses the responsibilities of AI researchers embarking on military projects.