Many applications require knowledge about how to deceive, including those related to safety, security, and warfare. Speech and text analysis can help detect deception, as can cameras, microphones, physiological sensors, and intelligent software. Models of deception and noncooperation can make a virtual or mixed-reality training environment more realistic and immersive, and thus better suited to training military or security personnel. Robots might also need to operate in physical, nontraining environments where they must perform military activities, including misleading the enemy.

The contributions to this installment of Trends & Controversies present state-of-the-art research approaches to the analysis and generation of noncooperative and deceptive behavior in virtual humans, agents, and robots; the analysis of multiparty interaction in the context of deceptive behavior; and methods to detect misleading information in texts and computer-mediated communication. Articles include: "Computational Deception and Noncooperation," by Anton Nijholt; "Robots that Need to Mislead: Biologically-Inspired Machine Deception," by Ronald C. Arkin; "Deception in Sports Using Immersive Environments," by Sébastien Brault, Richard Kulpa, Franck Multon, and Benoit Bideau; "Non-Cooperative and Deceptive Virtual Agents," by David Traum; "Deception Detection in Multiparty Contexts," by Hayley Hung; "Deception Detection, Human Reasoning, and Deception Intent," by Eugene Santos Jr., Deqing Li, and Fei Yu; and "Automatic Deception Detection in Computer-Mediated Communication," by Lina Zhou and Dongsong Zhang.