The motion picture "AI" leaves the AI profession squirming under the glare of misrepresentation. It's no fun, especially when one's field suffers from recurring waves of innovation, hype, and backlash. The problem is that the film "AI" reinforces the dream of the android just when many who work toward "truly" intelligent technologies are cutting loose from the dream's more surrealistic aspects. People are asking new questions: Is the Turing test really the right kind of standard? If not, what is better? Must we define intelligence in reference to humans? Must intelligent technology be boxes chock-full of this thing we call intelligence, or should it operate as a "cognitive prosthesis" that amplifies or extends human perceptual, cognitive, and collaborative capabilities? Must intelligence always reside in some individual thing - either a headbone or a box - or is it a system property, definable only in terms of the triple of humans, machines, and contexts?