Security technology is hard to create and configure, and hard to use after deployment. Yet the human mind is a component in both the creation and the use of security. While we technologists have spent the last 40 years building fancier machines, psychologists have spent those decades documenting the ways in which human minds systematically (and predictably) misperceive things. To what extent might cognitive bias affect the usable security problem?