In the emerging Post-PC era, more and more computers 'in the net' can see, hear, or feel. Because these computers are networked, they can cooperate in interpreting their 'sensations'. Cameras, camcorders, and similar devices will soon be wirelessly connected, doubling as mobile phones; in other words, multimedia goes ubiquitous. At the same time, users draw on the wealth of text-based information available on the global Internet. However, the potential of this 'cooperative sensation' and of global textual information remains largely untapped: the past, present, and future grand challenge is to enable computers to 'make more sense' of all this information. The talk will present a unified model for both multimedia sense-making and textual-information sense-making, and propose fostering the confluence of these two threads. Based on this unified view, it will suggest steps towards improved sense-making in the world of ubiquitous computers.