
A parallel architecture for meaning comparison

Authors (5): Mohan, S.; Biswas, A.; Tripathy, A.; Pannigrahy, J.; et al. (Dept. of Comput. Sci. & Eng., Texas A&M Univ., College Station, TX, USA)

In this paper, we present a fine-grained parallel architecture that performs meaning comparison using vector cosine similarity (dot product). Meaning comparison assigns a similarity value to two objects (e.g., text documents) based on how similar their meanings, represented as two vectors, are to each other. The novelty of our design is its fine-grained parallelism, which is not exploited in available hardware-based dot product processor designs and cannot be achieved in traditional server-class processors such as the Intel Xeon. We compare the performance of our design against that of available hardware-based dot product processors, as well as a server-class processor running optimized software code performing the same computation. We show that, for 1024 basis vectors, our hardware design achieves a speedup of 62,000 times over an available hardware design, and a speedup of 8,866 times with 33% (1.5 times) lower power consumption compared to software code running on an Intel Xeon processor. Our design can significantly reduce the number of servers required for similarity comparison in a distributed search engine, thereby reducing energy consumption, capital investment, operational costs, and floor area in search engine data centers. The design can also be deployed in other applications that require fast dot product computation.
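For reference, the following is a minimal software sketch of the similarity computation the abstract describes (cosine similarity built from dot products). The vectors, values, and function names are illustrative assumptions only; this sequential code shows the computation being accelerated, not the paper's parallel hardware architecture.

# Cosine similarity between two document "meaning" vectors of dimension N
# (e.g., N = 1024 basis vectors, as in the paper's experiments). The hardware
# design parallelizes the multiply-accumulate steps of the dot product; this
# reference version is purely sequential and only illustrates the arithmetic.

import math

def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Similarity of two meaning vectors; in [-1, 1] for nonzero inputs."""
    denom = math.sqrt(dot(a, a)) * math.sqrt(dot(b, b))
    return dot(a, b) / denom if denom else 0.0

# Hypothetical example: two documents represented over 4 basis dimensions.
doc_a = [0.4, 0.1, 0.0, 0.9]
doc_b = [0.5, 0.0, 0.2, 0.8]
print(cosine_similarity(doc_a, doc_b))  # value near 1.0 indicates similar meanings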

Published in:

2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS)

Date of Conference:

19-23 April 2010
