Huge Data Mining Based on Rough Set Theory and Granular Computing


2 Author(s)

Data mining is an active research field that has been studied by many scientists and engineers for years. Nevertheless, mining huge data sets efficiently remains a difficult problem, and many researchers are working on fast data mining technologies and methods for processing huge data sets. The basic idea of quicksort is divide and conquer, which embodies the idea of granular computing (GrC). The average time complexity of quicksort on a table of n records with m dimensions has usually been taken to be O(m · n · log n), since the average time complexity of quicksort on a one-dimensional array of n records is O(n · log n). However, we find that it is actually O(n · (m + log n)), not O(m · n · log n). Based on this finding, we conjecture that the divide and conquer method can be used to improve existing knowledge reduction algorithms in rough set theory and granular computing, and that it may be a good way to address the problem of huge data mining. In this paper, we present our research plan for huge data mining based on rough set theory and granular computing, together with our recent results.
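The intuition behind the n · (m + log n) bound can be sketched with a multi-key quicksort in the style of Bentley and Sedgewick (an illustrative sketch, not the authors' algorithm): partitioning on one field at a time means a record's remaining fields are only examined inside its "equal" partition, rather than all m fields being compared at every one of the log n levels.

```python
def multikey_quicksort(records, d=0):
    """Sort a list of equal-length tuples lexicographically by
    divide and conquer, one dimension (field) at a time."""
    if len(records) <= 1 or d >= len(records[0]):
        return records
    # Partition on field d only; later fields are deferred until
    # records already tie on this field.
    pivot = records[len(records) // 2][d]
    less    = [r for r in records if r[d] < pivot]
    equal   = [r for r in records if r[d] == pivot]
    greater = [r for r in records if r[d] > pivot]
    # Ties on field d move on to field d + 1; the rest recurse on d.
    return (multikey_quicksort(less, d) +
            multikey_quicksort(equal, d + 1) +
            multikey_quicksort(greater, d))

table = [(2, 1), (1, 3), (2, 0), (1, 2)]
print(multikey_quicksort(table))  # → [(1, 2), (1, 3), (2, 0), (2, 1)]
```

Each single-field comparison is O(1), so the work splits into roughly n · log n partitioning comparisons plus at most n · m field advances across ties, matching the n · (m + log n) figure claimed in the abstract.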

Published in:

2008 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT '08), Volume 3

Date of Conference:

9-12 Dec. 2008