
2014 11th International Joint Conference on Computer Science and Software Engineering (JCSSE)

Date: 14-16 May 2014


Displaying Results 1 - 25 of 63
  • [Front cover]

    Page(s): c1
  • Table of contents

    Page(s): xxx - xxxv
  • The intruder detection system for rapid transit using CCTV surveillance based on histogram shapes

    Page(s): 1 - 6

    This paper presents an intruder detection system for rapid transit that uses CCTV surveillance and histogram shapes. The proposed algorithm monitors the yellow line on the ground next to the rapid transit railway to protect passengers from harmful train incidents, using a CCTV surveillance system and histogram shapes, a very convenient technique for image analysis. The histogram shapes of trespass and non-trespass scenes differ, so the difference can drive an alarm that warns passengers who cross the yellow line. The advantages of the histogram-shape method are flexibility in use and stability under lighting changes. The approach suits CCTV surveillance systems operating in observation mode for intruder detection, and the system works in both real-time and offline modes. Experimental results show a system error of less than 5%.

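    As a minimal illustration of the histogram-shape test described above, the sketch below compares the grey-level histogram of the yellow-line region against a reference histogram taken from a non-trespass frame. The chi-square-style distance, the threshold value, and the synthetic frames are illustrative assumptions, not values from the paper.

      # Hypothetical sketch of histogram-shape trespass detection (numpy only).
      import numpy as np

      def roi_histogram(gray_roi, bins=32):
          # Normalized grey-level histogram of the region beside the yellow line.
          hist, _ = np.histogram(gray_roi, bins=bins, range=(0, 255))
          return hist / max(hist.sum(), 1)

      def is_trespass(gray_roi, reference_hist, threshold=0.05):
          # Chi-square-style distance between histogram shapes; the threshold
          # is a placeholder, not a value reported in the paper.
          h = roi_histogram(gray_roi, bins=len(reference_hist))
          d = 0.5 * np.sum((h - reference_hist) ** 2 / (h + reference_hist + 1e-9))
          return d > threshold

      # Toy frames: an empty platform region vs. one with an intruding object.
      empty = np.random.default_rng(0).integers(100, 140, (60, 200))
      reference = roi_histogram(empty)
      occupied = empty.copy()
      occupied[10:50, 80:120] = 30            # dark object crossing the line
      print(is_trespass(empty, reference), is_trespass(occupied, reference))
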
  • An improved adaptive discriminant analysis for single sample face recognition

    Page(s): 7 - 11

    Face recognition is an automated process that identifies individuals by their facial characteristics. A known problem is that the process requires several examples of the person of interest's face to produce accurate results, and it is intolerant of variations in facial expression and in the lighting of the image to be identified. This motivated us to develop an algorithm that increases the accuracy of single-sample face recognition. When multiple samples are available, the best approach is Fisher Linear Discriminant Analysis (FLDA), which uses the samples to compute the within-class scatter matrix and yields accurate output. With only one sample, however, there is no intra-class variation, so the within-class scatter matrix cannot be computed. Adaptive Discriminant Learning (ADL) [1] addresses this by deducing the within-class scatter matrix from an auxiliary generic set containing multiple samples per person and then applying FLDA to recognize the face image. In this paper, we improve the method by preprocessing the input image with local illumination normalization, which makes facial features more salient and suppresses illumination variation, and by incorporating a part-based methodology to further increase the recognition rate. Tested on the FERET face database, the recognition rate improves from 77% to 93%.

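    The following is a rough sketch, under stated assumptions, of the ADL idea summarized above: discriminant structure is learned from a generic auxiliary set with multiple samples per person and then applied to single-sample gallery matching. Random arrays stand in for face images, scikit-learn's LDA stands in for FLDA, and dividing by a local mean is only one plausible reading of "local illumination normalization".

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def illumination_normalize(img, win=9):
          # Divide each pixel by its local mean to suppress slow lighting
          # changes; applied to each 2-D face image before flattening.
          return img / (uniform_filter(img.astype(float), size=win) + 1e-6)

      rng = np.random.default_rng(1)
      _ = illumination_normalize(rng.random((32, 32)))   # per-image preprocessing

      # Generic auxiliary set: 10 people x 6 samples (random stand-ins).
      aux_X = rng.normal(size=(60, 32 * 32))
      aux_y = np.repeat(np.arange(10), 6)
      lda = LinearDiscriminantAnalysis().fit(aux_X, aux_y)  # scatter from aux set

      gallery = rng.normal(size=(5, 32 * 32))   # one sample per enrolled person
      probe = gallery[3] + 0.05 * rng.normal(size=32 * 32)
      g, p = lda.transform(gallery), lda.transform(probe[None, :])
      print(np.argmin(np.linalg.norm(g - p, axis=1)))      # matches person 3
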
  • Saliency-weighted holistic scene text recognition for unseen place categorization

    Page(s): 12 - 17

    An improved framework for unseen place categorization using scene text is proposed. A category score calculation with visual-saliency weighting is proposed to cope with the varying importance of word locations in scene images, and HOG feature extraction with a sliding window is proposed to obtain better holistic word recognition in scene images. As a result, the proposed method outperforms the PHOG baseline in unseen place categorization, improving accuracy by more than 10%.

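    A hedged sketch of the two ingredients named in the abstract: HOG descriptors from a sliding window (via scikit-image) and a category score that weights each word's classifier score by the visual saliency of its location. Window size, step, HOG parameters, and the toy scores are assumptions.

      import numpy as np
      from skimage.feature import hog

      def sliding_hog(gray, win=(32, 64), step=16):
          # Yield (x, y, descriptor) for each window position.
          H, W = gray.shape
          for y in range(0, H - win[0] + 1, step):
              for x in range(0, W - win[1] + 1, step):
                  patch = gray[y:y + win[0], x:x + win[1]]
                  yield x, y, hog(patch, orientations=9, pixels_per_cell=(8, 8),
                                  cells_per_block=(2, 2))

      def category_score(word_scores, saliency_weights):
          # Saliency-weighted combination of per-word recognition scores.
          w = np.asarray(saliency_weights, float)
          return float(np.dot(word_scores, w) / (w.sum() + 1e-9))

      scene = np.random.default_rng(2).random((128, 256))
      descriptors = [d for _, _, d in sliding_hog(scene)]
      print(len(descriptors), descriptors[0].shape)
      print(category_score([0.9, 0.4, 0.7], [0.8, 0.1, 0.5]))
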
  • A multi-criteria item-based collaborative filtering framework

    Page(s): 18 - 22

    Collaborative filtering methods provide personalized recommendations that alleviate the information overload problem in many domains. Traditional collaborative filtering operates on a user-item matrix in which each user expresses a preference for an item on a single criterion. Recent studies, however, indicate that recommender systems based on multiple criteria can improve recommendation accuracy. Because multi-criteria rating-based collaborative filtering considers multiple aspects of each item, it is more successful at forming correlation-based user neighborhoods. Although the accuracy results of the proposed multi-criteria user-based collaborative filtering algorithms are very promising, they have online scalability issues. In this paper, we propose an item-based multi-criteria collaborative filtering framework. To determine an appropriate neighbor selection method, we compare traditional correlation approaches with multi-dimensional distance metrics, and we also investigate the accuracy of statistical regression-based predictions. Experiments on real data show that a multi-criteria item-based collaborative filtering algorithm can produce more accurate recommendations than a single-criterion rating-based algorithm.

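    A compact sketch of one way an item-based multi-criteria scheme can work; every design choice here (Euclidean distance over criterion vectors, k nearest items, random toy ratings) is an assumption rather than the paper's exact formulation.

      import numpy as np

      # ratings[u, i] is a vector of C criterion ratings (NaN where unrated).
      rng = np.random.default_rng(3)
      U, I, C = 20, 8, 4
      ratings = rng.integers(1, 6, (U, I, C)).astype(float)
      ratings[rng.random((U, I)) < 0.3] = np.nan      # sparsify

      def item_distance(i, j):
          # Mean Euclidean distance between criterion vectors of co-rating users.
          co = ~np.isnan(ratings[:, i, 0]) & ~np.isnan(ratings[:, j, 0])
          if not co.any():
              return np.inf
          return np.linalg.norm(ratings[co, i] - ratings[co, j], axis=1).mean()

      def predict(u, i, k=3):
          # Overall rating of item i for user u from its k nearest rated items.
          rated = [j for j in range(I) if j != i and not np.isnan(ratings[u, j, 0])]
          neighbours = sorted(rated, key=lambda j: item_distance(i, j))[:k]
          return float(np.mean([ratings[u, j].mean() for j in neighbours]))

      print(round(predict(0, 1), 2))
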
  • Search space reduction of particle swarm optimization for hierarchical similarity measurement model

    Page(s): 23 - 27

    Particle Swarm Optimization (PSO) is an optimization technique used to find solutions in many areas, not limited to engineering and mathematics. It can discover inputs to a program based on the similarity of program executions. However, identifying such solutions with standard PSO is not very efficient and, in a few cases, not possible: there is a high probability that particles become stuck in an area of local maxima, mainly because of excessive exploitation steps, and when a new exploration starts there is no guarantee that particles will no longer be generated from earlier explored areas. This paper presents a Search Space Reduction (SSR) algorithm applied to PSO for the Hierarchical Similarity Measurement (HSM) model of program execution. The algorithm uses a fitness function computed from the HSM model. SSR helps find solutions by eliminating areas where a solution is unlikely to be found, improving the optimization process by reducing excessive exploitation, and it can be applied to all variants of PSO. The experimental results show that PSO with SSR is the most effective of the techniques compared, increasing the effectiveness of finding a solution by 73%. For each program in the experiment, the SSR algorithm found all solutions with the smallest number of exploitations, and regardless of program complexity, PSO with SSR usually completed the search faster than both versions of PSO without SSR.

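    The sketch below shows one plausible mechanic for SSR on top of a standard PSO, useful when several solutions must be found: a box around an exhausted best region is blacklisted, and any particle landing inside a blacklisted box is re-seeded. The fitness function, ban radius, and schedule are placeholders, not the paper's HSM model.

      import numpy as np

      rng = np.random.default_rng(4)
      LO, HI, DIM, N = -10.0, 10.0, 2, 20
      fitness = lambda x: -np.sum((x - 3.0) ** 2)   # stand-in for HSM fitness
      banned = []                                   # list of (center, radius)

      def in_banned(x):
          return any(np.all(np.abs(x - c) < r) for c, r in banned)

      pos = rng.uniform(LO, HI, (N, DIM))
      vel = np.zeros((N, DIM))
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      for it in range(200):
          g = pbest[np.argmax(pbest_f)]
          vel = (0.7 * vel + 1.5 * rng.random((N, DIM)) * (pbest - pos)
                 + 1.5 * rng.random((N, DIM)) * (g - pos))
          pos = np.clip(pos + vel, LO, HI)
          for i in range(N):
              if in_banned(pos[i]):                 # SSR: eject and re-seed
                  pos[i], vel[i] = rng.uniform(LO, HI, DIM), 0.0
              f = fitness(pos[i])
              if f > pbest_f[i]:
                  pbest_f[i], pbest[i] = f, pos[i].copy()
          if it % 50 == 49:                         # ban the exploited region
              banned.append((g.copy(), 0.5))
      print(np.round(pbest[np.argmax(pbest_f)], 2))  # near the optimum (3, 3)
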
  • A social popularity aware scheduling algorithm for ad-hoc social networks

    Page(s): 28 - 33

    In an ad-hoc social network (ASNET), users normally want popular data packets scheduled first, but scarce bandwidth and unreliable wireless connections impose limitations. Traditional algorithms schedule packets in First In, First Out (FIFO) order, which is not suitable for ASNETs and does not work properly in congested environments. To overcome these problems, this paper introduces Pop-aware, a social-popularity-aware scheduling algorithm for ad-hoc social networks. Pop-aware calculates the traffic load of an intermediate node and assigns priority to each incoming flow using degree centrality, a social property. It also provides fairness in the service each flow receives through the concept of an active service rate. Experimental results show that Pop-aware performs better than existing schemes in terms of average throughput, packet loss rate, and average delay.

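    A toy sketch of the priority rule suggested by the abstract: packets are served in order of the source node's degree centrality, with a small correction for service already received so that low-popularity flows are not starved. The fairness factor and the tiny topology are invented for illustration.

      import heapq

      degree = {"A": 5, "B": 2, "C": 1}    # degree centrality per source node
      served = {"A": 0, "B": 0, "C": 0}    # bytes served so far per flow
      queue, seq = [], 0

      def enqueue(src, packet):
          # Higher degree -> served earlier; past service lowers priority a bit.
          global seq
          prio = -(degree[src] - 0.01 * served[src])
          heapq.heappush(queue, (prio, seq, src, packet))
          seq += 1

      def dequeue():
          _, _, src, packet = heapq.heappop(queue)
          served[src] += len(packet)
          return src, packet

      for s, p in [("C", b"c1"), ("A", b"a1"), ("B", b"b1"), ("A", b"a2")]:
          enqueue(s, p)
      while queue:
          print(dequeue()[0], end=" ")     # -> A A B C
      print()
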
  • Vehicle logo detection using convolutional neural network and pyramid of histogram of oriented gradients

    Page(s): 34 - 39

    This paper presents a new method for detecting and recognizing vehicle logos in images of the front and back views of vehicles. The proposed method is a two-stage scheme that combines a Convolutional Neural Network (CNN) with Pyramid of Histogram of Oriented Gradients (PHOG) features. The CNN is applied in the first stage for candidate region detection and recognition of the vehicle logos; PHOG features with a Support Vector Machine (SVM) classifier are then employed in the second stage to verify the results of the first stage. Experiments were performed on a dataset of vehicle images collected from the internet. The results show that the proposed method locates and recognizes vehicle logos accurately and more robustly than other conventional schemes, providing up to 100% recall, 96.96% precision, and a 99.99% recognition rate on a dataset of 20 vehicle logo classes.

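    The second (verification) stage lends itself to a short sketch: a Pyramid of HOG descriptor fed to an SVM that accepts or rejects candidate logo regions. The first-stage CNN is omitted, and the random images, pyramid depth, and HOG settings are placeholders.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import SVC

      def phog(gray, levels=2):
          # Concatenate HOG over a spatial pyramid: 1, 4 and 16 sub-blocks.
          feats, (H, W) = [], gray.shape
          for lv in range(levels + 1):
              n = 2 ** lv
              for r in range(n):
                  for c in range(n):
                      cell = gray[r*H//n:(r+1)*H//n, c*W//n:(c+1)*W//n]
                      feats.append(hog(cell, orientations=8,
                                       pixels_per_cell=(8, 8),
                                       cells_per_block=(1, 1)))
          return np.concatenate(feats)

      rng = np.random.default_rng(5)
      X = np.stack([phog(rng.random((64, 64))) for _ in range(40)])
      y = rng.integers(0, 2, 40)           # 1 = true logo, 0 = false candidate
      verifier = SVC(kernel="linear").fit(X, y)
      print(verifier.predict(X[:4]))       # verdicts for four candidates
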
  • Scalable resolution-based image coding algorithm

    Page(s): 40 - 45

    Multimedia data has become increasingly important with the growth of the Internet. Images are widely shared online, especially through social media applications, and nearly every website contains a considerable amount of image data. Some images are stored in multiple resolutions so they can be displayed as desired, but storing the same image at several resolutions is difficult to manage. This paper presents a resolution-scalable image coding algorithm consisting of intra-prediction and reversible cellular automata: intra-prediction reduces the amount of data to be compressed, while reversible cellular automata allow the image to be scaled up or down to the desired resolution. For performance evaluation, 8-bit grayscale images were coded with the proposed method and compared with JPEG2000 and RCAGIC using PSNR and SSIM statistics. The proposed method provides more promising results than both RCAGIC and JPEG2000.

  • Multi-camera based human localization for room utilization monitoring system

    Page(s): 46 - 51

    Room utilization monitoring is an interesting application for optimizing facility usage, and a camera-based system is attractive for its cost-effectiveness and robustness. This paper presents a multi-camera process for a room utilization monitoring system composed of three parts: single-camera processing, multi-camera processing, and room event detection. In single-camera processing, objects are detected and tracked, and each detected image position is transformed onto the room map using a homography. The resulting map locations of detected objects are the input to multi-camera data fusion, for which we propose three methods: Uniform Bias Weighting, Best Camera Selection, and Error Bias Weighting. The best fused object location on the map is then used for room event detection. Experiments comparing the data fusion methods across multiple cameras show that Error Bias Weighting provides the best results.

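    A small sketch of the mapping-and-fusion step described above: each camera projects its detection onto the room map through a homography, and Error Bias Weighting is read here as weighting each camera inversely to its expected localization error. The homographies, detections, and error values are invented.

      import numpy as np

      def to_map(H, pt):
          # Project an image point onto the room map with homography H.
          v = H @ np.array([pt[0], pt[1], 1.0])
          return v[:2] / v[2]

      def fuse(points, errors):
          # Error-bias weighting: weight = 1 / expected error of the camera.
          w = 1.0 / np.asarray(errors, float)
          return (w[:, None] * np.asarray(points)).sum(axis=0) / w.sum()

      H1 = np.array([[0.020, 0, -1.0], [0, 0.030, -2.0], [0, 0, 1.0]])
      H2 = np.array([[0.025, 0, -4.0], [0, 0.028, -1.5], [0, 0, 1.0]])
      p1, p2 = to_map(H1, (320, 240)), to_map(H2, (300, 250))
      print(np.round(fuse([p1, p2], errors=[0.2, 0.5]), 2))  # fused map position
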
  • SLA guarantee real-time monitoring system with soft deadline constraint

    Page(s): 52 - 57

    Real-time monitoring of data streams can be used to track performance and availability or to detect system anomalies by analyzing and correlating events from log files. However, as modern distributed systems grow more complex, logs can become very large, and monitoring under timing constraints becomes more important for enforcing conditional guarantees. Real-time systems are usually classified as soft or hard and must comply with the terms of Service Level Agreements (SLAs). In this paper, we focus on probabilistic deadlines to guarantee that SLA deadlines are met when verifying soft deadlines in a real-time system: we adopt the Central Limit Theorem to approximate the relevant distributions and calculate the probability of an SLA violation, which can then be used to determine the deadline constraint for the system.

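    The Central Limit Theorem step admits a direct sketch: if a request passes through n steps with per-step mean mu and standard deviation sigma, the total latency is approximately Normal(n*mu, n*sigma^2), giving both the violation probability for a soft deadline and the deadline needed for a target guarantee. The numbers below are arbitrary.

      from math import sqrt
      from statistics import NormalDist

      def violation_probability(mu, sigma, n, deadline):
          # P(total latency of n steps exceeds the soft deadline).
          return 1.0 - NormalDist(n * mu, sqrt(n) * sigma).cdf(deadline)

      def deadline_for(mu, sigma, n, guarantee=0.99):
          # Smallest deadline meeting the SLA with the given probability.
          return NormalDist(n * mu, sqrt(n) * sigma).inv_cdf(guarantee)

      mu, sigma, n = 2.0, 0.5, 100     # per-step mean / std-dev (ms), step count
      print(round(violation_probability(mu, sigma, n, deadline=210.0), 4))  # ~0.0228
      print(round(deadline_for(mu, sigma, n, 0.99), 1))                     # ~211.6
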
  • Edge detection of medical image processing using vector field analysis

    Page(s): 58 - 63

    Ultrasound (US) breast cancer images are among the most complicated medical images from which to extract a desired region of interest: it is often difficult to separate the tumor region from the background tissue, making tumor segmentation a challenging problem in computer-aided diagnosis. Among the many image segmentation techniques, generalized gradient vector flow (GGVF), based on a vector transformation of the edge map of the grayscale image, is one of the most popular. GGVF introduces non-uniform diffusion to preserve the large gradients of the boundary area while smoothing the gradients caused by noise and speckle. However, improper numerical iteration of GGVF may produce false contours or retain noise, so that the snake cannot reach the true boundary. In this paper, a new vector field analysis for breast tumor US image segmentation is proposed. The GGVF vector field is derived from the edge map of the original image, and the algorithm analyzes the entropy of the vector angles within each corresponding window. Each window is then flipped vertically and horizontally and the entropy is evaluated again; the ratio of the entropies before and after flipping serves as the classifier of boundary versus non-boundary. The algorithm was tested on real US breast tumor images with a set of ground truth images hand-drawn by radiologists and compared with conventional edge detectors such as the Sobel and Canny operators. The numerical experiments show that the proposed technique achieves better segmentation accuracy than conventional edge detection.

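    The entropy-of-angles idea can be sketched independently of GGVF: a window whose vectors flow coherently toward a boundary has a concentrated angle distribution (low entropy), while a noise window does not. A plain gradient field stands in for the GGVF field, and the flip-ratio classifier below is only a literal reading of the abstract.

      import numpy as np

      def angle_entropy(vx, vy, bins=16):
          # Shannon entropy of the vector-angle distribution in a window.
          ang = np.arctan2(vy, vx)
          p, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
          p = p / max(p.sum(), 1)
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      def boundary_score(vx, vy):
          # Entropy ratio before vs. after flipping the window; the flip
          # mirrors vector components as well as positions.
          fx, fy = -np.fliplr(np.flipud(vx)), -np.fliplr(np.flipud(vy))
          return angle_entropy(vx, vy) / (angle_entropy(fx, fy) + 1e-9)

      vx = np.zeros((16, 16))                    # vectors converging on a
      vy = np.vstack([np.ones((8, 16)),          # horizontal boundary
                      -np.ones((8, 16))])
      nx, ny = np.random.default_rng(6).normal(size=(2, 16, 16))
      print(round(angle_entropy(vx, vy), 2),     # low entropy: boundary-like
            round(angle_entropy(nx, ny), 2))     # high entropy: noise-like
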
  • Mobile remote sensing platform: An uncertainty calibration analysis

    Page(s): 64 - 69

    This paper presents a method to estimate the uncertainty in the calibration of two sensors, a laser rangefinder and a multi-camera system, both used in urban environment reconstruction tasks. A new calibration pattern, visible to both sensors, is proposed. It provides a correspondence between each laser point and its position in the camera image, so the texture and color of each LIDAR point can be determined. This also allows systematic errors in the individual sensors to be minimized and, when multiple sensors are used, minimizes the systematic contradiction between them, enabling reliable multisensor data fusion. A practical methodology is presented for predicting the uncertainty of the calibration between the two sensors. Statistics of the behavior of the camera calibration parameters, the relationship between the camera and the LIDAR, and the noise of both sensors were calculated using simulations, analytical analysis, and experiments. Calibration and uncertainty analysis results are presented for data collected by a platform integrating a LIDAR and a panoramic camera.

  • Extraction and comparison of various prosodic feature sets on sentence segmentation task for Turkish Broadcast News data

    Page(s): 70 - 73

    In this work, prosodic features of Turkish Broadcast News (BN) data are extracted using an open-source prosodic feature extraction tool based on Praat. The profiles and effectiveness of these features are also investigated for the sentence segmentation task on the Turkish BN data. We not only used combinations of the feature sets but also collected some of them into a single prosodic feature model in order to achieve one of the best performances. The results of the experiments show that certain combinations of the prosodic feature sets are very useful for the automatic sentence segmentation task on the Turkish BN data.

  • Prediction of tone naturalness perception using geometric model

    Page(s): 74 - 79

    Naturalness is an important issue in Text-To-Speech (TTS) systems. To support arbitrarily defined pitch contours for synthesized syllables, a TTS system should be able to maintain the naturalness of the synthetic speech. This work proposes an automatic evaluation of pitch contours that determines how natural a synthesized syllable sounds to human listeners. By analyzing the results of tone perception experiments conducted with human listeners, we propose a syllable tone naturalness prediction model based on the midpoint and endpoint of the syllable's rhyme, and the model is then used to develop a tone naturalness prediction algorithm based on geometric models of pitch contours. In the evaluation, human listeners judged the naturalness of syllables with 45 pitch contour patterns, each repeated twice. The proposed algorithm achieved approximately 80% consistency with the human listeners' decisions on tone naturalness.

  • Text corpus for natural language story-telling sentence generation: A design and evaluation

    Page(s): 80 - 85

    Automatic generation of narrative sentences from unordered word sets is desirable in Augmentative and Alternative Communication (AAC) systems for children with certain learning disabilities (LD). Regardless of the complexity of the natural language processing deployed in the sentence generation procedure, the quality of the language models always affects the generation results. This work compares the sentence generation accuracy of a multi-tier N-gram-based procedure trained on BEST2010, a large publicly available text corpus, against a smaller but more specifically designed corpus for Thai simple sentence generation. The latter, a new corpus called TELL-S, was created from an analysis of the contents of grade 1 and grade 2 Thai language textbooks in the compulsory curriculum for Thai schools. The original procedure was also modified to incorporate additional constraints based on a story-telling guideline developed for LD children. Evaluated on test sets of 195 sentences, each composed of 3-6 words with a specific part-of-speech combination, TELL-S generalized better and yielded higher accuracy than BEST2010 in all cases with unbiased word sets: sentence generation accuracy was 100% for 3-word sentences and 70% for 4-word sentences, and the average accuracy was 58.8% when longer sentences were also included.

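    The core generation step can be illustrated with a toy bigram model: every ordering of the unordered word set is scored and the most probable one is kept, a minimal stand-in for the multi-tier N-gram procedure. The English words and probabilities below are invented, not TELL-S data.

      from itertools import permutations

      bigram = {("the", "dog"): 0.4, ("dog", "eats"): 0.3,
                ("eats", "rice"): 0.2, ("the", "rice"): 0.1}

      def score(seq, floor=1e-4):
          # Product of bigram probabilities, backing off to a small floor.
          p = 1.0
          for a, b in zip(seq, seq[1:]):
              p *= bigram.get((a, b), floor)
          return p

      words = {"rice", "eats", "dog", "the"}
      best = max(permutations(words), key=score)
      print(" ".join(best))                      # -> the dog eats rice
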
  • An improvement of flat approach on hierarchical text classification using top-level pruning classifiers

    Page(s): 86 - 90

    Hierarchical classification has become a popular research topic, particularly for text categorization on the web. A large web corpus can have a hierarchy with hundreds of thousands of topics, so it is common to handle this task with a flat classification approach, inducing a binary classifier only for the leaf-node classes. However, this approach suffers from low prediction accuracy due to class imbalance in the training data. In this paper, we propose two novel strategies: (i) “Top-Level Pruning” to narrow down the candidate classes, and (ii) an “Exclusive Top-Level Training Policy” to build more effective classifiers by utilizing the top-level data. Experiments on the Wikipedia dataset show that our system outperforms the traditional flat approach on all hierarchical classification metrics.

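    One way to read the Top-Level Pruning strategy is sketched below: a top-level classifier keeps only the most probable branches, and the flat leaf classifiers are evaluated only on leaves under the surviving branches. The random stand-in classifiers and the 5x10 hierarchy are assumptions.

      import numpy as np

      rng = np.random.default_rng(7)
      top_of_leaf = {leaf: leaf // 10 for leaf in range(50)}  # 5 branches x 10

      def top_level_probs(doc):      # stand-in for the top-level classifier
          p = rng.random(5)
          return p / p.sum()

      def leaf_score(doc, leaf):     # stand-in for a binary leaf classifier
          return rng.random()

      def classify(doc, keep=2):
          kept = set(np.argsort(top_level_probs(doc))[-keep:])  # pruning step
          candidates = [l for l, t in top_of_leaf.items() if t in kept]
          return max(candidates, key=lambda l: leaf_score(doc, l))

      print(classify("some web document"))
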
  • Emotion classification of Thai text based using term weighting and machine learning techniques

    Page(s): 91 - 96

    This research addresses emotion classification of Thai text using term weighting and machine learning techniques, focusing on a comparison of various common term weighting schemes. Boolean weighting with a Support Vector Machine proved most effective in our experiments, and Boolean weighting also combined well with the Information Gain feature selection method: the Support Vector Machine with Information Gain feature selection yielded the best performance of all algorithms, with an accuracy of 77.86%. The experimental results also reveal that feature weighting methods have a positive effect on the Thai emotion classification framework.

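    The best-performing combination reported above maps onto a short scikit-learn pipeline: Boolean term weighting, Information-Gain-style feature selection (mutual information is used as its stand-in here), and a linear SVM. The tiny English corpus is a placeholder for the Thai data.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      docs = ["i am so happy today", "this is wonderful news", "i feel great",
              "i am very sad", "this is terrible news", "i feel awful"]
      labels = ["joy", "joy", "joy", "sadness", "sadness", "sadness"]

      model = make_pipeline(
          CountVectorizer(binary=True),            # Boolean term weighting
          SelectKBest(mutual_info_classif, k=8),   # IG-style selection
          LinearSVC())
      model.fit(docs, labels)
      print(model.predict(["such wonderful happy news", "i feel very sad"]))
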
  • A study on reconfiguring on-chip cache with nonvolatile memory

    Page(s): 97 - 99

    Nonvolatile memory (NVM) has become a promising technology for partly replacing SRAM in on-chip caches and reducing the gap between the core and the cache. To exploit the advantages of both NVM and SRAM, we propose a Hybrid Cache that constructs the on-chip cache hierarchy from both technologies. As shown in the article, the performance and power consumption of the Hybrid Cache have a large advantage over caches based on a single technology. In addition, we present other methods that can further optimize the performance of the hybrid cache.

  • Modification of MELD score by including Serum Albumin to improve prediction of mortality outcome of cirrhotic patient based on Thai cirrhotic patients

    Page(s): 100 - 105

    The Model for End-stage Liver Disease (MELD) has become a popular model, replacing the Child-Pugh score for assessing the 3-month mortality risk of patients with cirrhosis. The model predicts disease severity from three biochemical parameters: serum creatinine, serum total bilirubin, and INR. Earlier models such as the Child-Pugh score, however, recognized the importance of serum albumin, a protein produced in the liver, so serum albumin is expected to affect mortality prediction. In this research, our main focus is to refine the MELD model and evaluate the effect of including serum albumin on the predicted mortality of Thai cirrhotic patients. We use data collected from 158 Thai cirrhotic patients with different degrees of severity, treated at the Liver Unit and Clinic, King Chulalongkorn Memorial Hospital, The Thai Red Cross Society. The collected data were divided into periods of 3 months, 6 months, 1 year, and 2 years[1]. Kaplan-Meier statistics were used to analyze survival in each period, and Cox regression was utilized to evaluate the relationship and statistical significance of serum albumin in each period. The results show that, across all 158 patients with serum albumin levels between 1.0 and 3.5 g/dL, Pearson's chi-squared[2], the log-rank test, and the Wilcoxon rank-sum (Mann-Whitney) test[3] were statistically significant at the 1% level (p < 0.001). Cox regression also demonstrated that serum albumin influenced the mortality risk with a hazard ratio of 5.14 (95% CI: 2.971-8.920, p < 0.0001). We therefore conclude that serum albumin affects the mortality prediction model, and we propose two refined MELD models[4]: ThaiMELD-Albumin and ThaiMELD-CTP[5]. Comparing the models with others using ROC analysis, ThaiMELD-Albumin achieved 0.85 (95% CI: 0.68-1.00), better than MELD, MELD-Albumin, and 5vMELD, while ThaiMELD-CTP, which only adds a scale value to MELD, was better than MELD alone. Consequently, ThaiMELD-Albumin predicts the mortality risk of Thai patients better than MELD, MELD-Albumin, or 5vMELD. Our model could therefore benefit Thai patients in assessing mortality risk as well as symptom severity, and could perhaps be further used when considering liver transplantation in Thailand.

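    For orientation, the standard MELD formula is easy to state in code; the albumin-augmented variant below is only a hypothetical illustration of the paper's direction, with a placeholder coefficient rather than the fitted ThaiMELD-Albumin parameters.

      from math import log

      def meld(creatinine, bilirubin, inr):
          # Standard MELD from serum creatinine, total bilirubin (mg/dL) and
          # INR; values below 1.0 are clamped to 1.0, as is conventional.
          return (9.57 * log(max(creatinine, 1.0))
                  + 3.78 * log(max(bilirubin, 1.0))
                  + 11.2 * log(max(inr, 1.0)) + 6.43)

      def meld_albumin(creatinine, bilirubin, inr, albumin, alb_coef=-1.5):
          # Hypothetical albumin term: low albumin raises the score.
          return meld(creatinine, bilirubin, inr) + alb_coef * (albumin - 3.5)

      print(round(meld(1.2, 2.0, 1.4), 1))                     # ~14.6
      print(round(meld_albumin(1.2, 2.0, 1.4, albumin=2.4), 1))
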
  • An evaluation of feature extraction in EEG-based emotion prediction with support vector machines

    Page(s): 106 - 110

    Electroencephalograph (EEG) data, a recording of the brain's electrical activity, is commonly used for emotion prediction. Suitable data preprocessing is important for obtaining promising accuracy, yet different works have employed different procedures and features. In this paper, we investigate various feature extraction techniques for EEG signals along four factors: (i) the number of channels, (ii) signal transformation methods, (iii) feature representations, and (iv) feature transformation techniques. A Support Vector Machine (SVM) is chosen as the baseline classifier due to its promising performance. Experiments on the DEAP benchmark dataset show that prediction on EEG signals from 10 channels, represented by one-minute band power features, gave the best accuracy and F1.

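    The winning configuration (band power features plus an SVM) can be sketched with synthetic signals: band power per channel is taken from Welch's PSD and fed to an SVM. The band edges, trial counts, and random signals are placeholders; DEAP's preprocessed EEG is sampled at 128 Hz.

      import numpy as np
      from scipy.signal import welch
      from sklearn.svm import SVC

      FS = 128
      BANDS = {"theta": (4, 8), "alpha": (8, 13),
               "beta": (13, 30), "gamma": (30, 45)}

      def band_powers(eeg):
          # eeg: (channels, samples) -> one power value per channel per band.
          freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
          feats = []
          for lo, hi in BANDS.values():
              idx = (freqs >= lo) & (freqs < hi)
              feats.append(psd[:, idx].mean(axis=1))
          return np.concatenate(feats)

      rng = np.random.default_rng(8)
      X = np.stack([band_powers(rng.normal(size=(10, FS * 60)))
                    for _ in range(30)])
      y = rng.integers(0, 2, 30)       # high/low valence placeholder labels
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.predict(X[:3]))
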
  • A simultaneous topology and sizing optimization for plane trusses

    Page(s): 111 - 116

    This paper presents two approaches to finding optimal plane trusses using particle swarm optimization: two-stage optimization and simultaneous topology-sizing optimization are investigated and compared. A matrix representation of both topology and element size is introduced and integrated into the standard particle swarm algorithm to enable greater flexibility and computational efficiency. The truss weight is minimized subject to stability, stress, and deformation constraints. The results show that simultaneous optimization provides much better solutions at a higher computational cost.

  • Parallelizing the cellular potts model on GPU and multi-core CPU: An OpenCL cross-platform study

    Page(s): 117 - 122

    In this paper, we present the analysis and development of a cross-platform OpenCL parallelization of the Cellular Potts Model (CPM). Evolving the CPM is generally time-consuming; a data-parallel programming model such as CUDA can accelerate the process, but it is highly dependent on the hardware type and manufacturer. OpenCL, which has recently attracted wide attention and use, provides a flexible alternative that allows a single implementation to execute on both GPUs and multi-core CPUs regardless of hardware type and manufacturer. We optimize both the GPU and multi-core CPU implementations of the CPM and also propose a resource management method, MLBBRM. Experimental results show that the optimized GPU and multi-core CPU algorithms achieve average speedups of about 30× and 8×, respectively, over the single-threaded CPU implementation.

  • Content updating method in FemtoCaching

    Page(s): 123 - 127

    The original FemtoCaching work proposes an algorithm that places contents in the caches of femtocells, but it does not consider changes in content ranking and content popularity. Since ranking and popularity change over time, the caches should be updated, and the number of contents updated at a time should be limited. In this research, we propose a method for updating the contents of femtocell caches in FemtoCaching. The proposed method has lower complexity than the FemtoCaching content placement algorithm. Its performance, evaluated by simulation, is shown to be acceptable.

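    The bounded-update idea lends itself to a small sketch: when the popularity ranking changes, at most max_updates cached items are swapped for the highest-ranked missing ones. This is an invented illustration of the constraint, not the paper's algorithm.

      def update_cache(cache, popularity, max_updates=2):
          # cache: set of content ids; popularity: ids sorted best-first.
          desired = popularity[:len(cache)]
          to_add = [c for c in desired if c not in cache][:max_updates]
          rank = {c: i for i, c in enumerate(popularity)}
          victims = sorted(cache - set(desired),
                           key=lambda c: rank.get(c, float("inf")),
                           reverse=True)[:len(to_add)]
          return (cache - set(victims)) | set(to_add)

      cache = {"v1", "v2", "v3", "v4"}
      new_ranking = ["v9", "v2", "v8", "v1", "v7", "v3", "v4"]
      print(sorted(update_cache(cache, new_ranking)))  # at most two swaps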