
Search Results

You searched for: big data
6,783 Results returned
  • Full text access may be available.

    Service-Generated Big Data and Big Data-as-a-Service: An Overview

    Zibin Zheng ; Jieming Zhu ; Lyu, M.R.
    Big Data (BigData Congress), 2013 IEEE International Congress on

    DOI: 10.1109/BigData.Congress.2013.60
    Publication Year: 2013 , Page(s): 403 - 410

    IEEE Conference Publications

    With the prevalence of service computing and cloud computing, more and more services are emerging on the Internet, generating huge volumes of data such as trace logs, QoS information, and service relationships. The overwhelming service-generated data becomes too large and complex to be effectively processed by traditional approaches. How to store, manage, and create value from service-oriented big data becomes an important research problem. On the other hand, with the increasingly large amount of data, a single infrastructure which provides common functionality for managing and analyzing different types of service-generated big data is urgently required. To address this challenge, this paper provides an overview of service-generated big data and Big Data-as-a-Service. First, three types of service-generated big data are exploited to enhance system performance. Then, Big Data-as-a-Service, including Big Data Infrastructure-as-a-Service, Big Data Platform-as-a-Service, and Big Data Analytics Software-as-a-Service, is employed to provide common big data related services (e.g., accessing service-generated big data and data analytics results) to users to enhance efficiency and reduce cost.

  • Full text access may be available.

    Prominence of MapReduce in Big Data Processing

    Pandey, Shweta ; Tokekar, Vrinda
    Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on

    DOI: 10.1109/CSNT.2014.117
    Publication Year: 2014 , Page(s): 555 - 560

    IEEE Conference Publications

    Big Data has arrived with great fanfare and is a key enabler for social business; it presents an opportunity to create extraordinary business advantage and better service delivery, and it is bringing a positive change to the decision-making processes of many organizations. Along with these offerings, Big Data raises several issues and challenges related to its management, processing, and analysis. Big Data is characterized by the 3Vs: Volume (large amounts of data), Velocity (data arriving at high speed), and Variety (data coming from heterogeneous sources). In the Big Data definition, "Big" means a dataset that grows so much that it becomes difficult to manage with existing data management concepts and tools. MapReduce plays a very significant role in the processing of Big Data. This paper gives a brief overview of Big Data and its related issues and emphasizes the role of MapReduce in Big Data processing: MapReduce is elastically scalable, efficient, and fault tolerant for analysing large data sets, and the paper highlights the features of MapReduce, in comparison with other design models, that make it a popular tool for processing large-scale data. Analysis of the performance factors of MapReduce shows that eliminating their inverse effects through optimization improves the performance of MapReduce.
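
    As a plain illustration of the MapReduce model this abstract refers to (not code from the paper), the following minimal Python sketch shows the map, shuffle, and reduce phases on a word-count example; the function names are illustrative assumptions.

        from collections import defaultdict

        def map_phase(documents):
            # Emit (word, 1) pairs for every word in every document.
            for doc in documents:
                for word in doc.split():
                    yield word.lower(), 1

        def shuffle(pairs):
            # Group intermediate pairs by key, as the framework would do between map and reduce.
            groups = defaultdict(list)
            for key, value in pairs:
                groups[key].append(value)
            return groups

        def reduce_phase(groups):
            # Sum the counts for each word.
            return {word: sum(counts) for word, counts in groups.items()}

        docs = ["big data needs MapReduce", "MapReduce processes big data"]
        print(reduce_phase(shuffle(map_phase(docs))))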

  • Open Access

    Toward Scalable Systems for Big Data Analytics: A Technology Tutorial

    Hu, H. ; Wen, Y. ; Chua, T. ; Li, X.
    Access, IEEE

    Volume: 2
    DOI: 10.1109/ACCESS.2014.2332453
    Publication Year: 2014 , Page(s): 652 - 687

    IEEE Journals & Magazines

    Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and requires more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.

  • Full text access may be available.

    Inconsistencies in big data

    Du Zhang
    Cognitive Informatics & Cognitive Computing (ICCI*CC), 2013 12th IEEE International Conference on

    DOI: 10.1109/ICCI-CC.2013.6622226
    Publication Year: 2013 , Page(s): 61 - 67
    Cited by:  Papers (1)

    IEEE Conference Publications

    We are faced with a torrent of data generated and captured in digital form as a result of the advancement of sciences, engineering and technologies, and various social, economic and human activities. This big data phenomenon ushers in a new era in which human endeavors and scientific pursuits will be aided not only by human capital and physical and financial assets, but also by data assets. Research issues in big data and big data analysis are embedded in multi-dimensional scientific and technological spaces. In this paper, we first take a close look at the dimensions of big data and big data analysis, and then focus our attention on the issue of inconsistencies in big data and the impact of inconsistencies on big data analysis. We offer a classification of four types of inconsistencies in big data and point out the utility of inconsistency-induced learning as a tool for big data analysis.

  • Full text access may be available.

    Next Big Thing in Big Data: The Security of the ICT Supply Chain

    Tianbo Lu ; Xiaobo Guo ; Bing Xu ; Lingling Zhao ; Yong Peng ; Hongyu Yang
    Social Computing (SocialCom), 2013 International Conference on

    DOI: 10.1109/SocialCom.2013.172
    Publication Year: 2013 , Page(s): 1066 - 1073

    IEEE Conference Publications

    In contemporary society, with supply chains becoming more and more complex, the data in supply chains increases in volume, variety and velocity. Big data has arisen at the proper time to offer advantages to the nodes in supply chains for solving previously difficult problems. For any big data project to succeed, it must first depend on high-quality data, not merely on quantity. Further, it will become increasingly important in many big data projects to add external data to the mix, and companies will eventually turn from only looking inward to also looking outward into the market, which means the use of big data must be broadened considerably. Hence the data supply chains, both internal and external, become of prime importance. ICT (Information and Communication Technology) supply chain management is especially important, as supply chains link the world closely and the ICT supply chain is the base of all supply chains in today's world. Though many supply chain security initiatives have been developed and put into practice, most of them emphasize the physical supply chain, which deals with transporting cargo; research on ICT supply chain security is still at a preliminary stage. The use of big data can promote the normal operation of the ICT supply chain, as it greatly improves data collecting and processing capacity; in turn, the ICT supply chain is a necessary carrier of big data, as it produces all the software, hardware and infrastructure for big data's collection, storage and application. The close relationship between big data and the ICT supply chain makes analyzing ICT supply chain security an effective way to do research on big data security. This paper first analyzes the security problems that the ICT supply chain is facing in information management, system integrity and cyberspace, and then introduces several well-known international models for both the physical supply chain and the ICT supply chain. After that, the authors describe a case of communication equipment with big data in the ICT supply chain and propose a series of recommendations, across five dimensions, conducive to developing a secure big data supply chain.

  • Freely Available from IEEE

    Big data analytics for drug discovery

    Chan, K.C.C.
    Bioinformatics and Biomedicine (BIBM), 2013 IEEE International Conference on

    DOI: 10.1109/BIBM.2013.6732448
    Publication Year: 2013 , Page(s): 1

    IEEE Conference Publications

  • Full text access may be available.

    Attribute Relationship Evaluation Methodology for Big Data Security

    Sung-Hwan Kim ; Nam-Uk Kim ; Tai-Myoung Chung
    IT Convergence and Security (ICITCS), 2013 International Conference on

    DOI: 10.1109/ICITCS.2013.6717808
    Publication Year: 2013 , Page(s): 1 - 4

    IEEE Conference Publications

    There has been increasing interest in big data and big data security with the development of network technology and cloud computing. However, big data is not an entirely new technology but an extension of data mining. In this paper, we describe the background of big data, data mining and big data features, and propose an attribute selection methodology for protecting the value of big data. Extracting valuable information is the main goal of analyzing big data, and it is this value that needs to be protected. Therefore, the relevance between attributes of a dataset is a very important element for big data analysis. We focus on two things. First, attribute relevance in big data is a key element for extracting information; from this perspective, we study how to secure big data by protecting the valuable information inside it. Second, it is impossible to protect all big data and all of its attributes. We consider big data as a single object with its own attributes, and we assume that an attribute with higher relevance is more important than the others.

  • Full text access may be available.

    Big data: Issues, challenges, tools and Good practices

    Katal, A. ; Wazid, M. ; Goudar, R.H.
    Contemporary Computing (IC3), 2013 Sixth International Conference on

    DOI: 10.1109/IC3.2013.6612229
    Publication Year: 2013 , Page(s): 404 - 409

    IEEE Conference Publications

    Big data is defined as large amounts of data which require new technologies and architectures to make it possible to extract value from them through capture and analysis. Due to its sheer size, it becomes very difficult to perform effective analysis using existing traditional techniques. Because of its various properties such as volume, velocity, variety, variability, value and complexity, big data puts forward many challenges. Since Big data is a recent, upcoming technology in the market which can bring huge benefits to business organizations, it becomes necessary that the various challenges and issues associated with bringing in and adapting to this technology are brought to light. This paper introduces Big data technology along with its importance in the modern world, and existing projects which are effective and important in changing the concept of science into big science and society too. The various challenges and issues in adapting and accepting Big data technology and its tools (Hadoop) are also discussed in detail, along with the problems Hadoop is facing. The paper concludes with good Big data practices to be followed.

  • Full text access may be available.

    Big Data and Policy Design for Data Sovereignty: A Case Study on Copyright and CCL in South Korea

    Hyejung Moon ; Hyun Suk Cho
    Social Computing (SocialCom), 2013 International Conference on

    DOI: 10.1109/SocialCom.2013.165
    Publication Year: 2013 , Page(s): 1026 - 1029

    IEEE Conference Publications

    The purpose of this paper is as follows. First, we conceptualize big data as a social problem. Second, we explain the difference between big data and conventional mega information. Third, we recommend a role for government in the utilization of big data as a policy tool. Fourth, referring to copyright and CCL (Creative Commons License) cases, we explain the regulation of big data with respect to data sovereignty. Finally, we suggest a direction for policy design for big data. As a result of this study, policy design for big data should be distinguished from policy design for mega information in order to solve data sovereignty issues. From a legal-system perspective, big data is generated autonomously; it is accessed openly and shared without any intention. From a market perspective, big data is created without any intention. Big data can change automatically when opened with reference features such as Linked Data, so policy issues such as responsibility and authenticity are raised. From a technology perspective, big data is generated in a distributed and diverse way without any concrete form, so a different approach is needed.

  • Full text access may be available.

    Efficient and Customizable Data Partitioning Framework for Distributed Big RDF Data Processing in the Cloud

    Kisung Lee ; Ling Liu ; Yuzhe Tang ; Qi Zhang ; Yang Zhou
    Cloud Computing (CLOUD), 2013 IEEE Sixth International Conference on

    DOI: 10.1109/CLOUD.2013.63
    Publication Year: 2013 , Page(s): 327 - 334

    IEEE Conference Publications

    Big data business can leverage and benefit from the Clouds, the most optimized, shared, automated, and virtualized computing infrastructures. One of the important challenges in processing big data in the Clouds is how to effectively partition the big data to ensure efficient distributed processing of the data. In this paper we present a Scalable and yet customizable data PArtitioning framework, called SPA, for distributed processing of big RDF graph data. We choose big RDF datasets as the focus of our investigation for two reasons. First, the Linking Open Data cloud has put forward a good number of big RDF datasets with tens of billions of triples and hundreds of millions of links. Second, such huge RDF graphs can easily overwhelm any single server due to limited memory and CPU capacity and exceed the processing capacity of many conventional data processing software systems. Our data partitioning framework has two unique features. First, we introduce a suite of vertex-centric data partitioning building blocks to allow efficient and yet customizable partitioning of large heterogeneous RDF graph data. By efficient, we mean that the SPA data partitions can support fast processing of big data of different sizes and complexity. By customizable, we mean that the SPA partitions are adaptive to different query types. Second, we propose a selection of scalable techniques to distribute the building block partitions across a cluster of compute nodes in a manner that minimizes inter-node communication cost by localizing most of the queries on distributed partitions. We evaluate our data partitioning framework and algorithms through extensive experiments using both benchmark and real datasets. Our experimental results show that the SPA data partitioning framework is not only efficient for partitioning and distributing big RDF datasets of diverse sizes and structures but also effective for processing big data queries of different types and complexity.
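
    The following Python sketch (an assumption-laden illustration, not the SPA framework itself) shows the simplest form of the vertex-centric idea the abstract mentions: hashing each triple's subject vertex to a partition so that subject-anchored queries stay local. The partition count and triple format are invented for the example.

        import hashlib

        def partition_of(vertex, num_partitions):
            # Stable hash of the vertex IRI, mapped to a partition id.
            digest = hashlib.md5(vertex.encode("utf-8")).hexdigest()
            return int(digest, 16) % num_partitions

        def partition_triples(triples, num_partitions=4):
            # Place every (subject, predicate, object) triple in the partition of its subject.
            partitions = {i: [] for i in range(num_partitions)}
            for s, p, o in triples:
                partitions[partition_of(s, num_partitions)].append((s, p, o))
            return partitions

        triples = [("ex:alice", "ex:knows", "ex:bob"),
                   ("ex:alice", "ex:worksAt", "ex:acme"),
                   ("ex:bob", "ex:knows", "ex:carol")]
        for pid, part in partition_triples(triples).items():
            print(pid, part)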

  • Full text access may be available.

    Data Evolution Analysis of Virtual DataSpace for Managing the Big Data Lifecycle

    Xin Cheng ; Chungjin Hu ; Yang Li ; Wei Lin ; Haolei Zuo
    Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013 IEEE 27th International

    DOI: 10.1109/IPDPSW.2013.57
    Publication Year: 2013 , Page(s): 2054 - 2063

    IEEE Conference Publications

    A new challenge has arisen concerning the constant change of associated data in big data management, which leads to the issue of data evolution. In this paper, a data evolution model of Virtual Data Space (VDS) is proposed for managing the big data lifecycle. First, the concept of the data evolution cycle is defined and the lifecycle process of big data management is described. Based on these, the data evolution lifecycle is analyzed from the perspectives of data relationships, user requirements, and operation behavior. Second, the classification and key concepts of the data evolution process are described in detail. According to this, the data evolution model is constructed by defining the related concepts and analyzing the data associations in VDS, for the capture and tracking of dynamic data in the data evolution cycle. Then we discuss the cost problem of data dissemination and change. Finally, as an application case, the service process for dynamic data in the field of materials science is described and analyzed. We verify the validity of data evolution modeling in VDS by comparing a traditional database, a data space, and VDS. The results show that this analysis method is efficient for data evolution processing and very suitable for data-intensive applications and real-time dynamic services.

  • Full text access may be available.

    An ensemble MIC-based approach for performance diagnosis in big data platform

    Pengfei Chen ; Yong Qi ; Xinyi Li ; Li Su
    Big Data, 2013 IEEE International Conference on

    DOI: 10.1109/BigData.2013.6691701
    Publication Year: 2013 , Page(s): 78 - 85

    IEEE Conference Publications

    The era of big data has begun. Although applications based on big data bring considerable benefit to IT industries, governments and social organizations, they bring even more challenges to the management of big data platforms, which are the fundamental infrastructure, due to the complexity, variety, velocity and volume of big data. To offer a healthy platform for big data applications, we propose a novel signature-based performance diagnosis approach employing MIC invariants between performance metrics. We formalize performance diagnosis as a pattern recognition problem. The normal state of a big data application is used to train a set of MIC (Maximum Information Criterion) invariants. A performance problem occurring in the big data application is identified by a unique binary tuple consisting of a set of violations of MIC invariants. All the signatures of performance problems form a diagnosis knowledge database. If the KPI (Key Performance Indicator) of the big data application deviates from its normal region, our approach can identify the real culprits by looking for similar signatures in the signature database. To detect the deviation of the KPI, we propose a new metric named unpredictability, based on an ARIMA model. Considering the variety of big data applications, we build an ensemble performance diagnosis approach, meaning that a unique ARIMA model and a unique set of MIC invariants are built for each specific kind of application. Through experimental evaluation in a controlled environment running a state-of-the-art big data benchmark, we find that our approach can pinpoint the real culprits of performance problems with an average 83% precision and 87% recall, which is better than correlation-based and single-model-based performance diagnosis.
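
    A highly simplified Python sketch of the workflow outlined above, using a moving-average predictor as a stand-in for the paper's ARIMA model and a plain set of violated-invariant ids as the problem signature; every name, threshold, and value below is an illustrative assumption, not the authors' implementation.

        def unpredictability(kpi, window=5):
            # Deviation of the latest KPI value from a moving-average prediction.
            history, latest = kpi[-window - 1:-1], kpi[-1]
            prediction = sum(history) / len(history)
            return abs(latest - prediction) / (abs(prediction) or 1.0)

        def diagnose(violated_invariants, signature_db):
            # Return the known problem whose signature overlaps most with the observed violations.
            return max(signature_db,
                       key=lambda problem: len(violated_invariants & signature_db[problem]))

        kpi = [100, 102, 99, 101, 100, 180]          # latest value deviates sharply
        signature_db = {"disk_hog": {1, 4}, "net_hog": {2, 3}}
        if unpredictability(kpi) > 0.2:
            print(diagnose({1, 4, 7}, signature_db))  # -> disk_hog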

  • Full text access may be available.

    A characterization of big data benchmarks

    Wen Xiong ; Zhibin Yu ; Zhendong Bei ; Juanjuan Zhao ; Fan Zhang ; Yubin Zou ; Xue Bai ; Ye Li ; Chengzhong Xu
    Big Data, 2013 IEEE International Conference on

    DOI: 10.1109/BigData.2013.6691707
    Publication Year: 2013 , Page(s): 118 - 125

    IEEE Conference Publications

    Recently, big data has evolved into a buzzword from academia to industry all over the world. Benchmarks are important tools for evaluating an IT system. However, benchmarking big data systems is much more challenging than ever before. First, big data systems are still in their infancy and consequently are not well understood. Second, big data systems are more complicated than previous systems such as single-node computing platforms. While some researchers have started to design benchmarks for big data systems, they do not consider the redundancy between their benchmarks. Moreover, they use artificial input data sets rather than real-world data for their benchmarks. It is therefore unclear whether these benchmarks can be used to precisely evaluate the performance of big data systems. In this paper, we first analyze the redundancy among benchmarks from ICTBench, HiBench and typical workloads from real-world applications: spatio-temporal data analysis for the Shenzhen transportation system. Subsequently, we present an initial idea for a big data benchmark suite for spatio-temporal data. There are three findings in this work: (1) redundancy exists in these pioneering benchmark suites and some of it can be removed safely; (2) the workload behavior of trajectory data analysis applications is dramatically affected by their input data sets; (3) benchmarks created for academic research cannot represent the cases of real-world applications.

  • Full text access may be available.

    A big data implementation based on Grid computing

    Garlasu, D. ; Sandulescu, V. ; Halcu, I. ; Neculoiu, G. ; Grigoriu, O. ; Marinescu, M. ; Marinescu, V.
    Roedunet International Conference (RoEduNet), 2013 11th

    DOI: 10.1109/RoEduNet.2013.6511732
    Publication Year: 2013 , Page(s): 1 - 4
    Cited by:  Papers (2)

    IEEE Conference Publications

    Big Data is a term defining data that has three main characteristics. First, it involves a great volume of data. Second, the data cannot be structured into regular database tables, and third, the data is produced with great velocity and must be captured and processed rapidly. Oracle adds a fourth characteristic for this kind of data, low value density, meaning that sometimes there is a very big volume of data to process before finding the valuable information needed. Big Data is a relatively new term that came from the need of big companies like Yahoo, Google and Facebook to analyze big amounts of unstructured data, but this need can also be identified in a number of other big enterprises, as well as in the research and development field. The framework for processing Big Data consists of a number of software tools that will be presented in the paper and are briefly listed here. There is Hadoop, an open source platform that consists of the Hadoop kernel, the Hadoop Distributed File System (HDFS), MapReduce and several related instruments. Two of the main problems that occur when studying Big Data are storage capacity and processing power. That is the area where Grid Technologies can provide help. Grid Computing refers to a special kind of distributed computing. A Grid computing system must contain a Computing Element (CE), and a number of Storage Elements (SE) and Worker Nodes (WN). The CE provides the connection with other GRID networks and uses a Workload Management System to dispatch jobs on the Worker Nodes. The Storage Element is in charge of storing the input and output data needed for job execution. The main purpose of this article is to present a way of processing Big Data using Grid Technologies. To that end, the framework for managing Big Data is presented along with a way to implement it around a grid architecture.

  • Full text access may be available.

    Characterizing the efficiency of data deduplication for big data storage management

    Ruijin Zhou ; Ming Liu ; Tao Li
    Workload Characterization (IISWC), 2013 IEEE International Symposium on

    DOI: 10.1109/IISWC.2013.6704674
    Publication Year: 2013 , Page(s): 98 - 108

    IEEE Conference Publications

    The demand for data storage and processing is increasing at a rapid speed in the big data era. Such a tremendous amount of data pushes the limits of storage capacity and of the storage network. A significant portion of the dataset in big data workloads is redundant. As a result, deduplication technology, which removes replicas, becomes an attractive solution for saving disk space and traffic in a big data environment. However, the overhead of extra CPU computation (hash indexing) and I/O latency introduced by deduplication must be considered, so the net effect of using deduplication for big data workloads needs to be examined. To this end, we characterize the redundancy of typical big data workloads to justify the need for deduplication, and we analyze and characterize the performance and energy impact brought by deduplication under various big data environments. In our experiments, we identify three sources of redundancy in big data workloads: 1) deploying more nodes, 2) expanding the dataset, and 3) using replication mechanisms. We elaborate on the advantages and disadvantages of different deduplication layers, locations, and granularities. In addition, we uncover the relation between energy overhead and the degree of redundancy. Furthermore, we investigate deduplication efficiency in an SSD environment for big data workloads.
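
    A minimal Python sketch of the hash-indexing idea behind the deduplication discussed above; fixed-size chunking and a SHA-1 index are assumptions chosen for illustration, whereas the paper itself compares several layers, locations, and granularities.

        import hashlib

        def deduplicate(data, chunk_size=4096):
            # Store each unique chunk once; record duplicates as references to the stored copy.
            index, unique_chunks, references = {}, [], []
            for start in range(0, len(data), chunk_size):
                chunk = data[start:start + chunk_size]
                fingerprint = hashlib.sha1(chunk).hexdigest()
                if fingerprint not in index:
                    index[fingerprint] = len(unique_chunks)
                    unique_chunks.append(chunk)
                references.append(index[fingerprint])
            return unique_chunks, references

        data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # redundant chunks
        chunks, refs = deduplicate(data)
        print(len(chunks), refs)   # 2 unique chunks, references [0, 0, 1, 0]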

  • Full text access may be available.

    Big Data Framework

    Tekiner, F. ; Keane, J.A.
    Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on

    DOI: 10.1109/SMC.2013.258
    Publication Year: 2013 , Page(s): 1494 - 1499

    IEEE Conference Publications

    We are constantly being told that we live in the Information Era, the Age of Big Data. It is clearly apparent that organizations need to employ data-driven decision making to gain competitive advantage. Processing, integrating and interacting with more data should make it better data, providing both more panoramic and more granular views to aid strategic decision making. This is made possible via Big Data exploiting affordable and usable computational and storage resources. Many offerings are based on the MapReduce and Hadoop paradigms, and most focus solely on the analytical side. Nonetheless, in many respects it remains unclear what Big Data actually is; current offerings appear as isolated silos that are difficult to integrate and/or make it difficult to better utilize existing data and systems. This paper addresses this lacuna by characterising the facets of Big Data and proposing a framework in which Big Data applications can be developed. The framework consists of three stages and seven layers that divide a Big Data application into modular blocks. The aim is to enable organizations to better manage and architect very large Big Data applications to gain competitive advantage by allowing management to have a better handle on data processing.

  • Full text access may be available.

    5Ws Model for Big Data Analysis and Visualization

    Jinson Zhang ; Mao Lin Huang
    Computational Science and Engineering (CSE), 2013 IEEE 16th International Conference on

    DOI: 10.1109/CSE.2013.149
    Publication Year: 2013 , Page(s): 1021 - 1028

    IEEE Conference Publications

    Big Data, which contains image, video, text, audio and other forms of data collected from multiple datasets, is difficult to process using traditional database management tools or applications. In this paper, we establish the 5Ws model, which uses the 5Ws data dimensions for Big Data analysis and visualization. The 5Ws data dimensions stand for: What the data content is, Why the data occurred, Where the data came from, When the data occurred, Who received the data, and How the data was transferred. This framework not only classifies Big Data attributes and patterns, but also establishes density patterns that provide more analytical features. We use visual clustering to display data sending and receiving densities, which demonstrate Big Data patterns. The model is tested using the network security ISCX2012 dataset. The experiment shows that this new model with clustered visualization can be used efficiently for Big Data analysis and visualization.
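
    A small Python illustration (not the authors' implementation) of tagging records along the dimensions listed above and counting sending/receiving densities; the field names and sample values are assumptions.

        from collections import Counter

        # Each record is tagged along the What/Why/Where/When/Who/How dimensions.
        records = [
            {"what": "login", "why": "auth", "where": "203.0.113.7", "when": "09:30",
             "who": "auth-01", "how": "HTTPS"},
            {"what": "login", "why": "auth", "where": "203.0.113.7", "when": "09:31",
             "who": "auth-01", "how": "HTTPS"},
            {"what": "scan", "why": "probe", "where": "198.51.100.2", "when": "09:32",
             "who": "web-03", "how": "TCP"},
        ]

        # Density of (where, who) pairs: how many records each sender/receiver pair accounts for.
        density = Counter((r["where"], r["who"]) for r in records)
        print(density.most_common())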

  • Full text access may be available.

    Big data: A review

    Sagiroglu, S. ; Sinanc, D.
    Collaboration Technologies and Systems (CTS), 2013 International Conference on

    DOI: 10.1109/CTS.2013.6567202
    Publication Year: 2013 , Page(s): 42 - 47

    IEEE Conference Publications

    Big data is a term for massive data sets with large, varied and complex structure and with difficulties in storing, analyzing and visualizing them for further processes or results. The process of researching massive amounts of data to reveal hidden patterns and secret correlations is named big data analytics. This information is useful for companies and organizations, helping them gain richer and deeper insights and an advantage over the competition. For this reason, big data implementations need to be analyzed and executed as accurately as possible. This paper presents an overview of big data's content, scope, samples, methods, advantages and challenges, and discusses privacy concerns around it.

  • Full text access may be available.

    Business model canvas perspective on big data applications

    Muhtaroglu, F.C.P. ; Demir, S. ; Obali, M. ; Girgin, C.
    Big Data, 2013 IEEE International Conference on

    DOI: 10.1109/BigData.2013.6691684
    Publication Year: 2013 , Page(s): 32 - 37

    IEEE Conference Publications

    Large and complex data that becomes difficult to handle with traditional data processing applications has triggered the development of big data applications, which have become more pervasive than ever before. In the era of big data, data exploration and analysis have turned into difficult problems in many sectors, such as smart routing and health care. Companies that can adapt their businesses well to leverage big data have significant advantages over those that lag in this capability. The need to explore new approaches to address the challenges of big data forces companies to shape their business models accordingly. In this paper, we summarize and share our findings regarding the business models deployed in big data applications in different sectors. We analyze existing big data applications by taking into consideration the core elements of a business (via the business model canvas) and present how these applications provide value to their customers by profiting from the use of big data.

  • Freely Available from IEEE

    Potentials of big data for governmental services

    Fasel, Daniel
    eDemocracy & eGovernment (ICEDEG), 2014 First International Conference on

    DOI: 10.1109/ICEDEG.2014.6819936
    Publication Year: 2014 , Page(s): 17 - 18

    IEEE Conference Publications

  • Freely Available from IEEE

    Security — A big question for big data

    Schell, R.
    Big Data, 2013 IEEE International Conference on

    DOI: 10.1109/BigData.2013.6691547
    Publication Year: 2013 , Page(s): 5

    IEEE Conference Publications

  • Full text access may be available.

    Big data for business managers — Bridging the gap between potential and value

    Rajpurohit, A.
    Big Data, 2013 IEEE International Conference on

    DOI: 10.1109/BigData.2013.6691794
    Publication Year: 2013 , Page(s): 29 - 31

    IEEE Conference Publications

    Given the surge of interest in research, publication and application of Big Data over the last few years, the potential of Big Data now seems to be well established across businesses. However, in most business implementations Big Data still seems to struggle to deliver the promised value (ROI). Such results, despite the use of market-leading Big Data solutions and talented deployment teams, are forcing business managers to consider what needs to be done differently. This paper lays down a framework for business managers to understand Big Data processes. Besides providing a business overview of the core components of Big Data, the paper presents several questions that managers must ask to assess the effectiveness of their Big Data processes. This paper is based on the analysis of several Big Data projects that never delivered, compared against successful ones. The hypothesis is developed from public information and is proposed as a first step for business managers keen on effectively leveraging Big Data.

  • Full text access may be available.

    Study on Big Data Center Traffic Management Based on the Separation of Large-Scale Data Stream

    Hyoung Woo Park ; Il Yeon Yeo ; Lee, J.R. ; Haengjin Jang
    Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2013 Seventh International Conference on

    DOI: 10.1109/IMIS.2013.104
    Publication Year: 2013 , Page(s): 591 - 594

    IEEE Conference Publications

    The network of a traditional data center has usually been designed and constructed to provide users with equal access to the data center's resources and data. Therefore, network administrators have a strong tendency to manage user traffic from the viewpoint that all traffic has a similar size and similar characteristics. However, the emergence of big data means that data centers now have to deal with transfers of 10^15 bytes of data at once. Such big data transfers can cause problems for network traffic management in existing data centers, and the tiered network architecture of legacy data centers magnifies these problems. One of the well-known sources of big data in science is the Large Hadron Collider (LHC) at CERN in Switzerland, which generates multiple petabytes of data per year. Drawing on our experience with the CERN data service, this paper shows the impact on network traffic of large-scale data streams using NS2 simulation, and then suggests an evolution direction for the big data center's network architecture based on separating out large-scale data streams.

  • Full text access may be available.

    Cross-platform aviation analytics using big-data methods

    Larsen, T.
    Integrated Communications, Navigation and Surveillance Conference (ICNS), 2013

    DOI: 10.1109/ICNSurv.2013.6548579
    Publication Year: 2013 , Page(s): 1 - 9

    IEEE Conference Publications

    This paper identifies key aviation data sets for operational analytics, presents a methodology for applying big-data analysis methods to operational problems, and offers examples of analytical solutions using an integrated aviation data warehouse. Big-data analysis methods have revolutionized how both government and commercial researchers can analyze massive aviation databases that were previously too cumbersome, inconsistent or irregular to drive high-quality output. Traditional data-mining methods are effective on uniform data sets such as flight tracking data or weather. Integrating heterogeneous data sets introduces complexity in data standardization, normalization, and scalability. The variability of the underlying data warehouse can be leveraged using virtualized cloud infrastructure for scalability to identify trends and create actionable information. The applications for big-data analysis in airspace system performance and safety optimization have high potential because of the availability and diversity of airspace-related data. Analytical applications to quantitatively review airspace performance, operational efficiency and aviation safety require a broad data set. Individual information sets such as radar tracking data or weather reports provide slices of relevant data, but do not provide the required context, perspective and detail on their own to create actionable knowledge. These data sets are published by diverse sources and do not have the standardization, uniformity or defect controls required for simple integration and analysis. At a minimum, aviation big-data research requires the fusion of airline, aircraft, flight, radar, crew, and weather data in a uniform taxonomy, organized so that queries can be automated by flight, by fleet, or across the airspace system.

  • Full text access may be available.

    Big Data and Transformational Government

    Joseph, R.C. ; Johnson, N.A.
    IT Professional

    Volume: 15 , Issue: 6
    DOI: 10.1109/MITP.2013.61
    Publication Year: 2013 , Page(s): 43 - 48

    IEEE Journals & Magazines

    The big data phenomenon is growing throughout private and public sector domains. Profit motives make it urgent for companies in the private sector to learn how to leverage big data. However, in the public sector, government services could also be greatly improved through the use of big data. Here, the authors describe some drivers, barriers, and best practices affecting the use of big data and associated analytics in the government domain. They present a model that illustrates how big data can result in transformational government through increased efficiency and effectiveness in the delivery of services. Their empirical basis for this model uses a case vignette from the US Department of Veterans Affairs, while the theoretical basis is a balanced view of big data that takes into account the continuous growth and use of such data. This article is part of a special issue on big data and business analytics.
