
2011 IEEE International Workshop on Information Forensics and Security (WIFS)

Date: Nov. 29 - Dec. 2, 2011


Displaying Results 1 - 25 of 40
  • Use of turbo codes with low-rate convolutional constituent codes in fingerprinting scenarios

    Page(s): 1 - 6

    We discuss the use of turbo codes in fingerprinting schemes. More precisely, we present a family of turbo codes that are secure against attacking coalitions of size 2. This family is built upon a class of low-rate convolutional codes with maximum free distance. Low-rate convolutional codes are commonly used in code-spread CDMA applications. Moreover, we show how efficient traitor tracing can be performed by means of the turbo decoding algorithm.

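    As background for the abstract above: the constituent encoders of a turbo code are plain convolutional encoders. A minimal sketch of a feedforward rate-1/2 convolutional encoder in Python, with generic textbook generator polynomials rather than the paper's low-rate maximum-free-distance codes:

      # Rate-1/2 feedforward convolutional encoder (illustrative parameters).
      G = (0o7, 0o5)        # generator polynomials in octal, constraint length 3

      def conv_encode(bits, gens=G, K=3):
          state = 0
          out = []
          for b in bits:
              state = ((state << 1) | b) & ((1 << K) - 1)    # shift register
              for g in gens:
                  out.append(bin(state & g).count("1") & 1)  # tap parity
          return out

      print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
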
  • “Re-synchronization by moments”: An efficient solution to align Side-Channel traces

    Page(s): 1 - 6

    Modern embedded systems rely on cryptographic co-processors to ensure security. These co-processors are theoretically secure, but their physical implementations are vulnerable to Side-Channel Analysis (SCA). Therefore, embedded systems should be evaluated for their robustness against these attacks. In SCA, the preprocessing of acquired traces is crucial to mount an efficient analysis and hence make a reliable evaluation. This paper deals with the common problem of aligning SCA traces. For this purpose, we put forward an innovative re-synchronization algorithm and show its efficiency compared to existing techniques. Our results are based on real measurements acquired from several cryptographic implementations.

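    The paper's exact statistic is in the full text; as a hedged illustration of the general idea of moment-based alignment, one can shift each trace so that the first moment (centroid) of its instantaneous power coincides with that of a reference trace:

      import numpy as np

      def power_centroid(trace):
          # First moment (centroid) of the trace's instantaneous power.
          p = trace ** 2
          t = np.arange(len(trace))
          return (t * p).sum() / p.sum()

      def align_to_reference(traces, ref):
          c_ref = power_centroid(ref)
          # np.roll gives a circular shift, which is fine for a sketch.
          return np.array([np.roll(tr, int(round(c_ref - power_centroid(tr))))
                           for tr in traces])
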
  • Holmes: A data theft forensic framework

    Page(s): 1 - 6

    This paper presents Holmes, a forensic framework for postmortem investigation of data theft incidents in enterprise networks. Holmes proactively collects potential evidence from hosts and the network for correlation analysis at a central location. To optimize the storage requirements for the collected data, Holmes relies on compact network and host data structures. We evaluate the theoretical storage requirements of Holmes in typical networks and quantify the improvements over raw data collection alternatives. Finally, we present the application of Holmes to two realistic data theft investigation scenarios and discuss how combining network and host data can improve the efficiency and reliability of these investigations.

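    The paper's exact data structures are in the full text; a Bloom filter is one standard example of the kind of compact structure such a collector could use to record observed events at a fraction of raw-log cost (all names below are hypothetical):

      import hashlib

      class BloomFilter:
          def __init__(self, m=8192, k=4):
              self.m, self.k, self.bits = m, k, bytearray(m // 8)

          def _hashes(self, item):
              for i in range(self.k):
                  h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.m

          def add(self, item):
              for h in self._hashes(item):
                  self.bits[h // 8] |= 1 << (h % 8)

          def __contains__(self, item):
              return all(self.bits[h // 8] & (1 << (h % 8))
                         for h in self._hashes(item))

      events = BloomFilter()
      events.add("hostA -> fileserver:/designs.zip")
      print("hostA -> fileserver:/designs.zip" in events)  # True; no false negatives
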
  • Variable window power spectral density attack

    Page(s): 1 - 6

    Side-channel attacks permit the recovery of the secret key held within a cryptographic device. This paper presents a new EM attack in the frequency domain, using a power spectral density analysis that permits the use of variable spectral window widths for each trace of the data set, and demonstrates how this attack can therefore overcome both inter- and intra-round random insertion type countermeasures. We also propose a novel re-alignment method exploiting the minimal power markers exhibited by electromagnetic emanations. The technique can be used for the extraction and re-alignment of round data in the time domain.

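    A toy version of the core computation, a per-trace power spectral density whose window width may differ from trace to trace (the window-choice heuristic below is illustrative only):

      import numpy as np

      def psd(trace, win_len):
          # PSD over a window whose width can be chosen per trace.
          w = trace[:win_len] * np.hanning(win_len)
          return np.abs(np.fft.rfft(w)) ** 2 / win_len

      traces = [np.random.randn(4096) for _ in range(8)]
      # Each trace gets its own spectral window width (placeholder heuristic;
      # the paper derives it from the trace itself).
      spectra = [psd(t, 1024 + 128 * (i % 3)) for i, t in enumerate(traces)]
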
  • Improved Spread Spectrum multibit watermarking

    Page(s): 1 - 6

    This paper presents a Multibit Improved Spread Spectrum (MISS) modulation obtained by properly adjusting the energy of the pseudo-random sequences modulated by Code Division Multiplexing (CDM). We extend the one-bit spread spectrum watermarking approach proposed by Malvar and Florencio [1] to multibit watermarking by using an optimization procedure that achieves the best possible robustness and transparency while mitigating the cross-correlations among sequences and the host interference. The proposed multibit approach also trades off the resulting watermarking distortion against the host interference rejection. We describe the improved modulation method and present results illustrating its performance.

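    The one-bit improved spread spectrum (ISS) scheme of Malvar and Florencio rejects host interference by subtracting the host's projection on the spreading sequence at embedding time. A toy multibit CDM extension of that idea (alpha and lam are fixed here, whereas the paper optimizes the sequence energies):

      import numpy as np

      rng = np.random.default_rng(0)

      def iss_embed(x, bits, alpha=1.0, lam=1.0):
          n = len(x)
          seqs = rng.choice([-1.0, 1.0], size=(len(bits), n))
          s = x.copy()
          for b, u in zip(bits, seqs):
              xbar = x @ u / n          # host projection on this sequence
              s = s + (alpha * (2 * b - 1) - lam * xbar) * u
          return s, seqs

      def iss_detect(s, seqs):
          return [int(s @ u > 0) for u in seqs]

      x = rng.standard_normal(1024)     # stand-in for host transform coefficients
      s, seqs = iss_embed(x, [1, 0, 1])
      print(iss_detect(s, seqs))        # -> [1, 0, 1]
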
  • DWT-based additive image watermarking using the Student-t prior

    Page(s): 1 - 6

    In this work, a class of new blind watermark detectors is proposed for the DWT (Discrete Wavelet Transform)-based additive image watermarking problem. More specifically, we model the marginal subband wavelet distributions with the Student-t probability density function (pdf), from which we derive a new watermark detector. The proposed detector shows high watermark detection performance and increased robustness against intentional and unintentional attacks. Experimental results on real images demonstrate these properties by comparing the proposed detector with other state-of-the-art methods in the transform domain.

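    A sketch of the kind of detector this implies: a log-likelihood ratio test on subband coefficients modelled as i.i.d. Student-t (the df and scale values below are hypothetical; in practice they are fitted to each subband):

      import numpy as np
      from scipy.stats import t as student_t

      def lrt(y, w, gamma, df, scale):
          # H1: y contains the additive mark gamma*w; H0: it does not.
          ll1 = student_t.logpdf(y - gamma * w, df=df, scale=scale).sum()
          ll0 = student_t.logpdf(y, df=df, scale=scale).sum()
          return ll1 - ll0              # threshold chosen for a target FPR

      y = student_t.rvs(df=3, scale=2.0, size=4096, random_state=1)
      w = np.sign(np.random.default_rng(1).standard_normal(4096))
      print(lrt(y + 0.3 * w, w, 0.3, df=3, scale=2.0) >
            lrt(y,           w, 0.3, df=3, scale=2.0))   # True w.h.p.
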
  • Efficient privacy preserving K-means clustering in a three-party setting

    Page(s): 1 - 6

    User clustering is a common operation in online social networks, for example to recommend new friends. In previous work [5], Erkin et al. proposed a privacy-preserving K-means clustering algorithm for the semi-honest model, using homomorphic encryption and multi-party computation. This paper makes three contributions: 1) it addresses remaining privacy weaknesses in Erkin et al.'s protocol, 2) it minimizes user interaction and allows clustering of offline users (through a central party acting on users' behalf), and 3) it enables highly efficient non-linear operations, improving overall efficiency through its three-party structure. Our complexity and security analyses underscore the advantages of the solution.

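    The protocol itself is involved; as background, here is a toy Paillier cryptosystem showing the additive homomorphism (E(a)·E(b) mod n² decrypts to a+b) that such homomorphic-encryption clustering protocols build on. Toy key sizes, wholly insecure:

      import math, random

      p, q = 293, 433                  # toy primes; real keys are >= 1024 bits
      n, n2 = p * q, (p * q) ** 2
      lam = math.lcm(p - 1, q - 1)
      g = n + 1
      mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

      def enc(m):
          r = random.randrange(1, n)
          return (pow(g, m, n2) * pow(r, n, n2)) % n2

      def dec(c):
          return ((pow(c, lam, n2) - 1) // n * mu) % n

      print(dec((enc(17) * enc(25)) % n2))   # -> 42: addition under encryption
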
  • Secure binary embeddings for privacy preserving nearest neighbors

    Page(s): 1 - 6

    We present a novel method to securely determine whether two signals are similar to each other, and apply it to approximate nearest neighbor clustering. The proposed method relies on a locality-sensitive hashing scheme based on a secure binary embedding, computed using quantized random projections. Hashes extracted from the signals preserve information about the distance between the signals, provided this distance is small enough. If the distance between the signals is larger than a threshold, no information about the distance is revealed. Theoretical and experimental justification is provided for this property. Furthermore, when the randomized embedding parameters are unknown, the mutual information between the hashes of any two signals decays to zero exponentially fast as a function of the ℓ2 distance between the signals. Taking advantage of this property, we suggest that these binary hashes can be used to perform privacy-preserving nearest neighbor search with significantly lower complexity than protocols which use the actual signals.

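    A minimal sketch of a quantized-random-projection binary embedding of this kind (dimension, bit count and quantization step are illustrative):

      import numpy as np

      rng = np.random.default_rng(7)

      def binary_embed(x, A, w, delta):
          # Random projection, dither, quantize, keep the parity bit.
          # (A, w, delta) act as the secret key.
          return (np.floor((A @ x + w) / delta) % 2).astype(np.uint8)

      d, m, delta = 128, 512, 1.0
      A = rng.standard_normal((m, d))
      w = rng.uniform(0, delta, size=m)

      x = rng.standard_normal(d)
      hx = binary_embed(x, A, w, delta)
      near = x + 0.01 * rng.standard_normal(d)
      far = rng.standard_normal(d)
      # Hamming distance tracks signal distance only while it is small:
      print((hx != binary_embed(near, A, w, delta)).mean())   # small
      print((hx != binary_embed(far,  A, w, delta)).mean())   # close to 0.5
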
  • Painless migration from passwords to two factor authentication

    Page(s): 1 - 6

    In spite of the growing frequency and sophistication of attacks, two-factor authentication schemes have seen very limited adoption in the US, and passwords remain the single factor of authentication for most bank and brokerage accounts. Clearly the cost-benefit analysis is not as strongly in favor of two-factor authentication as we might imagine. Upgrading from passwords to a two-factor authentication system usually involves a large engineering effort, a discontinuity of user experience and a hard key management problem. In this paper we describe a system that converts a legacy password authentication server into a two-factor system. The existing password system is untouched, but is cascaded with a new server that verifies possession of a smartphone device. No alteration, patching or updates to the legacy system are necessary. There are then two alternative authentication paths: one using passwords alone, and a second using passwords and possession of the trusted device. The bank can leave the password authentication path available while users migrate to the two-factor scheme. Once migration is complete, the password-only path can be severed. We have implemented the system and carried out two-factor authentication against real accounts at several major banks.

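    The described architecture reduces to a cascade: the unmodified password check runs first, and migrated users must additionally prove device possession. A toy sketch (all data structures hypothetical):

      PASSWORDS = {"alice": "hunter2"}            # legacy store, untouched
      DEVICE_KEYS = {"alice": "phone-key-1"}      # new server's enrollment store
      MIGRATED = {"alice"}

      def legacy_password_ok(user, password):
          return PASSWORDS.get(user) == password  # existing server, unmodified

      def device_ok(user, proof):
          return DEVICE_KEYS.get(user) == proof   # new possession check

      def authenticate(user, password, proof=None):
          if not legacy_password_ok(user, password):
              return False
          if user in MIGRATED:                    # two-factor path
              return device_ok(user, proof)
          return True                             # password-only path, until severed

      print(authenticate("alice", "hunter2", "phone-key-1"))  # True
      print(authenticate("alice", "hunter2"))                 # False
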
  • Attack using reconstructed fingerprint

    Page(s): 1 - 6

    Most fingerprint recognition systems store a minutiae-based fingerprint template in a database. As the minutiae template is very compact, many take it for granted that these minutiae points do not contain sufficient information for reconstructing the original fingerprint. This paper proposes a scheme to reconstruct the fingerprint from minutiae points, based on the amplitude- and frequency-modulated (AM-FM) fingerprint model, in order to fool a system that requires a full print. We first generate a binary ridge pattern whose ridge flow is similar to that of the original fingerprint. The continuous phase is then reconstructed by removing the spirals in the phase image estimated from the ridge pattern. We further introduce a phase refinement process to reduce the artifacts created by discontinuities in the reconstructed phase image, which is the combination of the continuous phase and the spiral phase (computed from the minutiae points). Compared with previous works, our reconstructed fingerprint matches better against the original fingerprint and other impressions of the same finger, and it contains fewer artifacts.

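    In the cited AM-FM model a fingerprint is approximately a(x,y)·cos(ψ(x,y)), where the phase ψ is a smooth continuous part plus one phase spiral per minutia. A toy synthesis of a ridge pattern from that decomposition (parallel-ridge continuous phase and two made-up minutiae):

      import numpy as np

      def spiral_phase(shape, minutiae):
          # One phase spiral per minutia; polarity +1/-1 sets the spiral sense.
          h, w = shape
          yy, xx = np.mgrid[0:h, 0:w]
          psi = np.zeros(shape)
          for x0, y0, pol in minutiae:
              psi += pol * np.arctan2(yy - y0, xx - x0)
          return psi

      h = w = 256
      yy, _ = np.mgrid[0:h, 0:w]
      continuous = 2 * np.pi * 0.1 * yy            # toy phase: horizontal ridges
      minutiae = [(100, 120, +1), (180, 60, -1)]   # (x, y, polarity), made up
      ridges = np.cos(continuous + spiral_phase((h, w), minutiae))
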
  • Mixing fingerprints for generating virtual identities

    Page(s): 1 - 6

    This work explores the possibility of mixing two different fingerprints at the image level in order to generate a new fingerprint. To mix two fingerprints, each fingerprint is decomposed into two different components, viz., the continuous and spiral components. After pre-aligning the components of each fingerprint, the continuous component of one fingerprint is combined with the spiral component of the other fingerprint image. Experiments on a subset of the WVU fingerprint dataset show that the proposed approach can be used to generate virtual identities from images of two different fingers pertaining to a single individual or different individuals.

  • How contact pressure, contact time, smearing and oil/skin lotion influence the aging of latent fingerprint traces: First results for the binary pixel feature using a CWL sensor

    Page(s): 1 - 6

    Determining the age of latent fingerprint traces found at crime scenes has been an unresolved research issue for decades. In prior work, we suggested using optical, non-invasive image sensors in combination with a new aging feature called the 'binary pixel' feature (shown to have a characteristic logarithmic aging tendency on hard disk platters) to address this important research challenge. In this paper, we evaluate the influence of the fingerprint application process (such as contact pressure, contact time, smearing of the fingerprint, or contamination with skin lotion or oil) on the aging curves of the binary pixel feature (inter-application-factor variance). We furthermore evaluate differences between fingerprint traces applied in a similar way (intra-application-factor variance). Examining 25 fingerprint samples of a test subject with a total of 500 scans, we show that the application of substances to a finger appears to increase the amount of residue present, and that substances containing water significantly increase the aging speed of a fingerprint trace.

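    Given the reported logarithmic aging tendency, a natural way to quantify aging speed is a least-squares fit of f(t) = a·log t + b to a measured binary-pixel series; the numbers below are invented placeholders, not the paper's data:

      import numpy as np

      t = np.array([1.0, 2, 4, 8, 16, 32])           # hours since deposition
      feature = np.array([0.90, 0.82, 0.75, 0.69, 0.61, 0.55])  # hypothetical

      a, b = np.polyfit(np.log(t), feature, 1)
      print(f"aging speed (slope a) = {a:.3f}")      # more negative = faster aging
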
  • Fingerprinting Tor's hidden service log files using a timing channel

    Page(s): 1 - 6

    Hidden services are anonymously hosted services that can be accessed over Tor, an anonymity network. In this paper we present an attack that allows an entity to prove, once a machine suspected of hosting a hidden server has been confiscated, that the machine has in fact hosted a particular content. Our solution is based on leaving a timing-channel fingerprint in the confiscated machine's log file. In order to fingerprint the server's log through Tor, we first study the noise sources: the delay introduced by Tor and the log entries due to other users. We then describe our fingerprinting method, and analytically determine the detection probability and the rate of false positives. Finally, we empirically validate our results.

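    A noise-free toy of the fingerprinting idea: induce log entries in chosen time slots so that the pattern of occupied slots spells out the fingerprint, then test a seized log for that pattern (slot length and fingerprint are made up; the paper models Tor delays and other users' entries as noise):

      import random

      SLOT = 10.0                            # seconds per slot (illustrative)
      FINGERPRINT = [1, 0, 1, 1, 0, 0, 1, 0]

      def request_times(fp, start=0.0):
          # One induced request in every slot whose fingerprint bit is 1.
          return [start + i * SLOT + random.uniform(0, SLOT)
                  for i, b in enumerate(fp) if b]

      def decode(log_times, nslots, start=0.0):
          bits = [0] * nslots
          for ts in log_times:
              i = int((ts - start) // SLOT)
              if 0 <= i < nslots:
                  bits[i] = 1
          return bits

      print(decode(request_times(FINGERPRINT), len(FINGERPRINT)) == FINGERPRINT)
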
  • Cumulants-based Radar Specific Emitter Identification

    Page(s): 1 - 6

    In this paper we consider the problem of Radar Specific Emitter Identification (SEI), with the aim of distinguishing among several transmitting sources of the same kind, which is a very hot topic in the device forensics field. At the design stage, we introduce a classification technique based on suitable features evaluated from the cumulants of the signal emitted by the radar system. The devised features share some invariance properties which make them very attractive for the SEI problem. Hence, we use them as the input to a K-Nearest Neighbor (KNN) classifier which assigns the emitter to a specific class. At the analysis stage, we assess the performance of the new system on a real dataset containing radar signals from three identical airborne emitters. The results highlight that satisfactory classification performance is achieved.

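    A sketch of the pipeline shape (the paper's invariant features are more elaborate than raw cumulants): estimate a few cumulants per signal via k-statistics and feed them to a KNN classifier. The emitter data below is a synthetic stand-in:

      import numpy as np
      from scipy.stats import kstat
      from sklearn.neighbors import KNeighborsClassifier

      def cumulant_features(sig, orders=(2, 3, 4)):
          # k-statistics are unbiased estimators of the cumulants.
          return [kstat(sig, n) for n in orders]

      rng = np.random.default_rng(3)
      X = [cumulant_features(rng.standard_normal(1000) * (1.0 + 0.3 * c))
           for c in (0, 1) for _ in range(20)]        # two fake "emitters"
      y = [c for c in (0, 1) for _ in range(20)]

      clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
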
  • A Dempster-Shafer framework for decision fusion in image forensics

    Page(s): 1 - 6

    In this work a decision fusion strategy for image forensics is presented, based on the Dempster-Shafer Theory of Evidence. The goal is to automatically summarize the information provided by several image forensics tools, allowing both a binary and a soft interpretation of the global output produced. The proposed strategy is easily extendable to an arbitrary number of tools, it does not require that the output of the various tools be probabilistic, and it takes into account available information about tool reliability. Comparison with logical disjunction-based and SVM-based fusion shows an improvement in classification accuracy.

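    Dempster's rule of combination over the frame {T (tampered), O (original)}; mass assigned to the whole frame "TO" is one way to encode a tool's unreliability:

      def combine(m1, m2):
          keys = ("T", "O", "TO")
          out = {k: 0.0 for k in keys}
          conflict = 0.0
          for a in keys:
              for b in keys:
                  s = set(a) & set(b)
                  if not s:
                      conflict += m1[a] * m2[b]          # contradictory evidence
                  else:
                      out["TO" if len(s) == 2 else s.pop()] += m1[a] * m2[b]
          return {k: v / (1.0 - conflict) for k, v in out.items()}

      tool1 = {"T": 0.7, "O": 0.1, "TO": 0.2}   # confident tool, says tampered
      tool2 = {"T": 0.4, "O": 0.2, "TO": 0.4}   # less reliable tool
      print(combine(tool1, tool2))
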
  • A novel algorithm for obfuscated code analysis

    Page(s): 1 - 5

    Obfuscated code is machine or source code that has been made difficult for humans to read, usually to hide important business logic or malicious intent. There has been a dramatic increase in the use of obfuscated code for drive-by-download web browser attacks using JavaScript. In this paper we present a novel approach for detecting this type of code without the need for de-obfuscation, allowing its use in real-time traffic analysis programs such as Intrusion Prevention Systems or Web Application Firewalls.

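    The paper's actual features are in the full text; as an illustration of de-obfuscation-free detection, lightweight lexical statistics such as Shannon entropy and line length can be computed on the raw script (thresholds invented):

      import math
      from collections import Counter

      def entropy(s):
          # Shannon entropy of the script, in bits per character.
          counts, n = Counter(s), len(s)
          return -sum(c / n * math.log2(c / n) for c in counts.values())

      def looks_obfuscated(script, ent_thresh=4.5, line_thresh=500):
          long_line = any(len(l) > line_thresh for l in script.splitlines())
          return entropy(script) > ent_thresh or long_line
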
  • On the consistency of the biometric menagerie for irises and iris matchers

    Page(s): 1 - 6

    The biometric menagerie is useful in identifying the troublesome users within a biometric recognition system. In order to maximize the benefits of the menagerie classifications, it is imperative that the classifications remain constant for each subject. Irises present a unique scenario for classification, since a subject's two irises represent the same person yet are independent of each other. We have taken the ICE 2005 iris image dataset [8] and applied three different iris recognition algorithms to it. For each algorithm, we classified the subjects within the biometric menagerie and studied the consistency of the classifications across algorithms. We also split the dataset into subsets by left and right iris and studied the consistency of the classifications between irises. Our results show that the biometric menagerie classifications depend both on the algorithm and on which iris is chosen. One-third of the population was classified as a weak user by only a single algorithm, and a quarter of the population had irises with non-matching classifications, one of which was a weak user classification. These two subsets represent all the potentially weak users in the population, but the subjects cannot confidently be considered weak, due to the disagreement between the algorithms and the mismatched classifications of the two irises. To use the biometric menagerie effectively, a single algorithm must be used for all recognitions, and modalities must be kept in disjoint datasets, in order to reliably label weak users.

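    One simple way to label "weak users" in a menagerie-style analysis, shown here only to fix ideas (real menagerie classifications use both genuine and impostor score statistics):

      import numpy as np

      def weak_users(genuine_scores):
          # genuine_scores: subject -> list of genuine match scores.
          means = {s: np.mean(v) for s, v in genuine_scores.items()}
          cutoff = np.percentile(list(means.values()), 25)
          return {s for s, m in means.items() if m <= cutoff}
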
  • Analysis of non-aligned double JPEG artifacts for the localization of image forgeries

    Page(s): 1 - 6

    In this paper, we present a forensic algorithm to discriminate between original and forged regions in JPEG images, under the hypothesis that the tampered image presents a non-aligned double JPEG compression (NA-JPEG). Unlike previous approaches, the proposed algorithm does not need a manually selected suspect region in order to test for the presence or absence of NA-JPEG artifacts. Based on a new statistical model, the probability of each 8 × 8 DCT block being forged is automatically derived. Experimental results, considering different forensic scenarios, demonstrate the validity of the proposed approach.

  • Eye detection in the Middle-Wave Infrared spectrum: Towards recognition in the dark

    Page(s): 1 - 6

    In this paper, the problem of eye detection in the Middle-Wave Infrared (MWIR) spectrum is studied in order to demonstrate the importance of performing eye detection in the thermal band. While there are currently methods capable of performing automatic eye detection efficiently in the visible and active infrared (IR) spectrum (i.e., Near-IR and Short-Wave IR), eye detection in the thermal band is a very challenging problem. This is because, in the thermal domain, only limited features can be extracted from the eye region, mainly eyelashes and eyebrows, while features such as irises, pupils, and the superficial blood vessels of the conjunctiva are not clearly visible. Our proposed eye detection method operates in the MWIR band by combining a set of methodological steps including face normalization, integral projections, and template-based matching. In this paper, a 50-subject face database in the MWIR spectrum is first assembled and used to illustrate the challenges associated with the problem. Then, a set of experiments is performed to demonstrate the feasibility of eye detection in the MWIR band. Experiments show that (i) human eyes in still frontal face images captured in the MWIR band can be detected with promising results, (ii) MWIR face images can be matched efficiently to MWIR face images from the same session using both research and commercial software (originally not designed to address such a specific problem), and (iii) matching MWIR images from different sessions remains challenging. To the best of our knowledge, this is the first time in the open literature that the problem of thermal-based eye detection using still frontal face images has been investigated.

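    One of the named steps, integral projections, reduces the face to row and column intensity sums whose extrema indicate the eye band; a toy version (the band fractions are invented):

      import numpy as np

      def integral_projections(face):
          return face.sum(axis=1), face.sum(axis=0)   # row sums, column sums

      def coarse_eye_row(face, band=(0.2, 0.5)):
          # Search the upper part of the normalized face for the row-projection
          # minimum; a stand-in for the paper's full detection pipeline.
          rows, _ = integral_projections(face)
          lo, hi = int(band[0] * len(rows)), int(band[1] * len(rows))
          return lo + int(np.argmin(rows[lo:hi]))
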
  • Software code obfuscation by hiding control flow information in stack

    Page(s): 1 - 6

    Software code released to the user carries the risk of reverse engineering attacks. Obfuscation is a technique in which the software code is transformed into a semantically equivalent form that is harder to reverse engineer. In this paper, we propose an algorithm to obfuscate software programs. The basic idea of our algorithm is to remove vital information, such as jump instructions, from the program code section and hide it in the data section. These instructions are reconstructed to their original form dynamically at run time, thus keeping the program semantically equivalent to the original. Experimental results on programs from the SPECint benchmark suite indicate that the algorithm performs well in introducing instruction disassembly errors and control flow errors without excessively increasing the size of the program.

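    The paper operates on machine code; a loose Python analogue of the principle is to strip explicit call targets from the code and keep them only as data, so the control flow is reconstructed at run time:

      def check_license(): print("check")
      def run_app():       print("run")
      def exit_app():      print("exit")

      _flow = [check_license, run_app, exit_app]   # control flow hidden as data

      def main():
          for pc in range(len(_flow)):
              _flow[pc]()       # target resolved dynamically at each step

      main()
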
  • An analysis on attacker actions in fingerprint-copy attack in source camera identification

    Page(s): 1 - 6

    Multimedia forensics deals with the analysis of multimedia data to gather information on their origin and authenticity through the use of specific tools. An important question then arises: how reliable are these algorithms? In this work we consider the technique presented in [1], where it is shown how source camera identification can be attacked. In particular, the problem investigated concerns an adversary who estimates the sensor fingerprint from a set of images belonging to the person he wants to frame, and superimposes it onto an image acquired by a different camera, in order to accuse the innocent victim of being the author of that photo. In [1], a countermeasure against this attack, named the Triangle Test, is introduced. In this paper we analyze whether a more sophisticated attacker strategy can invalidate this countermeasure. Experimental results are provided to show how the attacker's actions could be improved.

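    The attack the abstract describes rests on the standard PRNU sensor-fingerprint model: estimate the fingerprint K from the victim's images via their noise residuals, then plant it on a foreign image. A rough numpy sketch (the denoiser and alpha are crude stand-ins):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def noise_residual(img):
          return img - gaussian_filter(img, sigma=1.5)   # crude denoiser

      def estimate_prnu(images):
          # ML-style estimate: K = sum(W_i * I_i) / sum(I_i**2).
          num = sum(noise_residual(i) * i for i in images)
          den = sum(i * i for i in images)
          return num / den

      def plant_fingerprint(victim_images, target, alpha=0.02):
          return target * (1 + alpha * estimate_prnu(victim_images))

      imgs = [np.random.rand(64, 64) for _ in range(20)]  # toy "victim" photos
      forged = plant_fingerprint(imgs, np.random.rand(64, 64))
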
  • Splay trees based early packet rejection mechanism against DoS traffic targeting firewall default security rule

    Page(s): 1 - 6

    As the size of firewall security policies grows, the packets discarded by the default security rule significantly affect system performance and become increasingly harmful in terms of filtering processing time. In this paper, we propose a mechanism to improve firewall performance through the early rejection of Denial of Service (DoS) traffic targeting the default security rule. To do so, the mechanism optimizes the order of the security policy filtering fields, using a traffic statistical scheme based on multilevel filtering modules, splay trees and hash tables. The proposed scheme can easily reject unwanted traffic at early stages, as well as accept repeated packets with fewer memory accesses, and thus lower overall packet matching time. The numerical results obtained by simulation demonstrate that the proposed mechanism significantly reduces the filtering processing time of DoS traffic targeting the firewall default security rule, compared to the related Self Adjusting Binary Search on Prefix Length (SA-BSPL) technique.

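    A grossly simplified sketch of the statistical reordering idea: keep per-field rejection counts and test the historically most-discriminating field first, so traffic destined for the default-deny rule exits after as few lookups as possible (the splay-tree and hash-table machinery is omitted):

      from collections import Counter

      FIELDS = ("src_ip", "dst_ip", "dst_port", "proto")
      reject_counts = Counter()

      def rejected_early(packet, allowed):
          # allowed: field -> set of values that appear in some accept rule.
          for f in sorted(FIELDS, key=lambda f: -reject_counts[f]):
              if packet[f] not in allowed[f]:
                  reject_counts[f] += 1     # learn which field rejects most
                  return True               # early rejection
          return False                      # must be matched against full policy
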
  • SmartCM: A smart card fault injection simulator

    Page(s): 1 - 6

    Smart cards are often the target of software or hardware attacks. The most recent attacks are based on fault injection, which modifies the behavior of the application. We propose an evaluation of the effects of fault propagation and of the generation of hostile applications inside the card. We designed several countermeasures and models of smart cards. We then evaluate the ability of these countermeasures to detect the faults, and the latency of the detection. In a second step, we evaluate the resulting mutants with respect to security properties, in order to focus only on the dangerous mutants.

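    A toy mutant analysis under a "skip one instruction" fault model, with a step-counter countermeasure; the interesting mutants are those that are dangerous yet undetected (the program and its ops are invented):

      PROGRAM = ["inc_ctr", "check_pin", "inc_ctr", "grant_if_ok", "inc_ctr"]

      def run(skip=None):
          ctr, ok, granted = 0, True, False   # 'ok' optimistically True: a
          for i, op in enumerate(PROGRAM):    # deliberately vulnerable pattern
              if i == skip:
                  continue                    # the injected fault
              if op == "inc_ctr":
                  ctr += 1
              elif op == "check_pin":
                  ok = False                  # a wrong PIN was entered
              elif op == "grant_if_ok" and ok:
                  granted = True
          return granted, ctr != 3            # (dangerous?, detected?)

      for skip in range(len(PROGRAM)):
          dangerous, detected = run(skip)
          if dangerous and not detected:
              print(f"undetected dangerous mutant: skip instruction {skip}")
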
  • BotCloud: Detecting botnets using MapReduce

    Page(s): 1 - 6

    Botnets are a major threat to the current Internet. Understanding the novel generation of botnets relying on peer-to-peer networks is crucial for mitigating this threat. Nowadays, botnet traffic is mixed with a huge volume of benign traffic due to almost ubiquitous high-speed networks. Such networks can be monitored using IP flow records, but the forensic analysis of these records forms the major computational bottleneck. We propose in this paper a distributed computing framework that leverages a host dependency model and an adapted PageRank [1] algorithm. We report experimental results from an open-source Hadoop cluster [2] and highlight the performance benefits when using real network traces from an Internet operator.

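    The per-record logic of a MapReduce PageRank is small; Hadoop only shards the "map" phase (emit rank shares along edges) and "reduce" phase (sum shares per host) across the cluster. A single-process sketch over a toy host dependency graph:

      from collections import defaultdict

      def pagerank(edges, iters=20, d=0.85):
          nodes = {n for e in edges for n in e}
          out = defaultdict(list)
          for src, dst in edges:
              out[src].append(dst)
          rank = {n: 1.0 / len(nodes) for n in nodes}
          for _ in range(iters):
              contrib = defaultdict(float)           # "map": emit (dst, share)
              for src, dsts in out.items():
                  for dst in dsts:
                      contrib[dst] += rank[src] / len(dsts)
              rank = {n: (1 - d) / len(nodes) + d * contrib[n]   # "reduce"
                      for n in nodes}
          return rank

      edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "c")]   # toy flows
      r = pagerank(edges)
      print(max(r, key=r.get))    # "c": the most "central" host in the toy graph
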
  • A study of face recognition of identical twins by humans

    Page(s): 1 - 6

    Recent studies have shown that face recognition performance degrades considerably for images of identical twins. Human face matching capability is often considered a benchmark for assessing and improving automatic face recognition algorithms. In this work, we investigate the human capability to distinguish between identical twins. If humans are able to distinguish between facial images of identical twins, it would suggest that they can identify discriminating facial traits that could be useful for developing algorithms for this very challenging problem. Experiments with different viewing times and imaging conditions are conducted to determine whether humans viewing a pair of facial images can perceive if the images belong to the same person or to a pair of identical twins. The experiments are conducted on 186 twin subjects, making this the largest such study in the literature to date. We observe that humans perform the task significantly better when given enough time, and tend to make more mistakes when the images differ in imaging conditions. Our analysis also suggests that humans look for facial marks such as moles and scars to make their decision, and do worse when presented with images lacking such marks. Experiments with automatic face recognition systems show that human observers outperform automatic matchers on this task.
