
Computational Intelligence for Financial Engineering & Economics (CIFEr), 2012 IEEE Conference on

Date: 29-30 March 2012


Displaying Results 1 - 25 of 70
  • A multi-covariate semi-parametric conditional volatility model using probabilistic fuzzy systems

    Publication Year: 2012, Page(s): 1 - 8
    Cited by: Papers (1)
    PDF (282 KB)

    Value at Risk (VaR) has been successfully estimated using single-covariate probabilistic fuzzy systems (PFS), a method which combines a linguistic description of the system behaviour with statistical properties of the data. In this paper, we consider VaR estimation based on a PFS model for the density forecast of a continuous response variable conditional on a high-dimensional set of covariates. The PFS model parameters are estimated by a novel two-step process. The performance of the proposed model is compared to that of a GARCH model for VaR estimation of the S&P 500 index. Furthermore, the additional information and process understanding provided by the different interpretations of the PFS models are illustrated. Our findings show that the validity of GARCH models is sometimes rejected, while that of PFS models of VaR is never rejected. Additionally, the PFS model captures both instants and periods of high volatility, and leads to less conservative models.
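    The GARCH benchmark the PFS model is compared against can be illustrated with a minimal sketch. The GARCH(1,1) recursion is standard; the parameter values `w`, `a`, `b` and the 95% normal quantile below are illustrative assumptions, not the paper's calibration.

```python
# Minimal GARCH(1,1)-style one-step VaR sketch (illustrative parameters):
# sigma2_{t+1} = w + a * r_t^2 + b * sigma2_t, and the 95% VaR is taken as
# the quantile of an assumed N(0, sigma) return distribution.
import math

def garch_var(returns, w=1e-6, a=0.1, b=0.85, z=1.645):
    """One-step-ahead 95% VaR (a positive loss threshold)."""
    sigma2 = sum(r * r for r in returns) / len(returns)  # init at sample variance
    for r in returns:
        sigma2 = w + a * r * r + b * sigma2
    return z * math.sqrt(sigma2)

print(garch_var([0.01, -0.02, 0.015, -0.01, 0.005]))
```

    Larger recent shocks feed directly into the variance recursion, so the VaR estimate adapts to volatility clustering, which is exactly the behaviour the PFS model is benchmarked against.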

  • A mean-reverting strategy based on fuzzy transform residuals

    Publication Year: 2012, Page(s): 1 - 7
    PDF (197 KB)

    This paper develops a stock market price model based on detrending a time series by iteratively applying the fuzzy transform and computing residuals over a given lookback period. The model is used to define a mean-reverting strategy with stationary and Gaussian residuals. A preliminary experiment compares the proposed strategy to the well-established GARCH method.
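    The trading idea can be sketched as follows. A plain moving average stands in for the fuzzy transform (which is itself a weighted local smoothing); the lookback window and the z-score band are invented illustrative parameters.

```python
# Mean-reversion sketch: smooth prices over a lookback window, standardize
# the residual (price minus local trend), and trade when it is extreme.
import statistics

def residual_zscore(prices, lookback=5):
    """z-score of the latest residual relative to the local trend."""
    window = prices[-lookback:]
    trend = sum(window) / lookback
    residuals = [p - trend for p in window]
    sd = statistics.pstdev(residuals)
    return 0.0 if sd == 0 else residuals[-1] / sd

def signal(prices, lookback=5, band=1.0):
    z = residual_zscore(prices, lookback)
    if z > band:
        return "sell"   # price above trend: expect reversion down
    if z < -band:
        return "buy"    # price below trend: expect reversion up
    return "hold"

print(signal([100, 101, 100, 99, 103]))  # last price well above local trend
```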

  • Modeling principles in fuzzy time series forecasting

    Publication Year: 2012, Page(s): 1 - 7
    PDF (554 KB)

    Fuzzy time series (FTS) forecasting is one of the most widely applied extensions of fuzzy set theory. Since it was first introduced by Song and Chissom [1,2], several improvements have been proposed by many scholars and its practical popularity has increased gradually. While FTS methods have been applied to many different problems, fundamental drawbacks are found in the existing literature. The stationarity problem, non-linear datasets and the identification of initial fuzzy intervals are some of the debated topics in FTS research. This paper discusses the principles of FTS modeling and deals with common mistakes in the literature.

  • Linguistic decision making with probabilistic information and induced aggregation operators

    Publication Year: 2012, Page(s): 1 - 7
    PDF (244 KB)

    We develop a new model for linguistic decision making by using probabilistic information and induced aggregation operators. We introduce the induced linguistic probabilistic ordered weighted average (ILPOWA). It is a new aggregation operator that uses probabilities and OWA operators in the same formulation, considering the degree of importance that each concept has in the formulation. Moreover, it uses complex attitudinal characters that can be assessed by using order-inducing variables. Furthermore, we analyze an uncertain environment where the information cannot be studied on a numerical scale but can be represented with linguistic variables. We study this approach in a group decision making context where we use multi-person aggregation operators in order to deal with the opinions of several experts in the analysis. We focus on a problem regarding importation strategies in the administration of a country.
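    A hedged numeric sketch of an induced probabilistic OWA aggregation in the spirit of ILPOWA: arguments are reordered by order-inducing variables, then combined with a convex `beta` mix of OWA weights and probabilities. The mix form is a common formulation in this literature; the paper's exact operator, and its handling of linguistic variables, may differ.

```python
# Induced probabilistic OWA sketch (illustrative, not the paper's exact
# ILPOWA definition): reorder by the inducing variable, then mix weights.
def ipowa(pairs, owa_weights, probs, beta=0.5):
    """pairs: list of (order_inducing_value, argument)."""
    # arguments reordered by decreasing order-inducing variable
    ordered = [a for _, a in sorted(pairs, key=lambda t: -t[0])]
    # each probability follows its argument through the reordering
    p_ordered = [p for (_, _), p in
                 sorted(zip(pairs, probs), key=lambda t: -t[0][0])]
    mixed = [beta * w + (1 - beta) * p for w, p in zip(owa_weights, p_ordered)]
    return sum(m * a for m, a in zip(mixed, ordered))

result = ipowa([(3, 10.0), (1, 40.0), (2, 20.0)],
               [0.5, 0.3, 0.2], [0.2, 0.5, 0.3], beta=1.0)
print(result)  # beta = 1 reduces to the pure induced OWA: 19.0
```

    Setting `beta = 0` recovers a purely probabilistic aggregation, so the single parameter interpolates between the attitudinal (OWA) and objective (probabilistic) views described in the abstract.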

  • The use of Neural Networks for modeling nonlinear mean reversion: Measuring efficiency and integration in ADR markets

    Publication Year: 2012, Page(s): 1 - 8
    PDF (270 KB)

    We propose the use of a Neural Network (NN) methodology for evaluating models of time series that exhibit nonlinear mean reversion, such as those stemming from equilibrium relationships that are affected by transaction costs or institutional rigidities. Given the vast array of such models found in the literature, the proposed NN procedure represents a useful graphical tool, providing the researcher with the ability to visualize the data before choosing the most appropriate approach for modeling mean-reversion dynamics with either a Threshold Autoregression (TAR), a Smooth Transition Autoregression (STAR), or any hybrid model. Our case study involves understanding the nature of cross-listed stocks (ADRs) and the degree of market integration and efficiency, as captured by the NN methodology. This is done through an analysis of the intraday price discrepancies of cross-listed French, Mexican and American stocks. The results of the NN methodology are relevant in describing the arbitrage forces that maintain the Law of One Price in these ADR markets, and thus provide more explicit insight into how these markets are integrated.

  • A learning adaptive Bollinger band system

    Publication Year: 2012, Page(s): 1 - 8
    Cited by: Papers (1)
    PDF (755 KB)

    This paper introduces a novel forecasting algorithm that is a blend of micro and macro modelling perspectives when using Artificial Intelligence (AI) techniques. The micro component concerns the fine-tuning of technical indicators with population-based optimization algorithms. This entails learning a set of parameters that optimize some economically desirable fitness function so as to create a dynamic signal processor which adapts to changing market environments. The macro component concerns combining the heterogeneous set of signals produced by a population of optimized technical indicators. The combined signal is derived from a Learning Classifier System (LCS) framework that combines population-based optimization and reinforcement learning (RL). This research is motivated by two factors: non-stationarity and cyclical profitability (as implied by the adaptive market hypothesis [10]). These two properties are not necessarily in contradiction, but they do highlight the need to adapt and create new models while simultaneously being able to consult others which were previously effective. The results demonstrate that the proposed system is effective at combining the signals into a coherent, profitable trading system, but that the performance of the system is bounded by the quality of the solutions in the population.
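    The technical-indicator layer being fine-tuned can be illustrated with a plain Bollinger band rule; the `window` and `k` parameters below are exactly the kind of values a population-based optimizer would adapt (the defaults here are conventional, not the paper's).

```python
# Bollinger band signal sketch: sell above the upper band, buy below the
# lower band; window and k are the tunable parameters.
import statistics

def bollinger_signal(prices, window=20, k=2.0):
    w = prices[-window:]
    mid = sum(w) / len(w)
    sd = statistics.pstdev(w)
    upper, lower = mid + k * sd, mid - k * sd
    last = prices[-1]
    if last > upper:
        return "sell"
    if last < lower:
        return "buy"
    return "hold"
```

    In the paper's framework, many such parameterized indicators would run in parallel and their signals be combined and reweighted by the LCS layer.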

  • Complex stock trading strategy based on Particle Swarm Optimization

    Publication Year: 2012, Page(s): 1 - 6
    Cited by: Papers (2)
    PDF (483 KB)

    Trading rules have been utilized in the stock market to make profit for more than a century. However, a single trading rule may not be sufficient to predict the stock price trend accurately. Although some complex trading strategies combining various classes of trading rules have been proposed in the literature, they often pick only one rule for each class, which may lose valuable information from other rules in the same class. In this paper, a complex stock trading strategy, namely the weight reward strategy (WRS), is proposed. WRS combines the two most popular classes of trading rules: moving average (MA) and trading range break-out (TRB). For both MA and TRB, WRS includes different combinations of the rule parameters, giving a universe of 140 component trading rules in all. Each component rule is assigned a starting weight, and a reward/penalty mechanism based on profit is proposed to update these weights over time. To determine the best parameter values of WRS, we employ an improved time-variant Particle Swarm Optimization (PSO) algorithm with the objective of maximizing the annual net profit generated by WRS. The experiments show that our proposed WRS optimized by PSO outperforms the best moving average and trading range break-out rules.
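    The two rule classes and the reward/penalty weighting can be sketched as follows. The multiplicative update is an illustrative choice, and the real WRS universe spans 140 parameter combinations rather than the single instance of each rule shown here.

```python
# One MA rule, one TRB rule, and a simple profit-based weight update
# (illustrative scheme, not the paper's exact mechanism).
def ma_rule(prices, short=3, long_=5):
    s = sum(prices[-short:]) / short
    l = sum(prices[-long_:]) / long_
    return 1 if s > l else -1          # +1 buy, -1 sell

def trb_rule(prices, lookback=5):
    hi = max(prices[-lookback - 1:-1])  # range excludes the latest price
    lo = min(prices[-lookback - 1:-1])
    if prices[-1] > hi:
        return 1                        # break-out above the range: buy
    if prices[-1] < lo:
        return -1                       # break-down below the range: sell
    return 0

def update_weight(weight, signal, realized_return, step=0.1):
    profit = signal * realized_return   # rule was right if signs agree
    return weight * (1 + step) if profit > 0 else weight * (1 - step)
```

    Aggregating the component signals as a weight-weighted vote, with weights updated after each period, gives the reward/penalty dynamic described in the abstract; PSO then searches over the rule parameters and update constants.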

  • Limit order placement across multiple exchanges

    Publication Year: 2012, Page(s): 1 - 8
    PDF (457 KB)

    The US equity exchange market is organized as a National Market System, enforcing price priority across exchanges but otherwise allowing competition for order flow among exchanges. This flexibility has naturally evolved into a market where exchanges vary in quality and cost of execution. To meet the obligation of best execution, a broker must employ a strategy for selecting the exchange to which limit orders are placed. We consider a market consisting of exchanges with different pricing and priority schemes, and derive a theoretical model to estimate the delay until execution of limit orders. We estimate model parameters from quote and trade data of stocks in the Russell 1000 index, and use them to evaluate the expected delay per exchange. We show that inverted-cost exchanges and price-size priority exchanges offer improved performance over short time intervals, while traditional and price-time priority exchanges offer improved performance over longer time intervals. We observe that while exchanges with large market share may have high market order liquidity, they may in fact have low limit order liquidity. Low limit order liquidity in turn leads to low execution quality of algorithmic orders. For time-sensitive algorithmic orders, we show a trade-off between execution quality and cost, and identify, for each stock, the exchanges on an efficient frontier that achieve a good trade-off given current exchange pricing.

  • A new approach to asset pricing with rational agents behaving strategically

    Publication Year: 2012, Page(s): 1 - 7
    PDF (197 KB)

    The volatility of stock prices is difficult to explain within the confines of rational pricing models. Changes in prices have become permanent; therefore, if we retain the hypothesis of rational agent behavior, we must give a new explanation for the pricing of financial assets at any moment in time. In a model based on an original mathematical framework, we introduce persistent time-varying prices resulting from strategic interactions between rational agents. We demonstrate that in a close-to-equilibrium market, actual prices give the best approximation of the fundamental value. We also explain why, in some circumstances, rational behavior may lead to the development of a bubble or the surge of a financial crisis.

  • Bio-inspired optimization of Fuzzy Cognitive Maps for their use as a means in the pricing of complex assets

    Publication Year: 2012, Page(s): 1 - 8
    Cited by: Papers (2)
    PDF (321 KB)

    Fuzzy Cognitive Maps (FCMs) are well-suited to modeling the behavior of complex systems. However, since they depend on the quality of human knowledge, they are prone to subjective misestimation. Moreover, even domain experts can fully describe only systems of limited complexity, which in turn limits the effectiveness of FCMs in real-world applications. Thus, a technique is needed that, on the one hand, eases their use and, on the other hand, optimizes the formal structure of an FCM. Against that background, this article elucidates the use of different bio-inspired algorithms to optimize FCMs: Evolutionary Strategies (ES), Particle Swarm Optimization (PSO), and Big Bang-Big Crunch (BB-BC). The proposed approaches are applied to a novel valuation technique using FCMs. Besides demonstrating principal feasibility, we show the technique's potential to facilitate the use of vague and incomplete expert knowledge in the valuation of complex investment opportunities such as, but not limited to, young high-technology ventures.
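    The FCM machinery being optimized can be sketched in a few lines: concept activations are iterated through a weight matrix and squashed by a sigmoid. The weight entries are precisely what ES, PSO, or BB-BC would tune; the tiny two-concept map and its weights below are invented for illustration.

```python
# Minimal FCM inference sketch: one synchronous update is
# A_i <- f(A_i + sum_{j != i} A_j * w_ji), with a sigmoid squashing f.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fcm_step(activations, weights):
    n = len(activations)
    return [sigmoid(activations[i] +
                    sum(activations[j] * weights[j][i]
                        for j in range(n) if j != i))
            for i in range(n)]

def fcm_run(activations, weights, steps=20):
    for _ in range(steps):
        activations = fcm_step(activations, weights)
    return activations
```

    An optimizer would evaluate candidate weight matrices by how well the converged activations reproduce expert-supplied valuations, replacing hand-tuned edge weights with searched ones.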

  • A multi-phase, flexible, and accurate lattice for pricing complex derivatives with multiple market variables

    Publication Year: 2012, Page(s): 1 - 8
    PDF (328 KB)

    With the rapid growth of financial markets, many complex derivatives have been structured to meet specific financial goals. But most complex derivatives have no analytical formulas for their prices, e.g., when more than one market variable is factored in. As a result, they must be priced by numerical methods such as lattices. A derivative is called multivariate if its value depends on more than one market variable, and a lattice for a multivariate derivative is called a multivariate lattice. This paper proposes a flexible multi-phase method to build a multivariate lattice for pricing derivatives accurately. First, the original, correlated processes are transformed into uncorrelated ones by the orthogonalization method. A multivariate lattice is then constructed for the transformed, uncorrelated processes. To sharply reduce the nonlinearity error of many numerical pricing methods, our lattice has the flexibility to match the so-called “critical locations” - the locations where nonlinearity of the derivative's value function occurs. Numerical results for vulnerable options, insurance contracts with a guaranteed minimum withdrawal benefit, and defaultable bonds show that our methodology can be applied to the pricing of a wide range of complex financial contracts.

  • Pricing discrete Asian barrier options on lattices

    Publication Year: 2012, Page(s): 1 - 8
    PDF (186 KB)

    Asian barrier options are barrier options whose trigger is based on an average underlying price. They provide the advantages of both Asian options and barrier options. This paper introduces the first quadratic-time lattice algorithm to price European-style Asian barrier options. It is by far the most efficient lattice algorithm with convergence guarantees. The algorithm relies on Lagrange multipliers to optimally distribute the number of states across the nodes of the multinomial lattice. We also present experimental results demonstrating the effectiveness and efficiency of our algorithm by comparison with Monte Carlo simulations.
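    The Monte Carlo benchmark the lattice is compared against can be sketched directly: simulate geometric Brownian motion paths, knock the option out when the running average breaches the barrier, and average the discounted Asian payoffs. All contract parameters below are invented for illustration.

```python
# Monte Carlo sketch for an up-and-out Asian (average-trigger) call.
import math
import random

def mc_asian_barrier(s0=100, k=100, barrier=120, r=0.05, sigma=0.2,
                     t=1.0, steps=50, paths=2000, seed=7):
    random.seed(seed)
    dt = t / steps
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        s, running, alive = s0, 0.0, True
        for i in range(1, steps + 1):
            s *= math.exp(drift + vol * random.gauss(0, 1))
            running += s
            if running / i >= barrier:     # knock-out on the running average
                alive = False
                break
        if alive:
            total += max(running / steps - k, 0.0)  # Asian call payoff
    return math.exp(-r * t) * total / paths
```

    This estimator converges only as O(1/sqrt(paths)), which is the inefficiency the paper's quadratic-time lattice, with its convergence guarantees, is designed to avoid.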

  • Improving non-parametric option pricing during the financial crisis

    Publication Year: 2012, Page(s): 1 - 7
    PDF (328 KB)

    Financial option prices have experienced excessive volatility in response to the recent economic and financial crisis. During crisis periods, financial markets are, in general, subject to abrupt regime shifts, which pose a significant challenge to option pricing models. In this context, swiftly evolving markets and institutions require valuation models that are capable of recognizing and adapting to such changes. Both parametric and non-parametric pricing models have shown poor forecast ability for options traded in late 1987 and 2008. Surprisingly, the pricing inaccuracy was more pronounced for non-parametric models than for parametric models. To address this problem, we propose a novel hybrid methodology - the modular neural network-fuzzy learning vector quantization (MNN-FLVQ) model - that uses Kohonen unsupervised learning and fuzzy clustering algorithms to classify S&P 500 stock market index options, and thereby detect a regime shift. In our empirical application, the results for the 2008 financial crisis demonstrate that the MNN-FLVQ model is superior to the competing methods with regard to option pricing during regime shifts.

  • Closed-form mortgage pricing formula with outstanding principal as prepayment value

    Publication Year: 2012, Page(s): 1 - 7
    PDF (233 KB)

    This study considers all the possible actions a borrower may take over the mortgage horizon, i.e., to default, to prepay, or to maintain the mortgage. We then provide an effective and accurate pricing formula, which not only considers the effect of default on the mortgage value, but also more accurately captures the impact of prepayment risk. In our model, we define the prepayment value of the mortgage as the amount of outstanding principal. In contrast, previous literature defines the prepayment value as a constant proportion of the maintaining value of the mortgage. Finally, based on the closed-form pricing formula, we analyze the yield, duration and convexity of a risky mortgage loan, providing a better framework for risk management.

  • Exponential length of intervals for fuzzy time series forecasting

    Publication Year: 2012, Page(s): 1 - 6
    PDF (226 KB)

    This paper investigates the effective length of intervals for the fuzzy time series forecasting (FTSF) method. The length of intervals plays a significant role in forecasting accuracy. An exponential length-of-intervals method is proposed, and an empirical study is performed on forecasting the time charter rates of Handymax dry bulk carrier ships. Unlike the existing literature, the proposed model is compared not only with previous FTS models that apply different length-of-intervals methods, but also with conventional time series methods such as the generalized autoregressive conditional heteroscedasticity (GARCH) model. The root mean squared error (RMSE) and mean absolute percentage error (MAPE) of the proposed method are found to be superior to those of the compared methods.
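    An exponential-length partition of the universe of discourse can be sketched as follows: interval widths grow geometrically from the low end, so that more, narrower intervals cover the dense region of the data. The growth factor here is an illustrative choice, not the paper's.

```python
# Partition [lo, hi] into n intervals whose widths grow by a constant
# factor, then rescale so the partition covers the interval exactly.
def exponential_intervals(lo, hi, n, growth=1.5):
    raw = [growth ** i for i in range(n)]      # geometric raw widths
    scale = (hi - lo) / sum(raw)
    bounds, x = [lo], lo
    for w in raw:
        x += w * scale
        bounds.append(x)
    bounds[-1] = hi                            # guard against float drift
    return bounds

print(exponential_intervals(0.0, 100.0, 4))
```

    Each fuzzy set of the FTS model would then be anchored on one of these intervals, in place of the equal-length partition used by many earlier methods.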

  • Decision making in complex environments with generalized aggregation operators

    Publication Year: 2012, Page(s): 1 - 7
    PDF (222 KB)

    We introduce a new framework for aggregating information by using the concept of unified aggregation operators. Thus, we can deal with complex environments where the information comes from different sources. Furthermore, we also use generalized aggregation operators, which include a wide range of aggregation operators by using the concept of generalized means. We introduce the generalized unified aggregation operator (GUAO). Its main advantage is that it unifies a wide range of aggregation operators according to the available information. Thus, we can use different sources of information, including subjective and objective information and the attitudinal character of the decision maker. We apply the new approach to a decision making problem regarding the general strategy for the public debt ratio in the convergence criteria of the European Union (EU) integration process.

  • MIMO evolving functional fuzzy models for interest rate forecasting

    Publication Year: 2012, Page(s): 1 - 8
    Cited by: Papers (2)
    PDF (161 KB)

    Forecasting the term structure of interest rates plays a crucial role in portfolio management, household finance decisions, business investment planning, and policy formulation. This paper proposes the use of evolving fuzzy inference systems for interest rate forecasting in the US and Brazilian markets. Evolving models provide a high level of system adaptation and learn the system dynamics continuously, which is essential in uncertain environments such as fixed-income markets. Besides evaluating the usefulness of evolving methods for forecasting yields, this paper proposes forecasting interest rate factors with multi-input-multi-output (MIMO) evolving systems, which reduces computational time complexity and provides more accurate forecasts. Results based on mean squared forecast errors show that MIMO evolving methods outperform traditional benchmarks for short- and long-term maturities in both fixed-income markets evaluated.

  • Group decision making in fuzzy environment

    Publication Year: 2012, Page(s): 1 - 5
    PDF (277 KB)

    In this paper, a methodology for solving a group decision making problem is developed. The method obtains preference relationships among the alternatives after incorporating the experts' pairwise opinions about the alternatives, expressed in linguistically or fuzzily defined terms. The importance of experts' opinions in the final ranking of the alternatives is calculated through an agreement matrix. The concept of PROMETHEE is then applied to rank the given set of alternatives in a fuzzy environment.

  • Behavior based learning in identifying High Frequency Trading strategies

    Publication Year: 2012, Page(s): 1 - 8
    Cited by: Papers (2)
    PDF (795 KB)

    Electronic markets have emerged as popular venues for the trading of a wide variety of financial assets, and computer-based algorithmic trading has also asserted itself as a dominant force in financial markets across the world. Identifying and understanding the impact of algorithmic trading on financial markets has become a critical issue for market operators and regulators. We propose to characterize traders' behavior in terms of the reward functions most likely to have given rise to the observed trading actions. Our approach is to model trading decisions as a Markov Decision Process (MDP), and use observations of an optimal decision policy to find the reward function. This is known as Inverse Reinforcement Learning (IRL), and a variety of approaches to this problem are known. Our IRL-based approach to characterizing trader behavior strikes a balance between two desirable features in that it captures key empirical properties of order book dynamics and yet remains computationally tractable. Using an IRL algorithm based on linear programming, we are able to achieve more than 90% classification accuracy in distinguishing High Frequency Trading from other trading strategies in experiments on a simulated E-Mini S&P 500 futures market. The results of these empirical tests suggest that High Frequency Trading strategies can be accurately identified and profiled based on observations of individual trading actions.

  • Hierarchical Temporal Memory-based algorithmic trading of financial markets

    Publication Year: 2012, Page(s): 1 - 8
    PDF (823 KB)

    This paper explores the possibility of using the Hierarchical Temporal Memory (HTM) machine learning technology to create a profitable software agent for trading financial markets. Technical indicators, derived from intraday tick data for the E-mini S&P 500 futures market (ES), were used as feature vectors for the HTM models. All models were configured as binary classifiers, using a simple buy-and-hold trading strategy, and followed a supervised training scheme. The data set was divided into a training set, a validation set and three test sets: bearish, bullish and horizontal. The best performing model on the validation set was tested on the three test sets. Artificial Neural Networks (ANNs) were subjected to the same data sets in order to benchmark HTM performance. The results suggest that the HTM technology can be used together with a feature vector of technical indicators to create a profitable trading algorithm for financial markets. Results also suggest that HTM performance is, at the very least, comparable to commonly applied neural network models.

  • Robust stock trading using fuzzy decision trees

    Publication Year: 2012, Page(s): 1 - 8
    Cited by: Papers (2)
    PDF (321 KB)

    Stock market analysis has traditionally proven difficult due to the large amount of noise present in the data. Different approaches have been proposed to predict stock prices, including the use of computational intelligence and data mining techniques. Many of these methods operate on closing stock prices or on known technical indicators. A limited number of studies have shown that Japanese candlestick analysis serves as a rich information source about the market. In this paper, decision trees based on the ID3 algorithm are used to derive short-term trading decisions from candlesticks. To handle the large amount of uncertainty in the data, both inputs and output classifications are fuzzified using well-defined membership functions. Testing results of the derived decision trees show significant gains compared to ideal mid- and long-term trading simulations, in both frictionless and realistic markets.

  • Optimal selection of simulated years of catastrophe activity for improved efficiency in insurance risk management

    Publication Year: 2012, Page(s): 1 - 7
    PDF (492 KB)

    Catastrophe risk models used in the insurance industry consist of complex Monte Carlo simulations of events such as earthquakes or hurricanes. Due to the large uncertainty in the characteristics and severity of these events, the number of samples needs to be large enough to capture the spectrum of possible consequences. This creates operational challenges in terms of computational time. The tendency in the industry has been to reduce the computational burden by selecting a subset of samples of years of simulated activity. The subsampling process usually takes place after the larger sample has been built and, therefore, it is not possible to apply traditional variance-reduction techniques within the Monte Carlo process. In this paper, an algorithm based on evolutionary computation is presented to construct an optimal subset of samples that minimizes the statistical variation between the larger and smaller sets for a specific portfolio of risks. Numerical testing has shown a tenfold improvement in computational time with a 90% reduction in sampling error relative to a random subsampling process.
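    The subset-selection objective can be sketched with a toy (1+1)-style evolutionary search: mutate a candidate subset of simulated years and keep it when the subset's mean annual loss tracks the full sample better. The fitness function here compares only means; the production objective would compare richer portfolio statistics.

```python
# Toy evolutionary subsampling: minimize the gap between the full-sample
# mean loss and the subset mean loss (illustrative objective and search).
import random

def fitness(losses, subset):
    full = sum(losses) / len(losses)
    sub = sum(losses[i] for i in subset) / len(subset)
    return abs(full - sub)                      # lower is better

def select_subset(losses, k, iters=500, seed=1):
    random.seed(seed)
    best = random.sample(range(len(losses)), k)
    best_fit = fitness(losses, best)
    for _ in range(iters):
        cand = best[:]
        cand[random.randrange(k)] = random.randrange(len(losses))  # mutate one slot
        if len(set(cand)) == k:                 # keep years distinct
            f = fitness(losses, cand)
            if f < best_fit:
                best, best_fit = cand, f
    return best, best_fit
```

    A random subsample would leave the mean-loss gap at its sampling-error level, whereas the accepted-improvement search drives it down, which is the intuition behind the reported reduction in sampling error.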

  • Event-based historical Value-at-Risk

    Publication Year: 2012, Page(s): 1 - 7
    Cited by: Papers (1)
    PDF (228 KB)

    Value-at-Risk (VaR) is an important tool for assessing portfolio risk. When calculating VaR based on historical stock return data, we hypothesize that this historical data is sensitive to outliers caused by news events in the sampled period. In this paper, we investigate whether VaR accuracy can be improved by considering news events as additional input in the calculation. This involves processing the historical data to reflect the impact of news on the stock returns. Our experiments show that when an event occurs, removing the event-induced noise from the measured stock prices for a small time window can improve VaR predictions.
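    The idea of dropping returns inside small windows around news events before computing historical VaR can be sketched as follows. The historical-simulation VaR is standard; the event indices and window size are invented illustrative inputs.

```python
# Historical VaR, with and without small exclusion windows around events.
def historical_var(returns, alpha=0.95):
    """Historical-simulation VaR: the k-th worst loss at level alpha."""
    losses = sorted((-r for r in returns), reverse=True)   # worst first
    k = max(1, int(round((1 - alpha) * len(losses))))      # tail size
    return losses[k - 1]

def event_filtered_var(returns, event_days, window=1, alpha=0.95):
    """Recompute VaR after masking returns within +/- window of each event."""
    masked = {d for e in event_days for d in range(e - window, e + window + 1)}
    kept = [r for i, r in enumerate(returns) if i not in masked]
    return historical_var(kept, alpha)
```

    Masking a single event-driven outlier shrinks the tail of the empirical loss distribution, which is the mechanism by which the paper's event filtering changes the VaR estimate.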

  • Online estimation of stochastic volatility for asset returns

    Publication Year: 2012, Page(s): 1 - 7
    Cited by: Papers (2)
    PDF (291 KB)

    An important task for financial institutions is quantifying the risk involved in investing in an asset. There are various measures of risk, such as volatility or value-at-risk. To estimate them from data, a model for the underlying financial time series has to be specified and its parameters have to be estimated. In the following, we propose a framework for the estimation of the stochastic volatility of asset returns based on an adaptive fuzzy rule-based system. The model is based on Takagi-Sugeno fuzzy systems and is built in two phases. In the first phase, the model uses the Subtractive Clustering algorithm to determine group structures in a reduced data set for initialization purposes. In the second phase, the system is modified dynamically via adding and pruning operators, and a recursive learning algorithm automatically determines the number of fuzzy rules necessary at each step, while one-step-ahead predictions are estimated and parameters are updated. The model is applied to forecasting financial time series volatility, considering daily values of the REAL/USD exchange rate. The suggested model is compared against generalized autoregressive conditional heteroskedasticity models. Experimental results show the adequacy of the adaptive fuzzy approach for volatility forecasting purposes.

  • Liquidity risk spillover: Evidence from cross-country analysis

    Publication Year: 2012, Page(s): 1 - 7
    PDF (65 KB)

    We investigate the spillover of market liquidity risk across 50 countries using daily data from 1995 to 2010. By employing the market liquidity risk measures of Pastor and Stambaugh (2003) and Acharya and Pedersen (2005), we estimate market liquidity risk associations among global stock markets. Empirical results show that market liquidity risks across countries are correlated. Our study is important for governments, securities exchange officials, and institutional and individual investors.
