Fighting Money Laundering with Statistics and Machine Learning

Money laundering is a profound global problem. Nonetheless, there is little scientific literature on statistical and machine learning methods for anti-money laundering. In this paper, we focus on anti-money laundering in banks and provide an introduction and review of the literature. We propose a unifying terminology with two central elements: (i) client risk profiling and (ii) suspicious behavior flagging. We find that client risk profiling is characterized by diagnostics, i.e., efforts to find and explain risk factors. On the other hand, suspicious behavior flagging is characterized by non-disclosed features and hand-crafted risk indices. Finally, we discuss directions for future research. One major challenge is the need for more public data sets. This may potentially be addressed by synthetic data generation. Other possible research directions include semi-supervised and deep learning, interpretability, and fairness of the results.


Introduction
Officials from the United Nations Office on Drugs and Crime estimate that money laundering amounts to 2.1-4% of the world economy [1]. The illicit financial flows help criminals avoid prosecution and undermine public trust in financial institutions [2][3][4]. Multiple intergovernmental and private organizations assert that modern statistical and machine learning methods hold great promise to improve anti-money laundering (AML) operations [5][6][7][8][9]. The hope, among other things, is to identify new types of money laundering and allow a better prioritization of AML resources. The scientific literature on statistical and machine learning methods for AML, however, remains relatively small and fragmented [10][11][12].
The international framework for AML is based on recommendations by the Financial Action Task Force (FATF) [13]. Within the framework, any interaction with criminal proceeds practically corresponds to money laundering from a bank perspective (regardless of intent or transaction complexity) [14]. Furthermore, the framework requires that banks: 1. know the identity of, and money laundering risk associated with, clients, and 2. monitor and report suspicious behavior.
Note that we, to reflect FATF's recommendations, are intentionally vague about what constitutes "suspicious" behavior.
To comply with the first requirement, banks ask their clients about identity records and banking habits. This is known as know-your-customer (KYC) information and is used to construct risk profiles. The profiles are, in turn, often used to determine intervals for ongoing due diligence, i.e., checks on KYC information. To comply with the second requirement, banks use electronic AML systems to raise alarms for human inquiry. Bank officers then dismiss or report the alarms to national financial intelligence units (i.e., authorities). The process is illustrated in Figure 1. Traditional AML systems rely on predefined and fixed rules [15,16]. Although the rules are formulated by experts, they are essentially 'if-this-then-that' statements; easy to interpret but inefficient. Indeed, over 98% of all AML alarms can be false positives [17]. Banks are not allowed to disclose information about alarms and generally receive little feedback on filed reports. Furthermore, money launderers may change their behavior in response to AML efforts. For instance, banks in the United States must, by law, report all currency transactions over $10,000 (regardless of whether they constitute money laundering or not) [18]. In response, money launderers may employ smurfing (i.e., split up large transactions). Finally, as money laundering has no direct victims, it can potentially go undetected for longer than other types of financial crime (e.g., credit card or wire fraud).
In this paper, we focus on AML in banks and aim to provide a technical review that researchers and industry practitioners (statisticians and machine learning engineers) can use as a guide to the current literature on statistical and machine learning methods for AML in banks. Furthermore, we aim to provide a terminology that can facilitate policy discussions, and to provide guidance on open challenges within the literature. To achieve our aims, we (i) propose a unified terminology for AML in banks, (ii) review selected exemplary methods, and (iii) present recent machine learning concepts that may improve AML.
The rest of the paper is organized as follows. Section 2 presents our terminology, distinguishing between (i) client risk profiling and (ii) suspicious behavior flagging. Section 3 then reviews the literature on client risk profiling, while Section 4 reviews the literature on suspicious behavior flagging. Note that both Sections 3 and 4 contain subsections that further distinguish between unsupervised and supervised methods. Next, Section 5 discusses future research directions. Finally, Section 6 concludes the paper.

Terminology
Inspired by FATF's recommendations, we argue that banks face two principal data analysis problems in AML: (i) client risk profiling and (ii) suspicious behavior flagging. We use these to structure our terminology and review. A related topic, not discussed here, concerns how authorities treat AML reports (see, for instance, Savage et al. [19], Drezewski et al. [20], Li et al. [21], or Baltoi et al. [22]). We further make a distinction between unsupervised and supervised methods. Unsupervised methods utilize data sets of the form {x_c | c = 1, . . . , n}, where n denotes some number of clients. Supervised methods, by contrast, utilize data sets {(x_c, y_c) | c = 1, . . . , n}, where some labels (e.g., risk scores) y_c are given.

Client Risk Profiling
Client risk profiling is used to assign general risk scores to clients. Let x_c ∈ R^d be a vector of features specific to client c and P be a generic set. A client risk profiling is a mapping ρ : R^d → P where ρ(x_c) captures the money laundering risk associated with client c. For example, we may have P = {L, M, H}, where L symbolizes low risk, M symbolizes medium risk, and H symbolizes high risk. We stress that client risk profiling in our terminology is characterized by working on the client, not transaction, level.

Suspicious Behavior Flagging
Suspicious behavior flagging is used to raise alarms on clients, accounts, or transactions. Consider a setup where client c has a = 1, . . . , A_c accounts. Furthermore, let each account (c, a) have t = 1, . . . , T_(c,a) transactions and let x_(c,a,t) ∈ R^d be some features specific to transaction (c, a, t). An AML system is a function s : R^d → {0, 1} where s(x_(c,a,t)) = 1 indicates that an alarm is raised on transaction (c, a, t). Multiple approaches may be used to construct an AML system. Regardless of approach, we argue that all AML systems are built on one fundamental premise. To cite Bolton and Hand [23]: "... given that it is too expensive to undertake a detailed investigation of all records, one concentrates investigation on those thought most likely to be fraudulent." Thus, a good AML system needs to model the probability F(x_(c,a,t)) = P(y_(c,a,t) = 1 | x_(c,a,t)), where y_(c,a,t) = 1 indicates that transaction (c, a, t) should be reported for money laundering (with y_(c,a,t) = 0 otherwise). We may then raise alarms given some threshold value τ ≥ 0 and an indicator function s(x_(c,a,t)) = 1{F(x_(c,a,t)) ≥ τ}. It can be difficult to determine if a transaction, in itself, is money laundering. As a remedy, the level of analysis may be changed (see Figure 2). We may, for instance, consider account features x_(c,a) ∈ R^d that summarize all activity on account (c, a). Alternatively, we may consider the set of all feature vectors X_(c,a) = {x_(c,a,1), . . . , x_(c,a,T_(c,a))} for transactions t = 1, . . . , T_(c,a) made on account (c, a). Defining y_(c,a) ∈ {0, 1} in analogy to y_(c,a,t), we may then model F(x_(c,a)) = P(y_(c,a) = 1 | x_(c,a)) or F(X_(c,a)) = P(y_(c,a) = 1 | X_(c,a)), i.e., the probability that account (c, a) should be reported for money laundering given x_(c,a) or X_(c,a). Similarly, we could raise alarms directly at the client level, modeling F(x_c) = P(y_c = 1 | x_c) where y_c ∈ {0, 1} indicates (with y_c = 1) that client c should be reported for money laundering.
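To make the flagging rule concrete, here is a minimal sketch of the indicator s(x) = 1{F(x) ≥ τ}; the model outputs below are hypothetical stand-ins for F:

```python
import numpy as np

def flag_alarms(scores, tau):
    """Raise an alarm (1) whenever the modeled probability meets the threshold tau."""
    scores = np.asarray(scores, dtype=float)
    return (scores >= tau).astype(int)

# Hypothetical model outputs F(x) for five transactions.
probs = [0.02, 0.75, 0.40, 0.91, 0.10]
print(flag_alarms(probs, tau=0.5))  # alarms on the 2nd and 4th transactions
```

Lowering τ raises more alarms (higher recall, more false positives); raising it does the opposite.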
Note that suspicious behavior flagging and client risk profiling can overlap at the client level. Indeed, we could use F (x c ) as a risk profile for client c.

Client Risk Profiling
We find that studies on client risk profiling are characterized by diagnostics, i.e., efforts to find and explain risk factors. Specifically, unsupervised methods are used to search for new "risky" observations or risk factors. On the other hand, supervised methods are used with an explanatory focus. We also find that studies employing unsupervised methods generally use relatively large data sets. By contrast, studies employing supervised methods use small (labeled) data sets. This difference is likely associated with the cost of labeling observations. Finally, we note that while all studies use private data sets, most share a fair amount of information about the features that they use. As we shall see later, this contrasts with the literature on suspicious behavior flagging.

Unsupervised Client Risk Profiling
Alexandre and Balsa [24] employ K-means clustering [25] to construct risk profiles. The algorithm seeks a clustering ρ : R^d → {S_1, . . . , S_K} that assigns every client c to a cluster k = 1, . . . , K. This is achieved by solving arg min_ρ Σ_{k=1}^{K} Σ_{c ∈ ρ_k} ||x_c − µ_k||², where µ_k ∈ R^d denotes the mean of cluster k and ρ_k = {c = 1, . . . , n | ρ(x_c) = k} denotes the set of clients assigned to cluster k. The problem is addressed in a greedy optimization fashion; iteratively setting µ_k = (1/|ρ_k|) Σ_{c ∈ ρ_k} x_c and ρ(x_c) = arg min_{k=1,...,K} ||x_c − µ_k||². To evaluate the approach, the authors employ a data set with approximately 2.4 million clients from an undisclosed financial institution. Disclosed features include the average size and number of transactions. The authors implement K = 7 clusters, designating two of them as risky. The first contains clients with many transactions but low transaction values. The second contains clients with older accounts but larger transaction values. Finally, the authors employ decision trees (see Section 3.2) to find classification rules that emulate the clusters. The motivation is, presumably, that bank officers find it easier to work with rules than with K-means. Cao and Do [26] present a similar study, applying clustering with slope [27]. Starting with 8,020 transactions from a Vietnamese bank, the authors first change the level of analysis to individual clients. Features include the sum of in- and outgoing transactions, the number of sending and receiving third parties, and the difference between funds sent and received. The authors then discretize features and build clusters based on cluster histograms' height-to-width ratios. They finally simulate 25 accounts with money laundering behavior, some easily identifiable in the produced clusters. Much may, however, depend on the nature of the simulations.
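The greedy mean-update/assignment loop can be sketched as follows; this is a minimal implementation on toy data, with the initialization scheme and features chosen for illustration only:

```python
import numpy as np

def kmeans(X, K, iters=50, seed=0):
    """Greedy K-means: alternate nearest-centroid assignment and mean updates."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]  # initial centroids
    for _ in range(iters):
        # Assign each client to its nearest centroid (squared Euclidean distance).
        d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned clients.
        for k in range(K):
            if (labels == k).any():
                mu[k] = X[labels == k].mean(axis=0)
    return labels, mu

# Toy client features: two well-separated groups.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [4.9, 5.0]])
labels, mu = kmeans(X, K=2)
print(labels)  # the first two clients share one cluster, the last two the other
```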
Paula et al. [28] use an autoencoder neural network to find outlier Brazilian export firms. Neural networks are directed, acyclic graphs connecting computational units (i.e., neurons) in layers. The output of a feedforward neural network with l = 1, . . . , L layers is given by nn(x_c) = h^(L), where h^(l) = φ^(l)(W^(l) h^(l−1) + b^(l)), h^(0) = x_c, the W^(l) and b^(l) are learnable weights and biases, and φ^(1), . . . , φ^(L) are (non-linear) activation functions. Neural networks are commonly trained with iterative gradient-based optimization. This includes backpropagation [29] coupled with stochastic gradient descent [30] or more recent adaptive schemes like Adam [31]. The aim is to minimize a loss function l(o_c, nn(x_c)) over all observations c = 1, . . . , n, where o_c is a target value or vector. Autoencoders, as employed by the authors, are a special type of neural network that seeks a latent representation of its inputs. To this end, they employ an encoder-decoder (i.e., "hourglass") architecture and try to replicate their inputs in their outputs, i.e., have o_c = x_c. The authors specifically use 5 layers with 18, 6, 3, 6, and 18 neurons. The first two layers (with 18 and 6 neurons) form an encoder. The middle layer with 3 neurons then obtains a latent representation. Finally, the last two layers (with 6 and 18 neurons) form a decoder. The approach is tested on a data set with 819,990 firms. Features include information about debit and credit transactions, export volumes, taxes paid, and previous customs inspections. As a measure of risk, the authors employ the reconstruction error ρ(x_c) = (1/q)||nn(x_c) − x_c||², frequently used for anomaly or novelty detection in this setting (see, for instance, [32]). This way, they identify 20 high-risk firms.
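The reconstruction-error idea can be illustrated with a linear stand-in: a linear autoencoder's optimum spans the top principal directions, so we can sketch the scoring with an SVD rather than a trained network. The data and dimensions below are hypothetical, not those of the study:

```python
import numpy as np

def fit_linear_autoencoder(X, q):
    """Stand-in for a trained encoder-decoder pair: a linear autoencoder's
    optimum spans the top-q principal directions of the centered data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:q].T  # d x q encoder/decoder weights

    def nn(x):
        z = (x - mu) @ V       # encode to q latent dimensions
        return mu + z @ V.T    # decode back to d dimensions
    return nn

def risk_score(nn, x, q):
    """Reconstruction error, used as an outlier score."""
    return np.sum((nn(x) - x) ** 2) / q

rng = np.random.default_rng(1)
# "Normal" firms live near a 1-dimensional subspace; one outlier does not.
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(100, 3))
nn = fit_linear_autoencoder(X, q=1)
outlier = np.array([3.0, -3.0, 3.0])
print(risk_score(nn, X[0], 1), risk_score(nn, outlier, 1))  # outlier scores far higher
```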

Supervised Client Risk Profiling
Colladon and Remondi [33] combine social network analysis and logistic regression. Using 33,670 transactions from an Italian factoring firm, the authors first construct three graphs: G_1, G_2, and G_3. All share the same nodes, representing clients, while edges represent transactions. In G_1, edges are weighted relative to transaction size. In G_2, they are weighted relative to connected clients' business sectors. Finally, in G_3, they are weighted relative to geographic factors. Next, a set of graph metrics are used to construct features for every client. These include in-, out-, and total-degrees, closeness, betweenness, and constraint. A label y_c ∈ {0, 1} is also collected for 288 clients, denoting (with y_c = 1) if the client can be connected to a money laundering trial. The authors then employ a logistic regression model P(y_c = 1 | x_c) = (1 + exp(−x_c^T β))^{−1}, where β ∈ R^d denotes the learnable coefficients. The approach achieves impressive performance. Results indicate that in-degrees over G_3 and total-degrees over G_1 are associated with higher risk. By contrast, constraint over G_2 and closeness over G_1 are associated with lower risk. Rambharat and Tschirhart [34] use panel data from a financial institution in the United States. The data tracks risk profiles y_cp ∈ {1, 2, 3, 4}, assigned to c = 1, . . . , 494 clients over p = 1, . . . , 13 periods. Specifically, y_cp represents low-, medium-, and two types of high-risk profiles. Period-specific features x_cp ∈ R^d include information about clients' business departments, four non-specified "law enforcement actions", and dummy (one-hot encoded) variables that capture the time dimension. To model the data, the authors use an ordinal random effects model where errors and fixed effects are assumed to follow Gaussian distributions.
If we let Φ(·) denote the standard Gaussian cumulative distribution function, the model can be expressed as P(y_cp ≤ m | x_cp, α_c) = Φ(θ_m − x_cp^T β − α_c), where α_c denotes a random client effect, β ∈ R^q denotes coefficients, and θ_m represents a cut-off value transforming a continuous latent variable y*_cp into y_cp. Specifically, we have y_cp = m if and only if θ_{m−1} < y*_cp ≤ θ_m. The level of confidentiality makes it hard to generalize results from the study. The study does, however, illustrate that banks can benefit from a granular risk rating of high-risk clients.
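A minimal sketch of the category probabilities implied by the ordinal model above; the linear predictor and cut-off values below are hypothetical:

```python
import math

def Phi(z):
    """Standard Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordinal_probs(eta, thetas):
    """P(y = m) = Phi(theta_m - eta) - Phi(theta_{m-1} - eta), with
    theta_0 = -inf and theta_M = +inf; eta stands for x' beta + alpha_c."""
    cuts = [-math.inf] + list(thetas) + [math.inf]
    return [Phi(cuts[m + 1] - eta) - Phi(cuts[m] - eta) for m in range(len(cuts) - 1)]

# Hypothetical linear predictor and cut-offs for four risk categories.
p = ordinal_probs(eta=0.8, thetas=[-1.0, 0.0, 1.5])
print([round(v, 3) for v in p], sum(p))  # four probabilities summing to one
```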
Martínez-Sánchez et al. [35] use decision trees to model clients of a Mexican financial institution. Decision trees [36] are flowchart-like models where internal nodes split the feature space into mutually exclusive subregions. Final nodes, called leaves, label observations using a voting system. The authors use data on 181 clients, all labeled as either high-risk or low-risk. Features include information about seniority, residence, and economic activity. Notably, no train-test split is used. This makes the focus on diagnostics apparent. The authors find that clients with more seniority are comparatively riskier.
Badal-Valero et al. [37] combine Benford's Law and four machine learning models. Benford's Law [38] gives an empirical distribution of leading digits. The authors use it to extract features from financial statements. Specifically, they consider statements from 335 suppliers to a company on trial for money laundering. Of these, 23 suppliers have been investigated and labeled as colluders. All other (non-investigated) suppliers are treated as benevolent. The motivating idea is that any colluders, hiding in the non-investigated group, should be misclassified by the employed models. These include a logistic regression, feedforward neural network, decision tree, and random forest. Random forests [39], in particular, combine multiple decision trees. Every tree uses a random subset of features in every node split. To address class imbalance, i.e., the unequal distribution of labels, the authors investigate weighting and synthetic minority oversampling [40]. The former weighs observations during training, giving higher importance to data from the minority class. The latter balances the data before training, generating synthetic observations of the minority class. According to the authors, synthetic minority oversampling works the best. However, the conclusion is apparently based on simulated evaluation data.
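Benford's Law itself is easy to compute; the sketch below gives the expected leading-digit distribution together with an observed-frequency helper (the study's actual feature construction is more involved):

```python
import math
from collections import Counter

def benford_expected():
    """Benford's Law: P(leading digit = d) = log10(1 + 1/d)."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit(amount):
    """First significant digit of a positive amount, via scientific notation."""
    return int(f"{amount:e}"[0])

def digit_freqs(amounts):
    """Observed leading-digit frequencies, comparable to benford_expected()."""
    counts = Counter(leading_digit(a) for a in amounts)
    n = len(amounts)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

expected = benford_expected()
print(round(expected[1], 3), round(expected[9], 3))  # ~0.301 and ~0.046
```

Large deviations between observed and expected frequencies can then feed into features such as those used on the suppliers' financial statements.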
González and Velásquez [41] employ a decision tree, feedforward neural network, and Bayesian network to model Chilean firms using false invoices. Bayesian networks [42], in particular, are probabilistic models that represent variable dependencies via directed acyclic graphs. The authors use data on 582,161 firms, 1,692 of which have been labeled as either fraudulent or non-fraudulent. Features include information about previous audits and taxes paid. Because most firms are unlabeled, the authors first use unsupervised learning to characterize high-risk behavior. To this end, they employ self-organizing maps [43] and neural gas [44]. Both are neural network techniques that build on competitive learning [45] rather than error correction (i.e., gradient-based optimization). While the methods do produce clusters with some behavioral patterns, they do not appear useful for false invoice detection. On the labeled training data, the feedforward neural network achieves the best performance.

Suspicious Behavior Flagging
We find that the literature on suspicious behavior flagging is characterized by a large proportion of short and suggestive papers. This includes applications of fuzzy logic [46], autoregression [47], and sequence matching [48]. Very few studies apply outlier or anomaly detection techniques [12]. In contrast to work by Canhoto [49], our review demonstrates that there is ample scope to employ both unsupervised and supervised methods for suspicious behavior flagging. Studies using unsupervised methods, however, often contain little performance evaluation. By contrast, studies that use supervised methods naturally use (a part of) their labeled data for evaluation. In line with thoughts by Breiman [50] (on fraud detection), there is some evidence that supervised methods might perform better than unsupervised methods; see the last part of Section 4.2. However, different types of employed data and the small size of the literature make it difficult to draw a conclusion. Furthermore, non-disclosed features and hand-crafted risk indices generally make it difficult to compare studies.

Unsupervised Suspicious Behavior Flagging
Larik and Haider [51] flag transactions with a combination of principal component analysis and K-means. Given data on approximately 8.2 million transactions, the authors first seek to cluster clients. To this end, principal component analysis [52] is applied to client features x_c ∈ R^d, c = 1, . . . , n. The method seeks lower-dimensional, linear transformations z_c ∈ R^q, q < d, that preserve the greatest amount of variance. Let S denote the data covariance matrix. The first coordinate of z_c, called the first principal component, is then given by u_1^T x_c, where the principal direction u_1 ∈ R^d is determined by u_1 = arg max_{u ∈ R^d : u^T u = 1} u^T S u. By analogy, the j'th principal component is given by u_j^T x_c, where u_j ∈ R^d maximizes u_j^T S u_j subject to u_j^T u_j = 1 and orthogonality with the previous principal directions u_h, h = 1, . . . , j − 1. Principal directions are commonly obtained as the eigenvectors of S corresponding to maximal eigenvalues. Next, the authors use a modified version of K-means to cluster z_c, c = 1, . . . , n. The modification introduces a parameter to control the maximum distance between an observation and the mean of its assigned cluster. A hand-crafted risk index is then used to score and flag incoming transactions. The index compares the sizes and frequencies of transactions within assigned client clusters. As no labels are available, evaluation is limited.
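A minimal sketch of the principal component step, obtaining the directions from an eigendecomposition of the covariance matrix (toy data; the authors' actual features are not disclosed):

```python
import numpy as np

def pca_transform(X, q):
    """Project centered data onto the top-q eigenvectors of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(S)               # eigenvalues in ascending order
    U = vecs[:, np.argsort(vals)[::-1][:q]]      # top-q principal directions
    return Xc @ U

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:, 0] *= 10.0   # most variance lies along the first feature
Z = pca_transform(X, q=2)
print(Z.shape)  # (200, 2)
```

The rows of Z are the z_c that the modified K-means step would then cluster.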
Rocha-Salazar et al. [53] mix fuzzy logic, clustering, and principal component analysis to raise alarms. With fuzzy logic [54], experts first assign risk scores to feature values. These include information about client age, nationality, and transaction statistics. Next, strict competitive learning, fuzzy C-means, self-organizing maps, and neural gas are used to build client clusters. The authors find that fuzzy C-means [55], in particular, produces the best clusters. This algorithm is similar to K-means but uses scores to express degrees of cluster membership rather than hard assignments. The authors further identify one high-risk cluster. Transactions in this cluster are then scored with a hand-crafted risk index. This builds on principal component analysis, weighing features relative to their variances. Data from a Mexican financial institution is used to evaluate the approach. Training is done with 26,751 private and 3,572 business transactions; testing with 1,000 private and 600 business transactions. The approach shows good results on balanced accuracy (i.e., the average of the true positive and true negative rates).
Raza and Haider [56] propose a combination of clustering and dynamic Bayesian networks. First, client features x_c are clustered with fuzzy C-means. For each cluster, a q-step dynamic Bayesian network [57] is then trained on transaction sequences X_(c,a) = {x_(c,a,1), . . . , x_(c,a,T_(c,a))}. Transaction features x_(c,a,t) include information about amount, period, and type. At test time, incoming transactions (along with the previous q = 1, 2 transactions) are passed through the network. A hand-crafted risk index, building on outputted posterior probabilities, is then calculated. The approach is implemented on a data set with approximately 8.2 million transactions (presumably the same data used by Larik and Haider [51]). However, as no labels are available, evaluation is limited.
Camino et al. [58] flag clients with three outlier detection techniques: an isolation forest, a one-class support vector machine, and a Gaussian mixture model. Isolation forests [59] build multiple decision trees using random feature splits. Observations isolated by comparatively few feature splits (averaged over all trees) are then considered outliers. One-class support vector machines [60] use a kernel function to map data into a reproducing kernel Hilbert space. The method then seeks a maximum margin hyperplane that separates data points from the origin. A small number of observations are allowed to violate the hyperplane; these are considered outliers. Finally, Gaussian mixture models [61] assume that all observations are generated by a mixture of Gaussian distributions. Observations in low-density regions are then considered outliers. The authors combine all three techniques into a single ensemble method. The method is tested on a data set from an AML software company. This contains one million transactions with client-level features recording summary statistics. The authors report positive feedback from the data-supplying company; otherwise, evaluation is limited.
Sun et al. [62] apply extreme value theory [63] to flag outliers in transaction streams. The authors start by engineering two features. The first records the number of times an account has reached a balanced state, i.e., when money transferred into an account is transferred out again. The second records the number of effective fan-ins associated with an account, i.e., when money transferred into the account surpasses a given limit and the account again reaches a balanced state. Next, the Pickands-Balkema-De Haan theorem [64,65] is invoked to model (derived) conditional feature exceedances according to a generalized Pareto distribution. The approach allows the authors to flag transactions according to a probabilistic limit p (in analogy to the p-values used to test null hypotheses). The approach is tested on real bank data with simulated noise and outliers.
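A rough sketch of the exceedance-modeling step, using method-of-moments estimates for the generalized Pareto distribution; the authors' exact estimation procedure may differ, and the data below is simulated:

```python
import numpy as np

def gpd_fit_moments(exceedances):
    """Method-of-moments estimates for the generalized Pareto distribution
    fitted to threshold exceedances (valid for shape xi < 1/2)."""
    m, v = exceedances.mean(), exceedances.var()
    xi = 0.5 * (1.0 - m * m / v)
    sigma = m * (1.0 - xi)
    return xi, sigma

def gpd_tail_prob(x, xi, sigma):
    """P(exceedance > x) under the fitted GPD (x inside the support)."""
    if abs(xi) < 1e-12:
        return float(np.exp(-x / sigma))
    return float((1.0 + xi * x / sigma) ** (-1.0 / xi))

rng = np.random.default_rng(0)
exc = rng.exponential(scale=2.0, size=5000)  # exponential = GPD with xi = 0
xi, sigma = gpd_fit_moments(exc)
p = gpd_tail_prob(10.0, xi, sigma)
print(round(xi, 2), round(sigma, 2), p)  # xi near 0, sigma near 2
```

An extreme feature value would be flagged whenever its fitted tail probability falls below the chosen limit p.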

Supervised Suspicious Behavior Flagging
Deng et al. [66] combine logistic regression, stochastic approximation, and sequential D-optimal design for active learning. The question is how we should sequentially select new observations for inquiry (revealing y_(c,a)) and use them in the estimation of F(x_(c,a)) = P(y_(c,a) = 1 | x_(c,a)).
The authors employ a data set with 92 inquired accounts and two highly engineered features. The first feature x^(1)_(c,a) ∈ R captures the velocity and size of transactions; the second x^(2)_(c,a) ∈ R captures peer comparisons. Assuming that F(·) is an increasing function in both features, the authors further define a synthetic variable z_(c,a) = ω x^(1)_(c,a) + (1 − ω) x^(2)_(c,a) for a weight ω ∈ [0, 1]. Finally, z_(c,a) is subject to a univariate logistic regression on y_(c,a). This allows a combination of stochastic approximation [67] and sequential D-optimal design [68] for new observation selection. The approach significantly outperforms random selection. Furthermore, simulations show that it is robust to underlying data distributions.
Borrajo et al. [69] argue that AML models may benefit from other types of information than simple transaction statistics. To this end, the authors consider behavior traces. These, among other things, contain information about account creation and company ownership. Using custom distance functions, the authors apply K-nearest neighbors [70,71] to flag illicit behavior. The method predicts that a new observation belongs to the same class as the majority of its k nearest neighbors. While the authors report excellent results, these are, notably, obtained on simulated data.
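A minimal K-nearest-neighbors sketch with plain Euclidean distance (the authors use custom distance functions over behavior traces, which we do not reproduce here):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Predict the majority label among the k nearest (Euclidean) neighbors."""
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy behavior features with labels: 0 = licit, 1 = illicit.
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 0, 1, 1])
print(knn_predict(X, y, np.array([4.8, 5.2]), k=3))  # 1
```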
Zhang and Trubey [72] employ six machine learning models to predict the outcome of AML alarm inquiries. We note that the setup can be used both to qualify existing alarms and to raise new alarms under appropriate assumptions. Indeed, let s_c ∈ {0, 1} indicate (with s_c = 1) that client c is flagged by a traditional AML system. Assuming that s_c and y_c are conditionally independent given x_c, we have that P(y_c = 1 | x_c, s_c = 1) = P(y_c = 1 | x_c). If we also assume that P(s_c = 1 | x_c) > 0 for all x_c ∈ {x ∈ R^d : P(y_c = 1 | x) > 0}, we can use a model, only trained on previously flagged clients, to raise new alarms. The authors use a data set with 6,113 alarms from a financial institution in the United States. Of these, 34 alarms were reported to authorities. The data set contains ten non-disclosed features. In order to address class imbalance, the authors investigate random over- and undersampling. Both techniques, in particular, increase the performance of a support vector machine [73]. This model seeks to maximize the margin between feature observations of the two classes and a class-separating hyperplane (possibly in transformed space). However, a feedforward neural network, robust to both sampling techniques, shows the best performance. Jullum et al. [74] use gradient boosted trees to model AML alarms. The approach additively combines f_1, . . . , f_K regression trees (i.e., decision trees with continuous outputs) and is implemented with XGBoost [75]. Data comes from a Norwegian bank. Tertychnyi et al. [76] propose a two-layer approach to flag suspicious clients. In the first layer, a logistic regression is used to filter out clients with transaction patterns that are clearly non-illicit. In the second layer, the remaining clients are subject to gradient boosted trees implemented with CatBoost [77]. The authors employ a data set from an undisclosed bank. This contains approximately 330,000 clients from three countries. About 0.004% of the clients have been reported for money laundering. The remaining clients are randomly sampled. Client-level features include demographic data and transaction statistics. Model performance varies significantly over the three countries in the authors' data set. However, the performance decreases when each country is modeled separately.
Eddin et al. [78] investigate how aggregated transaction statistics and different graph features can be used to flag suspicious bank client behavior. To this end, the authors consider a random forest model, generalized linear model [79], and gradient boosted trees with LightGBM [80]. The authors utilize a large data set from a non-disclosed bank. This contains 500,000 flagged transactions distributed over 400,000 accounts (3% of which are deemed truly suspicious and labeled as positives). To construct graph features on the data, the authors treat accounts as nodes and transactions as directed edges. Results indicate that the inclusion of GuiltyWalker features [81], using random walks to capture the distances between a given node and illicit nodes, increases model performance.
Charitou et al. [82] combine a sparse autoencoder and a generative adversarial network to flag money laundering in online gambling. The sparse autoencoder is first used to obtain higher-dimensional latent feature encodings. The goal is to increase the distance between positive (i.e., illicit) and negative (i.e., licit) observations. The latent encodings are then used to train a generative adversarial network [83]. This is composed of two competing networks. A generative network produces synthetic observations from Gaussian noise. A discriminative network tries to separate these from real observations and determine the class of the observations. The approach is tested on multiple data sets. In an AML context, the most relevant of these pertains to money laundering in online gambling. This data set contains 4,700 observations (1,200 of which were flagged for potential money laundering). Weber et al. [84] use graph convolutional neural networks to flag suspicious bitcoin transactions. An open data set is provided by Elliptic, a private cryptocurrency analytics company. The data set contains a transaction graph G = (V, E) with |V| = 203,769 nodes and |E| = 234,355 edges. Nodes represent bitcoin transactions, while edges represent directed payment flows. Using a heuristic approach, 21% of the nodes are labeled as licit; 2% as illicit. For all nodes, 166 features are recorded. Of these, 94 record local information, while the remaining 72 record one-hop information. Graph convolutional neural networks [85] are neural networks designed to work on graph data. Let Â denote the normalized adjacency matrix of graph G. The output of the network's l'th layer is obtained by H^(l) = φ^(l)(Â H^(l−1) W^(l)), where W^(l) ∈ R^(h^(l−1) × h^(l)) is a weight matrix, H^(l−1) ∈ R^(|V| × h^(l−1)) is the output from layer l − 1 (initiated with feature values), and φ^(l) is an activation function. While the best performance is achieved by a random forest model, the graph convolutional neural network proved competitive.
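The layer update can be sketched directly; the toy graph and random weights below are illustrative only, and a real model stacks several such layers:

```python
import numpy as np

def normalized_adjacency(A):
    """A-hat = D^{-1/2} (A + I) D^{-1/2}, the usual GCN normalization
    with self-loops added."""
    A_tilde = A + np.eye(len(A))
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_hat, H, W):
    """One layer: H^{(l)} = ReLU(A-hat H^{(l-1)} W^{(l)})."""
    return np.maximum(A_hat @ H @ W, 0.0)

# Toy transaction graph with 3 nodes and 2 features per node.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H0 = np.array([[1., 0.], [0., 1.], [1., 1.]])
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))
H1 = gcn_layer(normalized_adjacency(A), H0, W1)
print(H1.shape)  # (3, 4)
```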
Utilizing a time dimension in the data, the authors also fit a temporal graph convolutional neural network [86]. This outperforms the simple graph convolutional neural network. However, it still falls short of the random forest model. We finally highlight three recent studies that use the Elliptic data set [84]. Alarab et al. [87] propose a neural network structure where graph convolutional embeddings are concatenated with linear embeddings of the original features. This increases model performance significantly. Vassallo et al. [88] investigate the use of gradient boosting on the Elliptic data. Results, in particular, indicate that gradient boosted trees outperform random forests. Furthermore, the authors propose an adapted version of XGBoost to reduce the impact of concept drift. Lorenz et al. [89] experiment with unsupervised anomaly detection. The authors try seven different techniques: local outlier factor [90], K-nearest neighbors [70,71], principal component analysis [52], one-class support vector machine [60], cluster-based outlier factor [91], angle-based outlier detection [92], and isolation forest [59]. For evaluation, the F1-score is used, recording the harmonic mean between precision and recall (i.e., the true positive rate). Strikingly, all seven unsupervised methods perform substantially worse than a supervised random forest benchmark. As noted by the authors, this contradicts previous literature on unsupervised behavior flagging (see, for example, [58]). One possible explanation is that the Elliptic data, constructed over bitcoin transactions, is qualitatively different from bank transaction data. The authors, following Deng et al. [66], further experiment with four active learning strategies combined with a random forest, gradient boosted trees, and a logistic regression model. Two of the active learning strategies build on unsupervised techniques: elliptic envelope [93] and isolation forest [59].
The remaining two build on supervised techniques: uncertainty sampling [94] and expected model change [95]. Results show that the supervised techniques perform the best.

Future Research Directions
Our review reveals that class imbalance and the lack of publicly available data sets are central challenges to AML research. Both may motivate the use of synthetic data. We also note how banks hold vast amounts of high-dimensional and unlabeled data [96]. This may motivate the use of dimension reduction and semi-supervised learning techniques. Other possible research directions include data visualization, deep learning, and interpretable and fair machine learning. In the following, we introduce each of these topics. We also provide brief descriptions of related methods and techniques within each topic.

Class Imbalance, Evaluation Metrics, and Synthetic Data
Due to class imbalance, AML systems tend to label all observations as legitimate. This implies that accuracy is a poor evaluation metric. Instead, we highlight the receiver operating characteristic (ROC) curve [97], plotting true positive versus false positive rates for varying classification thresholds. The area under a ROC curve, called ROC-AUC (or sometimes just AUC), is a measure of separability; it equals 1 for perfect classifiers and 0.5 for uninformative (i.e., random) classifiers. Another possible evaluation tool is the precision-recall (PR) curve [98], plotting precision versus recall (i.e., the true positive rate) for varying classification thresholds. This curve is particularly relevant when class imbalance is severe and true positive rates are of high importance. Notably, both ROC and PR curves consider the relative ranking of predictions for binary outcome models. For multi-class models, Cohen's κ [99] is appealing. This metric evaluates the agreement between two labelings, accounting for agreement by chance. Finally, note that none of the metrics introduced above consider calibration, i.e., whether model outputs reflect true likelihoods.
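Because ROC-AUC depends only on the relative ranking of predictions, it can be computed directly via the Mann-Whitney U statistic: the probability that a randomly chosen positive is scored above a randomly chosen negative. A minimal NumPy sketch with toy, imbalanced data:

```python
import numpy as np

def roc_auc(y_true, scores):
    """ROC-AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive observation outranks a random negative one."""
    y_true = np.asarray(y_true, dtype=bool)
    pos, neg = scores[y_true], scores[~y_true]
    # Compare every positive score with every negative score; ties count 1/2.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Imbalanced toy example: 2 illicit (positive) vs. 6 licit observations.
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.15, 0.3, 0.25, 0.4, 0.8, 0.35])
print(round(float(roc_auc(y, scores)), 3))  # 0.917
```

Note that the class ratio does not enter the statistic itself, which is exactly why ROC-AUC remains meaningful under imbalance, whereas accuracy does not.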
To combat class imbalance, data augmentation can be used. Simple approaches include under- and over-sampling (see, for instance, [100]). Synthetic minority oversampling (SMOTE) by Chawla et al. [40] is another option for vector data. The technique generates convex combinations of minority class observations. Extensions include borderline-SMOTE [101] and borderline-SMOTE-SVM [102]. These generate observations along estimated decision boundaries. Another SMOTE variant, ADASYN [103], generates observations according to data densities. For time series data (e.g., transaction sequences), there is relatively little literature on data augmentation [104]. Some basic transformations are: 1. window cropping, where random time series slices are extracted, 2. window warping, where time series slices are compressed (i.e., down-sampled) or extended (i.e., up-sampled), 3. flipping, where the signs of time series are flipped (i.e., multiplied by −1), and 4. noise injection, where (typically Gaussian) noise is added to time series.
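The convex-combination idea behind SMOTE can be sketched in a few lines. The example below is a simplified illustration (not the reference implementation), assuming NumPy and a toy two-dimensional minority class:

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Minimal SMOTE-style sketch: each synthetic point is a convex
    combination of a minority observation and one of its k nearest
    minority-class neighbors."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # Distances from point i to all other minority points.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class with four observations in two dimensions.
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote(X_minority, n_new=5)
print(X_new.shape)  # (5, 2)
```

By construction, every synthetic observation lies on a line segment between two existing minority observations, so the augmented data never leaves the minority class's convex hull.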
A few advanced methods also bear mentioning. Teng et al. [105] propose a wavelet transformation to preserve low-frequency time series patterns while noise is added to high-frequency patterns. Iwana and Uchida [106] utilize the element alignment properties of dynamic time warping to mix patterns; features of sample patterns are warped to match the time steps of reference patterns. Finally, some approaches combine multiple transformations. Cubuk et al. [107] propose to combine transformations at random. Fons et al. [108] propose two adaptive schemes; the first weights transformed observations relative to a model's loss, the second selects a subset of transformations based on rankings of prediction losses.
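The four basic transformations listed earlier can be sketched directly; the toy sine-wave series and all parameter choices below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 100))  # toy transaction-like series

# 1. Window cropping: extract a random contiguous slice.
start = rng.integers(0, len(x) - 50)
cropped = x[start:start + 50]

# 2. Window warping: compress (down-sample) by keeping every 2nd point;
#    up-sampling via interpolation would extend a slice instead.
warped = x[::2]

# 3. Flipping: multiply the series by -1.
flipped = -x

# 4. Noise injection: add Gaussian noise.
noisy = x + rng.normal(scale=0.05, size=len(x))

print(len(cropped), len(warped))  # 50 50
```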
Simulating known or hypothesized money laundering patterns from scratch may be the only option for researchers with no available data. Used together with private data sets, the approach may also ensure some reproducibility and generalizability. We refer to the work by Lopez-Rojas and Axelsson [109] for an in-depth discussion of simulated data for AML research. The authors develop a simulator, PaySim, for mobile phone transfers. The simulator is, in particular, employed by [110], which proposes a generalized version of isolation forests to flag suspicious transactions. Weber et al. [111] and Suzumura and Kanezashi [112] further augment PaySim, tailoring it to a more classic bank setting.
We have found only one public data set within the AML literature: the Elliptic data set [84]. This contains a graph over bitcoin transactions. We do, however, note that graph-based approaches may be difficult to implement in a bank setting. Indeed, any bank only knows about transactions going to or from its own clients. Instead, graph approaches may be more relevant for authorities' treatment of AML reports; see work by Savage et al. [19], Drezewski et al. [20], Li et al. [21], and Baltoi et al. [22].

Visualization, Dimension Reduction, and Semi-supervised Learning
Visualization techniques may help identify money laundering [113]. One option is t-distributed stochastic neighbor embedding (t-SNE) [114] and its parametric counterpart [115]. The approach is often used for 2- or 3-dimensional embeddings, aiming to keep similar observations close and dissimilar observations distant. First, a probability distribution over pairs of observations is created in the original feature space. Here, similar observations are given higher probability; dissimilar observations are given lower probability. Next, we seek projections that minimize the Kullback-Leibler divergence [116] to a distribution in a lower-dimensional space. Another option is ISOMAP [117]. This extends multidimensional scaling [118], using the shortest path between observations to capture intrinsic similarity.
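The two distributions and the objective that t-SNE minimizes can be sketched as follows. For brevity, this simplified sketch uses a fixed Gaussian bandwidth rather than the per-point, perplexity-tuned bandwidths of the actual algorithm, and it only evaluates the objective for a candidate embedding rather than optimizing it:

```python
import numpy as np

def pairwise_sq_dists(X):
    s = (X ** 2).sum(axis=1)
    return s[:, None] + s[None, :] - 2 * X @ X.T

def p_matrix(X, sigma=1.0):
    """Gaussian pair similarities in the original feature space
    (fixed bandwidth for simplicity)."""
    D = pairwise_sq_dists(X)
    P = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

def q_matrix(Y):
    """Student-t pair similarities in the low-dimensional embedding space."""
    Q = 1.0 / (1.0 + pairwise_sq_dists(Y))
    np.fill_diagonal(Q, 0.0)
    return Q / Q.sum()

def kl_divergence(P, Q, eps=1e-12):
    """The objective t-SNE minimizes with respect to the embedding Y."""
    mask = P > 0
    return np.sum(P[mask] * np.log((P[mask] + eps) / (Q[mask] + eps)))

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))   # original features
Y = rng.normal(size=(10, 2))   # candidate 2-dimensional embedding
P, Q = p_matrix(X), q_matrix(Y)
print(kl_divergence(P, Q) >= 0)  # True
```

The actual algorithm updates Y by gradient descent on this divergence, pulling similar observations together and pushing dissimilar ones apart.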
Autoencoders, as discussed in Section 3.1, can be used for dimension reduction, synthetic data generation, and semi-supervised learning. The latter is relevant when we have data sets with many unlabeled (but also some labeled) observations. Indeed, we may train an autoencoder on all the observations. Lower layers can then be reused in a network trained to classify the labeled observations. A seminal type of autoencoder was proposed by Kingma and Welling [119]: the variational autoencoder. This is a probabilistic, generative model that seeks to minimize a loss function with two parts. The first part employs the normal reconstruction error. The second part employs the Kullback-Leibler divergence to push latent feature representations toward a Gaussian distribution. An extension, the conditional variational autoencoder [120], takes class labels into account, modeling a conditional latent variable distribution. This allows us to generate class-specific observations. Generative adversarial networks [83] are another option. Here, two neural networks compete against each other; a generative network produces synthetic observations, while a discriminative network tries to separate these from real observations. In analogy with conditional variational autoencoders, conditional generative adversarial nets [121] take class labels into account. Specifically, class labels are fed as inputs to both the discriminator and generator. This may, again, be used to generate class-specific observations. While most generative adversarial network methods have been designed to work with visual data, methods applicable to time series data have recently been proposed [122,123].
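The two-part variational autoencoder loss can be sketched directly: a reconstruction term plus the closed-form KL divergence from a diagonal-Gaussian approximate posterior N(μ, exp(log σ²)) to a standard Gaussian prior. Squared-error reconstruction and the toy numbers below are illustrative choices:

```python
import numpy as np

def vae_loss(x, x_hat, mu, logvar):
    """Variational autoencoder loss sketch: reconstruction error plus the
    KL divergence from N(mu, exp(logvar)) to a standard Gaussian prior,
    which has the closed form -0.5 * sum(1 + logvar - mu^2 - exp(logvar))."""
    reconstruction = np.sum((x - x_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu ** 2 - np.exp(logvar))
    return reconstruction + kl

x = np.array([0.5, -1.0, 0.2])      # original observation
x_hat = np.array([0.4, -0.9, 0.1])  # decoder reconstruction
mu = np.array([0.0, 0.1])           # encoder mean (2-d latent space)
logvar = np.array([0.0, -0.1])      # encoder log-variance
print(vae_loss(x, x_hat, mu, logvar) > 0)  # True
```

The KL term vanishes exactly when μ = 0 and log σ² = 0, i.e., when the posterior matches the prior; otherwise it penalizes latent representations that drift away from the Gaussian target.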

Neural Networks, Deep Learning, and Transfer Learning
The neural networks used in current AML research are generally small and shallow. Deep neural networks, by contrast, employ multiple layers. The motivating idea is to derive higher-level features directly from data. This has, in particular, proved successful for computer vision [124,125], natural language processing [126,127], and high-frequency financial time-series analysis [128][129][130]. Some authors have also proposed to use the approach to check KYC image information (e.g., driver's licenses) [131,132] or ease alarm inquiries with sentiment analysis [133].
State-of-the-art deep neural networks use multiple methods to combat unstable gradients. This includes rectified [134,135] and exponential [136,137] linear units. Weight initialization is done with Xavier [138], He [139], or LeCun [140] initialization. Batch normalization [141] is used to standardize, re-scale, and shift inputs. For recurrent neural networks (introduced below), gradient clipping [142] and layer normalization [143] are often used. Finally, residual or skip connections [144,145] feed intermediate outputs multiple levels up a network hierarchy.
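Two of these building blocks, the exponential linear unit and He initialization, are simple to sketch; the dimensions below are illustrative:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: zero for negative inputs."""
    return np.maximum(z, 0.0)

def elu(z, alpha=1.0):
    """Exponential linear unit: smooth and negative-valued for z < 0,
    which keeps mean activations closer to zero."""
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def he_init(fan_in, fan_out, seed=0):
    """He initialization: weights drawn with variance 2 / fan_in,
    matched to ReLU-family activations to keep gradient scales stable."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

W = he_init(128, 64)
print(W.shape, round(float(elu(-1.0)), 3))  # (128, 64) -0.632
```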
State-of-the-art networks also use regularization techniques to combat overfitting. Dropout [146,147] temporarily removes neurons during training, forcing non-dropped neurons to capture more robust relationships. Norm-based regularization [148] limits network weights by adding penalty terms to a model's loss function. Finally, max-norm regularization [149] restricts network weights directly during training.
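Dropout is simple to sketch. The variant below is "inverted" dropout, a common implementation choice (not necessarily that of [146,147]): surviving activations are scaled by 1/(1 − rate) during training so that expected activations are unchanged, and the layer is the identity at inference time.

```python
import numpy as np

def dropout(h, rate=0.5, training=True, seed=0):
    """Inverted dropout sketch: randomly zero neurons during training and
    scale the survivors so expected activations stay constant."""
    if not training:
        return h  # identity at inference time
    rng = np.random.default_rng(seed)
    mask = rng.random(h.shape) >= rate
    return h * mask / (1.0 - rate)

h = np.ones((4, 8))
print(dropout(h).mean())                  # roughly 1 on average over seeds
print(dropout(h, training=False).mean())  # 1.0
```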
Multiple deep learning methods have been proposed for transfer learning. We refer to the work by Weiss et al. [150] for an extensive review. The general idea is to utilize knowledge across different domains or tasks. One common approach starts by training a neural network on some source problem. Weights (usually from lower layers) are subsequently transferred to a new neural network that is fine-tuned (i.e., re-trained) on another target problem. This may work well when the first neural network learns to extract features that are relevant to both the source and target problem [151]. A sub-category of transfer learning, domain adaptation explicitly tries to alleviate distributional differences across domains. To this end, both unsupervised and supervised methods may be employed (depending on whether or not labeled target data is available). For example, Ganin and Lempitsky [152] propose an unsupervised technique that employs a gradient reversal layer and backpropagation to learn shift invariant features. Tzeng et al. [153] consider a semi-supervised setup where little labeled target data is available. With unlabeled target data, the authors first optimize feature representations to minimize the distance between a source and target distribution. Next, a few labeled target observations are used as reference points to adjust similarity structures among label categories. Finally, we refer to the work by Hedegaard et al. [154] for a discussion and critique of the generic test setup used in the supervised domain adaptation literature and a proposal of a fair evaluation protocol.
Deep neural networks can, like their shallow counterparts, model sequential data. Here, we provide brief descriptions of simple instantiations of such networks. We use the notation introduced in Section 3.1 and only describe single layers. To form deep learning models, one stacks multiple layers; each layer receives as input the output of its predecessor. Parameters (across all layers) are then jointly optimized by an iterative optimization scheme, as described in Section 3.1. Recurrent neural networks are one approach to modeling sequential data. Let x^(t) ∈ R^d denote some layer input at time t = 1, ..., T. We can describe the time t output of a basic recurrent neural network layer with m neurons by

y^(t) = φ(W_x^T x^(t) + W_y^T y^(t−1) + b),

where W_x ∈ R^{d×m} is an input weight matrix, W_y ∈ R^{m×m} is an output weight matrix, b ∈ R^m is a bias vector, and φ(·) is an activation function. Advanced architectures use gates to regulate the flow of information. Long short-term memory (LSTM) cells [155][156][157] are one option. Let ⊙ denote the Hadamard product and σ the standard sigmoid function. At time t, an LSTM layer with m neurons is described by

1. an input gate i^(t) = σ(W_{x,i}^T x^(t) + W_{y,i}^T y^(t−1) + b_i),
2. a forget gate f^(t) = σ(W_{x,f}^T x^(t) + W_{y,f}^T y^(t−1) + b_f),
3. an output gate o^(t) = σ(W_{x,o}^T x^(t) + W_{y,o}^T y^(t−1) + b_o),
4. a main transformation g^(t) = tanh(W_{x,g}^T x^(t) + W_{y,g}^T y^(t−1) + b_g),
5. a long-term state l^(t) = f^(t) ⊙ l^(t−1) + i^(t) ⊙ g^(t), and
6. an output y^(t) = o^(t) ⊙ tanh(l^(t)),

where W_{x,i}, W_{x,f}, W_{x,o}, and W_{x,g} in R^{d×m} denote input weight matrices, W_{y,i}, W_{y,f}, W_{y,o}, and W_{y,g} in R^{m×m} denote output weight matrices, and b_i, b_f, b_o, and b_g in R^m denote biases. Cho et al. [158] propose a simpler architecture based on gated recurrent units (called GRUs). An alternative to recurrent neural networks, the temporal neural bag-of-features architecture has proved successful for financial time series classification [159]. Here, a radial basis function layer with k = 1, ..., K neurons is used,

φ_k(x^(t)) = exp(−||(x^(t) − v_k) ⊙ w_k||_2),

where v_k and w_k in R^d are weights that describe the k'th neuron's center and width, respectively. Next, an accumulation layer is used to find a constant length representation in R^K,

h = (1/T) Σ_{t=1}^T φ(x^(t)), with φ(x^(t)) = (φ_1(x^(t)), ..., φ_K(x^(t)))^T.

Bilinear neural networks may also be used to model time domain information. Let X = [x^(1), ..., x^(T)] be a matrix with columns x^(t) ∈ R^d for t = 1, ..., T. A temporal bilinear layer with m neurons can then be described as

Y = φ(W_1 X W_2 + B),

where W_1 ∈ R^{m×d} and W_2 ∈ R^{T×T} are weight matrices and B ∈ R^{m×T} is a bias matrix. Notably, W_1 models feature interactions at fixed time points while W_2 models feature changes over time.
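A single LSTM time step with the gates described above can be sketched in NumPy. For compactness, the sketch stacks the four transformations into shared weight matrices; the dimensions and random initialization are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, y_prev, l_prev, params):
    """One LSTM time step: input gate i, forget gate f, output gate o,
    main transformation g, long-term state l, and output y."""
    Wx, Wy, b = params  # stacked weights for the four transformations
    z = x @ Wx + y_prev @ Wy + b
    m = z.shape[-1] // 4
    i, f, o = (sigmoid(z[..., k * m:(k + 1) * m]) for k in range(3))
    g = np.tanh(z[..., 3 * m:])        # main transformation
    l = f * l_prev + i * g             # long-term state update
    y = o * np.tanh(l)                 # gated output
    return y, l

rng = np.random.default_rng(0)
d, m = 5, 3                            # input and hidden dimensions
params = (rng.normal(size=(d, 4 * m)),
          rng.normal(size=(m, 4 * m)),
          np.zeros(4 * m))
y, l = lstm_step(rng.normal(size=d), np.zeros(m), np.zeros(m), params)
print(y.shape, l.shape)  # (3,) (3,)
```

The Hadamard products f ⊙ l^(t−1) and i ⊙ g^(t) appear here as elementwise multiplications: the forget gate decides how much long-term state to keep, the input gate how much new information to add.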
Attention mechanisms have recently become state-of-the-art. These allow neural networks to dynamically focus on relevant sequence elements. Bahdanau et al. [160] consider a bidirectional recurrent neural network [161] and propose a mechanism known as additive or concatenative attention. The mechanism assumes an encoder-decoder architecture. During decoding, it computes a context vector by weighing an encoder's hidden states. Weights are obtained by a secondary feedforward neural network (called an alignment model) and normalized by a softmax layer (to obtain attention scores). Notably, the secondary network is trained jointly with the primary network. Luong attention [162] is another popular mechanism, using the dot product between an encoder's and a decoder's hidden states as a similarity measure (the mechanism is also called dot-product attention). Vaswani et al. [163] propose the seminal transformer architecture. Here, an encoder first applies self-attention (i.e., scaled Luong attention). As before, let X = [x^(1), ..., x^(T)] denote our matrix of sequence elements. We can describe a self-attention layer as

Attention(Q, K, V) = softmax(Q K^T / √d_k) V,

where

1. Q ∈ R^{T×d_k}, called the query matrix, is given by Q = X^T W_Q with a weight matrix W_Q ∈ R^{d×d_k},
2. K ∈ R^{T×d_k}, called the key matrix, is given by K = X^T W_K with a weight matrix W_K ∈ R^{d×d_k}, and
3. V ∈ R^{T×d_v}, called the value matrix, is given by V = X^T W_V with a weight matrix W_V ∈ R^{d×d_v}.

Note that the softmax function is applied row-wise. It outputs a T × T matrix. Here, every row t = 1, ..., T measures how much attention we pay to x^(1), ..., x^(T) in relation to x^(t). During decoding, the transformer also applies self-attention. Here, the key and value matrices are taken from the encoder. In addition, the decoder is only allowed to attend to earlier output sequence elements (future elements are masked, i.e., set to −∞ before softmax is applied). Notably, the authors apply multiple parallel instances of self-attention.
The approach, known as multi-head attention, allows attention over many abstract dimensions. Finally, positional encoding, residual connections, layer normalization, and supplementary feedforward layers are used. As a last attention mechanism, we highlight temporal attention augmented bilinear layers [164].
With the notation used to introduce temporal bilinear layers above, we may express a temporal attention augmented bilinear layer as

X̄ = W_1 X, E = X̄ W, a^(i,j) = exp(e^(i,j)) / Σ_{k=1}^T exp(e^(i,k)), Y = φ((λ(X̄ ⊙ A) + (1 − λ)X̄) W_2 + B),

where a^(i,j) and e^(i,j) denote the (i, j)'th element of A ∈ R^{m×T} and E ∈ R^{m×T}, respectively, W ∈ R^{T×T} is a weight matrix with fixed diagonal elements equal to 1/T, and λ ∈ [0, 1] is a scalar allowing soft attention. In particular, E is used to express the relative importance of temporal feature instances (learned through W), while A contains our attention scores.
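A scaled dot-product self-attention layer, as used in the transformer encoder, can be sketched as follows; the sequence length and dimensions are toy values:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_Q, W_K, W_V):
    """Scaled dot-product self-attention over a sequence of T elements.
    X has shape (d, T) with one column per element, matching the
    X = [x^(1), ..., x^(T)] notation used in the text."""
    Q = X.T @ W_Q                      # (T, d_k) query matrix
    K = X.T @ W_K                      # (T, d_k) key matrix
    V = X.T @ W_V                      # (T, d_v) value matrix
    d_k = W_Q.shape[1]
    scores = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (T, T) attention
    return scores @ V                  # (T, d_v) attended values

rng = np.random.default_rng(0)
d, T, d_k, d_v = 4, 6, 3, 2
X = rng.normal(size=(d, T))
out = self_attention(X, rng.normal(size=(d, d_k)),
                     rng.normal(size=(d, d_k)), rng.normal(size=(d, d_v)))
print(out.shape)  # (6, 2)
```

Each row of the (T × T) score matrix sums to one and records how much element t attends to every other element; multi-head attention simply runs several such layers in parallel and concatenates the results.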

Interpretable and Fair Machine Learning
Advanced machine learning models often outperform their simple statistical counterparts. Their behavior can, however, be much harder to understand, interpret, and explain. While some supervisory authorities have shown a fair amount of leeway regarding advanced AML models [165], this is a potential problem. "Fairness" is an ambiguous concept in machine learning with many different and overlapping definitions [166]. The equalized odds definition states that different protected groups (e.g., genders or races) should have equal true and false positive rates. The conditional statistical parity definition takes a set of legitimate discriminative features into account, stating that the likelihood of a positive prediction should be the same across protected groups given the set of legitimate discriminative features. Finally, the counterfactual fairness definition is based on the notion that a prediction is fair if it remains unchanged in a counterfactual world where some features of interest are changed. Approaches to fair machine learning also vary greatly. In an exemplary paper, Louizos et al. [167] consider the use of variational autoencoders to ensure fairness. The authors treat sensitive features as nuisance or noise variables, encouraging separation between these and (informative) latent features by using factorized priors and a maximum mean discrepancy penalty term [168]. In another exemplary paper, Zhang et al. [169] propose the use of adversarial learning. Here, a primary model tries to predict an outcome variable while minimizing an adversarial model's ability to predict protected feature values. Notably, the adversarial model takes as inputs both the primary model's predictions and other relevant features, depending on the fairness definition of interest.
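As an illustration, auditing a classifier against the equalized odds definition amounts to comparing true and false positive rates across protected groups; a toy sketch (with hypothetical labels, predictions, and groups):

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """True and false positive rates per protected group; under equalized
    odds these should be (approximately) equal across groups."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        tpr = y_pred[m & (y_true == 1)].mean()  # true positive rate
        fpr = y_pred[m & (y_true == 0)].mean()  # false positive rate
        rates[g] = (tpr, fpr)
    return rates

# Hypothetical audit data: binary labels, binary predictions, two groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(group_rates(y_true, y_pred, group))
```

In this toy example both groups share the same true positive rate but differ in false positive rate, so the classifier would violate equalized odds.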
Regarding interpretability, we follow Du et al. [170] and distinguish between intrinsic and post-hoc interpretability. Intrinsically interpretable models are, by design, easy to understand. This includes simple decision trees and linear regression. Notably, attention mechanisms also exhibit some intrinsic interpretability; we may investigate attention scores to see what parts of a particular input sequence a neural network focuses on. Other models and architectures work as "black boxes" and require post-hoc interpretability methods. Here, it is useful to distinguish between global and local interpretability. The former is concerned with overarching model behavior; the latter with individual predictions. One possible technique for local interpretability is LIME [171]. Consider a situation where a black box model and a single observation are given. The method first generates a set of perturbed observations (relative to the original observation) and labels them with black box model outputs. An intrinsically interpretable model is then trained on this synthetic data. Finally, the interpretable model is used to explain the original observation's black box prediction. Gradient-based methods [172] use the gradients associated with a particular observation and black box model to capture feature importance. The fundamental idea is that larger gradients (either positive or negative) imply larger feature importance. Individual conditional expectation plots [173] are another option. These illustrate what happens to a particular black box prediction if we vary one feature value of the underlying observation. Similarly, partial dependence plots [174] may be used for global interpretability. Here, we average results from feature variations over all observations in a data set. This may, however, be misleading if input features are highly correlated. In this case, accumulated local effects plots [175] present an attractive alternative.
These rely on conditional feature distributions and employ prediction differences. For counterfactual observation generation, numerous methods have been proposed [176][177][178]. While these generally need to query an underlying model multiple times, efficient methods utilizing invertible neural networks have also been proposed [179]. A related problem concerns the quantitative evaluation of counterfactual examples; see the work by Hvilshøj et al. [180] for an in-depth discussion. Finally, we highlight Shapley additive explanations (SHAP) by Lundberg and Lee [181]. The approach is based on Shapley values [182] with a solid game-theoretical foundation. For a given observation, SHAP values record average marginal feature contributions (to a black box model's output) over all possible feature coalitions. The approach allows both local and global interpretability. Indeed, every observation is given a set of SHAP values (one for each input feature). Summed over the entire data set, the (numerical) SHAP values show accumulated feature importance. Although SHAP values are computationally expensive, polynomial time estimation is possible for tree-based models [183].
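The computation underlying a partial dependence plot is easy to sketch: for each grid value of a feature, overwrite that feature for all observations and average the black box predictions. The black box below is a hypothetical model with a known quadratic effect, chosen purely for illustration:

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """Partial dependence sketch: for each grid value, set the chosen
    feature to that value for every observation and average the black-box
    model's predictions."""
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        pd_values.append(model(X_mod).mean())
    return np.array(pd_values)

# Hypothetical black box: the first feature has a quadratic effect.
model = lambda X: X[:, 0] ** 2 + 0.5 * X[:, 1]
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
grid = np.array([-2.0, 0.0, 2.0])
print(partial_dependence(model, X, feature=0, grid=grid))
```

The resulting curve recovers the quadratic shape of the first feature's effect. Note how the averaging step implicitly assumes the varied feature is independent of the rest, which is exactly why correlated inputs can make these plots misleading.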

Conclusion
Inspired by FATF's recommendations, we propose a terminology for AML in banks structured around two central tasks: (i) client risk profiling and (ii) suspicious behavior flagging. The former assigns general risk scores to clients (e.g., for use in KYC operations) while the latter raises alarms on clients, accounts, or transactions (e.g., for use in transaction monitoring). Our review reveals that the literature on client risk profiling is characterized by diagnostics, i.e., efforts to find and explain risk factors. The literature on suspicious behavior flagging, on the other hand, is characterized by non-disclosed features and hand-crafted risk indices.
In general, we find that the literature on AML in banks is plagued by a number of problems. Two challenges are class imbalance and a lack of public data sets. To address class imbalance, a multitude of different data augmentation methods may be used. Motivated by the sensitivity of bank data, synthetic data generation may be a viable way to address the lack of public data sets. Synthetic public data sets would, in particular, facilitate better evaluation and reproducibility of, as well as comparisons between, new and existing methods. Other directions for future research include methods for dimension reduction, semi-supervised learning, data visualization, deep learning, and interpretable and fair machine learning. Finally, we strongly advise against the use of accuracy as an evaluation metric for AML applications, instead emphasizing ROC or PR curves.