Multi-Objective Predictive Taxi Dispatch via Network Flow Optimization

In this paper, we study a large-scale fleet management problem in a multi-objective setting. We seek a receding horizon taxi dispatch solution that serves as many ride requests as possible while minimizing the cost of relocating vehicles. To obtain the desired solution, we first convert the multi-objective taxi dispatch problem into a network flow problem, which can be solved using the classical minimum cost maximum flow (MCMF) algorithm. We show that a solution obtained using the MCMF algorithm is integer-valued; thus, it does not require any additional rounding procedure that may introduce undesirable numerical errors. Furthermore, we prove the time-greedy property of the proposed solution, which justifies the use of receding horizon optimization. For computational efficiency, we propose a linear programming method to obtain an optimal solution in near real time. The results of our simulation studies using real-world data for the metropolitan area of Seoul, South Korea, indicate that the performance of the proposed predictive method is almost as good as that of an oracle that foresees the future.


Introduction
On-demand ride-hailing services, such as Uber, Lyft, Didi, Grab, and Kakao T, handled millions of ride-hailing orders per day as of 2019. The increasing demand for ride-hailing services has introduced various new technologies and challenges for service providers [1][2][3]. The most challenging problem is to maintain a proper number of ride service suppliers (taxis or drivers) in each service area. If there are not enough vehicles located nearby when a customer requests a ride, then the service provider is unable to serve the request, which leads to a poor user experience and potential revenue loss. Therefore, many ride-hailing platforms focus on developing ways to align the spatial distribution of vehicles (or drivers) with that of ride requests, for example, through dynamic surge pricing [4] and pooling [5], to improve the order fulfillment rate of each platform.
In taxi dispatching, the major concern is accounting for the future demand for ride requests, which is difficult to predict reliably. The demand is often modeled as a random process with an estimated distribution, and a dispatch solution is obtained by modeling the mobility-on-demand system of interest as a queueing network [19,20] or a Markov decision process [21]. However, incorporating data collected in real time into these offline solutions is a nontrivial task. This issue has been alleviated by using (deep) reinforcement learning (RL) [22,23]. RL is intended to capture the interactions among a large number of vehicles in an adaptive manner. However, due to the curse of dimensionality, it is used in practice in conjunction with an approximation technique, which often degrades the performance of this approach in large-scale fleet management [24,25]. RL methods also require a substantial amount of data to learn an efficient dispatch policy by capturing how to utilize various factors in a given transportation system [18,[26][27][28]. Moreover, it is difficult to incorporate constraints into RL methods, particularly when the constraints change over time, although several methods have recently been proposed to address this issue [29][30][31].
On the other hand, a model predictive control (MPC) method can also be used to handle dispatching problems in mobility-on-demand systems. The idea of MPC is to solve an optimization problem at each time step over a fixed time horizon, obtain a sequence of controls, and apply only the first control at the current time step. At the next time step, the same procedure is performed with the time horizon shifted. MPC methods have demonstrated several theoretical and empirical advantages in solving large-scale taxi dispatch problems. In MPC, online observations and constraints can simply be incorporated into the receding-horizon optimization problem at each time step. Several MPC-based dispatch methods have been proposed in the literature. For example, an MPC-based method was applied to the problem of rebalancing a bike-sharing system while ensuring the feasibility of the imposed constraints [32], but the degree of alignment between demand and supply was treated as a constraint rather than as an objective function. A taxi service plan was designed using an MPC method so that monotonic improvement of iterative solutions is guaranteed [33], but only the cruising distance of the vehicles was minimized. MPC methods have also been applied to provide a worst-case performance guarantee [34] or an asymptotic guarantee that the number of unserved ride requests converges to zero [35], and have been combined with long short-term memory (LSTM) neural networks that forecast future demand [36]. Another variant of the MPC approach, which is closely related to our method, was proposed in [37]. It considers a multi-objective framework similar to ours, which attempts to make the ratio between vehicle supply and demand homogeneous over a spatial region with minimal idle driving distance. The authors introduced a stochastic taxi dispatching model and achieved consistency between the driver and passenger distributions.
In contrast, we consider a deterministic taxi dispatching model based on a network flow problem. Our method has additional advantageous theoretical properties: it removes rounding errors and provides a performance guarantee regardless of prediction accuracy.
Finally, we mention several previous works on allocating ride requests to currently idle taxis. For example, the allocation of ride requests can be modeled as a combinatorial optimization problem, with the acceptance of an allocated request modeled by a probability distribution [13]. Moreover, independent and cooperative ride-request allocation algorithms have been proposed [24]. In [38], an RL-based algorithm that can allocate large-scale ride requests in real time was proposed. Departing from the above references, however, we focus on repositioning drivers; the allocation of ride requests is then done automatically by assigning requests to idle drivers in descending order of fare.
In this paper, we propose an efficient, yet very simple method for large-scale taxi dispatch by exploiting the aforementioned advantages of MPC in a multi-objective setting. The proposed taxi dispatch solution aims to serve as many ride requests as possible while minimizing the total reposition cost. We model an urban transportation system of interest as a flow network, which is a useful approach introduced to manage mobility-on-demand platforms [32,38,39]. In particular, we model the transportation system as a grid-world consisting of homogeneous regular hexagonal cells as in [18,27,40]. Each hexagonal cell represents the coverage of a vehicle in a unit time interval.
To develop a tractable solution method, we convert the multi-objective taxi dispatch problem in a receding time window into a minimum cost maximum flow (MCMF) problem, which can be solved using well-known algorithms [41,42]. We also investigate the theoretical properties of the proposed method. We first show that the optimal solution obtained by the proposed method is a vector of integers. Thus, our method provides a physically meaningful solution because the decision variable represents the number of relocating vehicles, and it does not require any rounding, which may introduce undesirable numerical errors. Moreover, we show the time-greedy property that justifies the use of receding horizon optimization. The time-greedy property means that the proposed dispatch solution maximizes the number of served ride requests at least at the initial time step, even under uncertainty about future demand. In other words, the number of ride requests served at the initial time step by the proposed dispatch solution is always greater than or equal to that of any other dispatch solution. The time-greedy property provides a worst-case performance guarantee, and therefore the proposed predictive method limits the undesirable impact of future demand uncertainty. To further improve computational efficiency for large-scale taxi dispatch problems, we also develop a linear programming (LP) method by reformulating the MCMF problem as a linear optimization problem without loss of optimality. In our numerical tests, the proposed LP method is approximately 880 times faster than the classical MCMF algorithm [41,42].
The performance of the proposed taxi dispatch method is demonstrated using real-world data for the metropolitan area of Seoul, South Korea, and is compared with the performance of an oracle, a deep RL method, and two rule-based algorithms. The performance of the proposed method is comparable to that of the oracle even though the oracle foresees the future; specifically, it achieves 97.16% of the oracle's performance. Moreover, our dispatch solution significantly outperforms all the other methods.
We summarize the main contribution of our paper as follows.
• We present a flow network problem equivalent to the multi-objective taxi dispatch problem. The solution of the flow network problem is automatically guaranteed to be integer-valued.
• We propose a model predictive control method to account for potential future ride requests. The MPC method satisfies the time-greedy property, which guarantees that it serves the maximal number of requests at least at the current time step, regardless of the accuracy of the prediction.
• We further convert the flow network problem into an equivalent LP formulation, which has a significantly lower computational cost than the flow network problem.
• Finally, we conduct simulations comparing the performance of the proposed algorithm with that of other algorithms, using real-world data from Seoul. The proposed algorithm performs almost as well as the oracle and outperforms a deep RL-based method and two rule-based baseline algorithms.
The rest of the paper is organized as follows. In Section 2, the taxi dispatch problem is formulated as a multi-objective optimization problem. In Section 3, we convert this dispatch problem into the MCMF problem and introduce the classical MCMF algorithm. We also provide the theoretical properties of the optimal solution obtained using the MCMF algorithm. For computational efficiency, we also propose an equivalent LP formulation in this section. In Section 4, we present the results of our numerical experiments using real-world data and discuss comparisons with other methods.

The Setup
In this paper, we consider a global taxi controller that assigns ride requests to homogeneous taxis and relocates vehicles to other areas if necessary. The goal of this controller is to maximize the gross merchandise volume (GMV), the sum of the fares of the served requests, while minimizing the repositioning costs. The controller can choose to reposition a vehicle from a low-demand area to a high-demand area to increase GMV, but such repositioning means that a vehicle consumes fuel without carrying a passenger, which incurs a cost. Therefore, we model the objective of the global controller as maximizing the difference between GMV and the total repositioning cost.
At each time step, this fleet controller can command three actions to each taxi:
• To move a taxi to a nearby area (repositioning);
• To assign a nearby ride request (order assignment);
• To wait (idle).
We note that, if there are not enough ride requests nearby, taxis can travel a long distance to pick up passengers by repeating the repositioning action over several time steps.
Here, we assume that all taxis are homogeneous and always follow commands without exception. When the controller assigns a ride request, which consists of an origin, a destination, a fare, and an estimated time of arrival at the destination, to a taxi, the vehicle goes offline at the origin and becomes available to follow commands again after arriving at the destination (goes online). Finally, when the controller decides to make a taxi idle, the vehicle is available to follow control actions in the next time step at its current location. Throughout the paper, we use the terms taxi and vehicle interchangeably.

Fleet Management System Model
We partition an area of interest into regular hexagons of the same size, and we also uniformly partition a day into intervals (typically of 10 minutes). Although one can consider a more accurate model, we focus on this simple hexagonal model for several reasons. First, the size of the hexagonal grid is chosen so that the median time to travel from the center of one cell to its boundary is approximately 10 minutes, so that vehicles can move to a neighboring cell within a single time interval. Moreover, as we will see in the numerical experiments, the efficiency of the random method is reasonably close to that of our method, which suggests that the model approximates reality well. Furthermore, a more accurate simulation would require a massive amount of computing resources, which is infeasible. For these reasons, previous studies also rely on the same hexagonal discretization [18], [38]. Figure 1 shows a hexagonal grid map of the Seoul metropolitan area. We use this spatial discretization to model the dynamics of vehicle and order distributions. Aside from the map boundary, each cell in the hexagonal grid map has 6 adjacent neighboring cells. To model this connectivity, we construct a graph in which each vertex is a cell in the hexagonal grid, denoted by N i , i = 1, . . . , n, and an edge exists between two vertices if the corresponding cells are adjacent in the hexagonal grid map. More precisely, we define L as the n × n adjacency matrix of nodes, given by L ij = 1 if N i and N j are adjacent (or i = j), and L ij = 0 otherwise. Note that we allow self-loops, so that L ii = 1 for all i, which means that the movement from one hexagonal region to itself is also considered. Thus, if N i is located in the interior of the map, the ith row and the ith column of L have exactly 7 ones, and the remaining rows and columns have fewer than 7 ones. For example, consider the 7 hexagonal cells shown in Figure 2.
In this grid, N 7 is adjacent to all the other cells, so the 7th row and the 7th column of the adjacency matrix L consist entirely of ones. The time interval between two consecutive dispatch decisions is chosen as 10 minutes. Thus, one day is discretized into T = 144 time steps, and we use the time index t ∈ {0, 1, ..., T − 1}. It is assumed that, in each time step, a vehicle can stay in its current cell or move to a neighboring cell. At the beginning of time step t, the fleet controller has information about the number of ride requests r i,t and the driver distribution d i,t . Given this information, it computes a taxi dispatch solution x t . Then, the vehicle redistribution process begins based on the dispatch solution, as illustrated in Figure 3. The process of redistributing vehicles in each time step consists of three stages, as presented below.
1. In the first stage, a vehicle located in N i is capable of moving to N j if L ij = 1. Thus, every vehicle can stay in the same hexagonal cell or move to a neighboring cell. Let d i,t denote the number of drivers or vehicles in N i at the beginning of time step t. The controller decides the number of vehicles that move from N i to N j in time step t, denoted by x i→j,t . If L ij = 0, then clearly x i→j,t = 0. Finally, the reposition cost c i→j,t is imposed when a vehicle moves from N i to N j in time step t.
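To make the adjacency structure concrete, the matrix L for the 7-cell example can be assembled in a few lines. This is an illustrative Python sketch; the circular labeling of the ring cells N 1 , . . . , N 6 and the function name `hex_adjacency` are our own assumptions, since the paper only fixes that the center cell N 7 is adjacent to all others and that self-loops are allowed.

```python
def hex_adjacency(n_ring=6):
    """Adjacency matrix L for one ring of cells around a center cell.

    Index n_ring (here 6) plays the role of the center cell N7; indices
    0..5 are the ring cells N1..N6 in a hypothetical circular order.
    """
    n = n_ring + 1
    L = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # self-loops
    center = n_ring
    for i in range(n_ring):
        j = (i + 1) % n_ring                 # consecutive ring cells touch
        L[i][j] = L[j][i] = 1
        L[i][center] = L[center][i] = 1      # center touches every ring cell
    return L

L = hex_adjacency()
print(sum(L[6]))  # 7: an interior-like cell's row has exactly 7 ones
print(sum(L[0]))  # 4: a boundary ring cell has fewer
```

On a full map, the rows corresponding to interior cells contain 7 ones each (6 neighbors plus the self-loop), as stated above.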
2. In the second stage, drivers take ride requests in their current cell. Let r i,t denote the number of ride requests in N i in time step t. Whenever a driver in N i serves a request, the driver is excluded from d i,t . After the trip is completed, the driver is added to the destination cell N j .
3. In the third stage, the total number of vehicles in N i is adjusted based on each vehicle's status. Some vehicles may go offline due to the end of service, while some other vehicles may go online to start the service. Moreover, any vehicle with a ride request just completed becomes online in the destination cell.
In the second stage, after vehicles are repositioned, the controller automatically matches drivers and ride requests, starting from the most expensive requests, to maximize the income from the served requests. The controller neglects other factors, such as the expected travel time. We will justify this matching strategy in Section 4 by empirically showing that the performance of our solution is similar to that of the oracle, which uses all data about ride requests. We note that this process achieves fairness among drivers in the same cell by randomly assigning ride requests to them. Under this automated matching assumption, maximizing GMV is equivalent to serving as many ride requests as possible, and we consider the latter as an objective function in the optimization problem to be introduced. The notation used throughout the paper is summarized in Table 1. We also use the following notation for vectors and matrices: d t := (d 1,t , . . . , d n,t ) ∈ R^n and d := [d 0 · · · d T −1 ] ∈ R^{n×T}; similar notation is used for r i,t . Moreover, we let x t := (x 1→1,t , . . . , x 1→n,t , . . . , x n→1,t , . . . , x n→n,t ) ∈ R^{n^2} and x := [x 0 · · · x T −1 ] ∈ R^{n^2 ×T}; similar notation is used for c i→j,t .
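The automated in-cell matching described above can be sketched as follows. The function name and interface are hypothetical; the paper specifies only that requests are served in descending order of fare until the drivers in the cell run out.

```python
def match_requests(num_drivers, fares):
    """Greedy in-cell matching: serve requests in descending order of fare.

    `fares` lists the fares of the ride requests currently in one cell;
    the returned list contains the fares of the served requests, i.e.,
    this cell's GMV contribution in the current time step.
    """
    return sorted(fares, reverse=True)[:num_drivers]

# 2 drivers and 4 requests: the two most expensive requests are served.
print(match_requests(2, [5.0, 12.0, 7.5, 3.0]))  # [12.0, 7.5]
```

Because the drivers in a cell are interchangeable, which driver gets which of the selected requests can then be decided by random assignment, as described above.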

Multi-Objective Taxi Dispatch Problem
The objective of taxi dispatch is to maximize GMV or, equivalently, to serve as many ride requests as possible with minimal relocation costs. Thus, the goal of the fleet controller is twofold: (i) to maximize the number of served requests, and (ii) to minimize the total cost incurred by relocating vehicles. Therefore, it is natural to consider a multi-objective optimization problem. We first note that the number of served requests in N i cannot be larger than the number of drivers or the number of ride requests in the same cell. After repositioning, the number of drivers in N i in time step t is given by Σ_j x j→i,t . Thus, the total number of served requests in N i in time step t is given by min( Σ_j x j→i,t , r i,t ). Hence, the first objective function, which represents the total number of served requests throughout the entire day, is defined by

f̄ + (x) := Σ_t Σ_i min( Σ_j x j→i,t , r i,t ).   (2.1)

On the other hand, the cost of repositioning vehicles in time step t is simply Σ_i Σ_j c i→j,t x i→j,t . Thus, the second objective function, which computes the total repositioning cost, is defined by

f̄ − (x) := Σ_t Σ_i Σ_j c i→j,t x i→j,t .   (2.2)

Note also that the following constraints must be satisfied for all i, j = 1, . . . , n and t = 0, . . . , T − 1:

d i,t+1 = Σ_j x j→i,t − min( Σ_j x j→i,t , r i,t ),   (2.3a)
Σ_j x i→j,t ≤ d i,t ,   (2.3b)
x i→j,t ≥ 0, with x i→j,t = 0 whenever L ij = 0.   (2.3c)

Here, the first constraint describes the evolution of the number of vehicles in N i : the vehicles available in the next time step are those repositioned into N i that did not serve a request. The second constraint represents that the number of vehicles moving out of N i is limited by the number of drivers in the cell.
Putting all the pieces together, we formulate the taxi dispatch problem as the following multi-objective optimization problem:

maximize_x ( f̄ + (x), −f̄ − (x) )  subject to (2.3a)-(2.3c).   (2.4)

The major concern in solving this problem is the lack of knowledge about the number of ride requests r i,t for future time steps. As time goes by, information about the r i,t 's unfolds. Thus, instead of directly solving this problem offline, we optimize the objective functions in a receding horizon fashion. In the next section, we explain the receding horizon optimization problem in detail, and we solve it by using an equivalent minimum cost maximum flow problem and a linear program.

Converting to A Network Flow Problem
We first consider the simplest case when T = 1. In this case, the constraint (2.3a) has no effect.
Recall that the fleet controller redistributes the vehicles in each hexagonal cell to its adjacent cells following the three-stage procedure. We interpret this procedure as controlling the flow between nodes. Let V i and W i be copies of the nodes N i , i = 1, . . . , n. We regard the movement of vehicles in the first stage as the flow from the nodes V i to W j . Here, the flow capacity between V i and W j is defined as +∞ if L ij = 1 and 0 otherwise. Moreover, since no vehicles can move from the future to the past, the flow capacity from W j to V i is set to 0, regardless of the value of L ij . To complete the network flow formulation, we introduce an artificial source, S, and a sink, T . The flow capacity from S to V i corresponds to the number of vehicles at the beginning of the first stage, which is equal to d i,0 , and the flow capacity from W i to T corresponds to the number of requests, which is r i,0 . Finally, the cost of moving from V i to W j is c i→j,0 , and the cost of moving from S to V i or from W i to T is 0. Figure 4 shows the flow diagram converted from the original problem with the 7 hexagonal regions illustrated in Figure 2. The one-to-one relationships between the taxi dispatch problem and the network flow problem are summarized in Table 2.
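The construction above can be sketched programmatically. The following Python helper emits the edge list of the T = 1 flow network as (tail, head, capacity, cost) tuples; the node numbering and the use of the total fleet size as a finite stand-in for the +∞ capacities are our own illustrative choices.

```python
def build_network(L, d0, r0, c):
    """Edge list (u, v, capacity, cost) of the T = 1 flow network.

    Node ids: S = 0, V_i = i + 1, W_i = n + i + 1 (for i = 0..n-1), T = 2n + 1.
    Capacities between V_i and W_j are set to sum(d0), a finite stand-in for
    +inf that no feasible flow can exceed.
    """
    n = len(d0)
    S, T = 0, 2 * n + 1
    V = lambda i: 1 + i
    W = lambda i: 1 + n + i
    big = sum(d0)
    edges = []
    for i in range(n):
        edges.append((S, V(i), d0[i], 0))      # supply: drivers in N_i
        edges.append((W(i), T, r0[i], 0))      # demand: requests in N_i
        for j in range(n):
            if L[i][j] == 1:                   # movement allowed only between
                edges.append((V(i), W(j), big, c[i][j]))  # adjacent cells
    return edges

# Two fully connected cells with d = (3, 1), r = (1, 3), unit cross-cell cost.
edges = build_network([[1, 1], [1, 1]], [3, 1], [1, 3], [[0, 1], [1, 0]])
print(len(edges))  # 8 edges: 2 supply, 2 demand, 4 movement
```

The edge list can then be handed to any minimum cost maximum flow solver; the flow on each (V i , W j ) edge is read off as the dispatch decision x i→j,0 .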
In this network flow setup, we can use the minimum cost maximum flow (MCMF) algorithm introduced in [41,42]. To describe the MCMF algorithm, we introduce several concepts about directed flow networks. Let G = (V, E) be a digraph with a set of vertices V = {v 1 , v 2 , . . . , v n } that includes a source, S, and a sink, T , and the set of edges E ⊆ V × V . Assume that the capacity s ij and cost c ij associated with each edge (v i , v j ) are given. Let f ij denote the flow from v i to v j . We now define the residual network and augmented path as follows:

Given a flow f , the residual network G f is defined on the same vertex set V , with residual capacity s f ij := s ij − f ij for each pair (v i , v j ), under the skew-symmetry convention f ji = −f ij . Here, s f ij represents the capacity of edge (v i , v j ) in the residual network. An augmented path is a path p from the source S to the sink T in G f along which every edge has positive residual capacity. The MCMF algorithm (Algorithm 1) computes the maximum flow while minimizing the total cost by repeatedly augmenting the flow along a minimum cost augmented path. It is well known that the MCMF algorithm has time complexity O(|V | × |E| × F ), where F denotes the maximum amount of flow from S to T [42].

Taxi dispatch               | Network flow problem
d i,t (# of drivers)        | flow capacity from S to V i
r i,t (# of requests)       | flow capacity from W i to T
c i→j,t (reposition costs)  | flow costs

Table 2: Relationship between the taxi dispatch problem and the network flow problem

Algorithm 1 Minimum cost maximum flow (MCMF) algorithm
1: Initialize f ij ← 0 for all (v i , v j ) ∈ E
2: while there exists an augmented path from S to T in the residual network G f do
3:    Find a minimum cost augmented path p
4:    s f ← min{ s f ij | (v i , v j ) ∈ p }
5:    for all (v i , v j ) ∈ p do
6:        f ij ← f ij + s f
7:        f ji ← f ji − s f
8:    end for
9: end while

By the equivalence between the taxi dispatch problem (2.4) and the network flow problem, the former can be solved using Algorithm 1. For example, consider the network shown in Figure 4. Here, the vertex set is V = {S, T , V i , W i | i = 1, . . . , 7}, and the edge set is given by E = {(S, V i ), (W i , T ) | i = 1, . . . , 7} ∪ {(V i , W j ) | i, j = 1, . . . , 7 such that L ij = 1}. The capacities of (S, V i ) and (W j , T ) are set to be d i,0 and r j,0 , respectively, and the capacity of (V i , W j ) ∈ E is set to be +∞. Finally, the cost of sending flow is set to be c i→j,0 for (V i , W j ) ∈ E and zero for all the other edges. Then, the following proposition is a direct result of the MCMF algorithm, applied to the taxi dispatch problem.
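For concreteness, Algorithm 1 can be implemented compactly, with Bellman-Ford used to find a minimum cost augmented path in the residual network (Bellman-Ford is one standard choice; the paper does not prescribe a particular shortest-path routine). The toy instance below has two cells with drivers d = (3, 1), requests r = (1, 3), and unit cost for moving between the cells.

```python
from collections import defaultdict

def mcmf(edges, s, t):
    """Minimum cost maximum flow via successive minimum cost augmented paths.

    `edges` is a list of (u, v, capacity, cost) tuples. Returns the pair
    (maximum flow value, minimum total cost). Illustrative, not optimized.
    """
    cap, cost, adj = defaultdict(int), defaultdict(int), defaultdict(set)
    for u, v, c, w in edges:
        cap[(u, v)] += c
        cost[(u, v)], cost[(v, u)] = w, -w   # reverse residual edges refund cost
        adj[u].add(v)
        adj[v].add(u)

    flow_val = flow_cost = 0
    while True:
        # Bellman-Ford: minimum cost augmented path in the residual network.
        dist = {v: float('inf') for v in adj}
        dist[s], parent = 0, {}
        for _ in range(len(adj) - 1):
            updated = False
            for u in adj:
                if dist[u] == float('inf'):
                    continue
                for v in adj[u]:
                    if cap[(u, v)] > 0 and dist[u] + cost[(u, v)] < dist[v]:
                        dist[v] = dist[u] + cost[(u, v)]
                        parent[v] = u
                        updated = True
            if not updated:
                break
        if dist[t] == float('inf'):
            break                            # no augmented path remains
        path, v = [], t                      # recover the path S -> ... -> T
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[e] for e in path)     # bottleneck residual capacity s^f
        for u, v in path:                    # augment: f_ij += s^f, f_ji -= s^f
            cap[(u, v)] -= push
            cap[(v, u)] += push
            flow_cost += push * cost[(u, v)]
        flow_val += push
    return flow_val, flow_cost

# Toy 2-cell instance: S=0, V1=1, V2=2, W1=3, W2=4, T=5.
edges = [(0, 1, 3, 0), (0, 2, 1, 0),         # S -> V_i with capacity d_i
         (1, 3, 10, 0), (1, 4, 10, 1),       # V_i -> W_j with reposition cost
         (2, 3, 10, 1), (2, 4, 10, 0),
         (3, 5, 1, 0), (4, 5, 3, 0)]         # W_i -> T with capacity r_i
print(mcmf(edges, 0, 5))  # (4, 2): all 4 requests served at reposition cost 2
```

In the toy instance, the surplus drivers in cell 1 are relocated to cell 2, so all four requests are served while only two vehicles pay the unit reposition cost.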

Receding Horizon Optimization
In this subsection, we extend the idea introduced in the previous subsection to the receding horizon optimization problems for taxi dispatch. To set up the receding horizon optimization problem (with prediction horizon K) associated with our multi-objective taxi dispatch problem, let

f +,t (x) := Σ_{k=t}^{t+K−1} Σ_i min( Σ_j x j→i,k , r i,k ),   (3.1)
f −,t (x) := Σ_{k=t}^{t+K−1} Σ_i Σ_j c i→j,k x i→j,k ,   (3.2)

which are modified from the original objective functions (2.1) and (2.2), respectively. The objective functions f +,t and f −,t represent the number of served requests and the repositioning cost, respectively, from time step t during the prediction horizon. With a slight abuse of notation, we let x := (x t , . . . , x t+K−1 ) in the receding horizon setting. Since the number of ride requests for future time steps is not available, in time step t we let

r k := r t for k = t + 1, . . . , t + K − 1.   (3.3)

This choice is reasonable when the distribution r t of ride requests does not change much during the prediction horizon. We further modify the second objective function f −,t by adding a regularizer to serve as many current requests as possible, rather than expecting uncertain future requests:

f̄ −,t (x) := f −,t (x) + Σ_{k=t}^{t+K−1} α(k − t) Σ_i min( Σ_j x j→i,k , r i,k ).   (3.4)

Here, α(k − t) denotes an artificial cost for serving a request in time step k. Given the observations d t , r t in the beginning of time step t, the receding horizon optimization problem for taxi dispatch can then be formulated as follows:

maximize_x ( f +,t (x), −f̄ −,t (x) )  subject to (2.3a)-(2.3c),   (3.5)

where all the constraints must hold for the prediction horizon. After obtaining an optimal solution x := (x t , . . . , x t+K−1 ), we use only x t to dispatch the currently available vehicles, as in model predictive control (MPC) methods. In fact, this problem can be considered to be an MPC problem with state d t , control input x t , system dynamics (2.3a), and constraints (2.3b) and (2.3c). We now explain the equivalence between (3.5) and the network flow problem, which is similar to that seen in the stationary case. Let V i,k , W i,k be the copies of the ith node N i for time step k ∈ {t, . . . , t + K − 1}.
As before, the flow can move from V i,k to W j,k and from W i,k to V i,k+1 without limitation, and it can move from S to V i,k with flow capacity d i,k . Since the ride requests can be matched in any of the time steps, the flow can move from W i,k to T with capacity r i,k . Lastly, we again attach the reposition cost c i→j,k to the edge between V i,k and W j,k , and the artificial cost α(k − t) to the edge between W i,k and T . For example, the extended flow diagram for K = 2 is illustrated in Figure 5. Recall that a single time step consists of three stages, as illustrated in Figure 3. We can connect the third stage of each time step to the first stage of the next time step to obtain the extended digraph. The flow from S to V i,t represents the current distribution of idle drivers (at the beginning of time step t). In the subsequent time steps of the prediction horizon, vehicles that are unused in one time step become available in the next; this relationship is captured by the flow from W i,k to V i,k+1 . Furthermore, as previously discussed, the flow from V i,k to W j,k represents the vehicles' movement in the first stage, and the flow from W i,k to T represents serving the ride requests. Thus, the receding horizon taxi dispatch problem (3.5) is equivalent to an MCMF problem on the extended digraph. Again, we can obtain an optimal dispatch solution by applying Algorithm 1 to the network flow problem with the extended graph. In the following two subsections, we examine two significant properties of the optimal dispatch solution obtained using the MCMF algorithm.
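The time-expanded network can be generated in the same way. In the sketch below, the source supplies drivers only at the first step of the horizon (the current observation d t ), unserved vehicles are carried over through the W i,k → V i,k+1 edges, future demand is approximated by the current observation r t as in the text, and the node numbering and the α·k cost weighting are our own illustrative choices.

```python
def build_rh_network(L, d_t, r_t, c, K, alpha):
    """Edge list (u, v, capacity, cost) of the time-expanded flow network.

    Node ids: S = 0, T = 1, V_{i,k} = 2 + 2*(k*n + i), W_{i,k} = V_{i,k} + 1.
    Future demand is approximated by the current observation r_t, and the
    artificial cost alpha*k is attached to serving a request k steps ahead.
    """
    n = len(d_t)
    S, T = 0, 1
    V = lambda i, k: 2 + 2 * (k * n + i)
    W = lambda i, k: V(i, k) + 1
    big = sum(d_t)                            # finite stand-in for +inf
    edges = []
    for k in range(K):
        for i in range(n):
            if k == 0:
                edges.append((S, V(i, 0), d_t[i], 0))          # current drivers
            else:
                edges.append((W(i, k - 1), V(i, k), big, 0))   # unserved carryover
            edges.append((W(i, k), T, r_t[i], alpha * k))      # serve requests
            for j in range(n):
                if L[i][j] == 1:
                    edges.append((V(i, k), W(j, k), big, c[i][j]))
    return edges

# Two fully connected cells over a horizon of K = 2 steps.
edges = build_rh_network([[1, 1], [1, 1]], [3, 1], [1, 3],
                         [[0, 1], [1, 0]], K=2, alpha=0.5)
print(len(edges))  # 16
```

Feeding this edge list to an MCMF solver and reading off the flows on the (V i,t , W j,t ) edges yields the dispatch decision x t for the current time step, as in MPC.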

Integer Property
The first property of the optimal dispatch solution x is that all of its entries are integers. This integer property is an important advantage of the proposed taxi dispatch method. In practice, it is physically impossible to use a non-integer dispatch solution. Thus, when using other methods that cannot guarantee the integer property, an additional rounding procedure is required to convert the solution into a vector of integers. However, this procedure will introduce an undesirable error and lead to a sub-optimal dispatch solution.
The following theorem holds for a general network flow problem with integer capacity.
Theorem 1. Suppose the capacities in the graph G = (V, E) are integers. Then, the optimal flow plan constructed by the MCMF algorithm is a vector of integers.
Proof. We use induction on the iterations of augmented paths. Let p 1 , . . . , p N be the minimum cost augmented paths selected in the iterations of Algorithm 1. We will show that the flow vector remains an integer vector after the nth iteration for all n. In Algorithm 1, we initialize the flow vector as 0. During the first iteration, we update the flow f ij and f ji on the edges of p 1 by s f = min{s f ij | (v i , v j ) ∈ p 1 }. Since the capacities s ij of the edges and the initial flow quantities f ij ≡ 0 are integers, the residual capacities s f ij along p 1 are also integers. Therefore, the updated entries f ij + s f and f ji − s f of the flow vector are integers after the first iteration. Now, suppose the flow vector f is an integer vector after the nth iteration. During the (n + 1)th iteration, we update the flow f ij and f ji on the edges of p n+1 by s f = min{s f ij | (v i , v j ) ∈ p n+1 }. By the same argument as for p 1 , the residual capacities s f ij along p n+1 are integers. Therefore, the updated entries f ij + s f and f ji − s f of the flow vector are again integers. Since we repeat this procedure until the algorithm terminates, the resulting optimal flow plan is also a vector of integers.
As a corollary, we have the integer property for the proposed dispatch solution x.
Corollary 1. Let x be the optimal dispatch plan obtained by applying the MCMF algorithm to the receding horizon optimization problem (3.5). Then, all the entries of x are integers.

Time-Greedy Property
We now highlight another important property of the optimal dispatch solution constructed using the MCMF algorithm. Recall that the motivation for introducing the regularizer in (3.4) is to serve as many requests as possible sooner rather than later, in order to reduce the negative impact of future uncertainty. In fact, we can show that the optimal dispatch solution obtained from the MCMF algorithm for the receding horizon optimization problem maximizes the number of served requests in the current time step t if the reposition costs are negligible. To prove this time-greedy property, we need the following lemma.
Lemma 1. A minimum cost augmented path selected in an iteration of Algorithm 1 does not contain a cycle.
By using Lemma 1, we can show the following time-greedy property of our optimal dispatch solution.
Theorem 2. Suppose there is no reposition cost, i.e., c = 0, and that α > 0. Let x := (x t , . . . , x t+K−1 ) be the optimal dispatch solution that is obtained by applying the MCMF algorithm to the receding horizon optimization problem (3.5). Then, x t maximizes the number of served requests in time step t.
Proof. Let N be the total number of iterations in Algorithm 1, and let p 1 , . . . , p N be the minimum cost augmented paths selected in each iteration. We first show that no p n contains an edge of the form (T , W i,k ). Suppose there exist (i, k) such that (T , W i,k ) ∈ p n for some n. By definition, every augmented path ends at T ; hence, there exists another index pair (j, l) such that p n ends with the edge (W j,l , T ). Therefore, the path p n would have a cycle of the form (T , W i,k , . . . , W j,l , T ). This contradicts Lemma 1; therefore, no minimum cost augmented path p n has an edge of the form (T , W i,k ).
Next, we show that the cost of p n is nonnegative. Suppose that the cost of p n is negative for some n. Since c = 0, costs are imposed only on the edges of the form (W i,k , T ) where t < k ≤ t + K − 1. Therefore, for p n to have a negative cost, it would need to contain at least one edge of the form (T , W i,k ). This contradicts the fact that p n does not have an edge of the form (T , W i,k ). Thus, the cost of p n is nonnegative for all n = 1, . . . , N .
Since the costs of p n are nonnegative, two cases are possible. The first case is that every p n has zero cost, and the second case is that there exists 1 ≤ n 0 ≤ N such that the costs of p 1 , . . . , p n 0 −1 are 0 and the cost of p n 0 is positive.
For the first case, we will show, by induction, that every p n can only include V i,t or W j,t , i.e., the nodes in time step t. Let us first consider p 1 . Since the flow was initialized as 0 in Algorithm 1, and the capacities of (W j,k , V i,k ) are set to be 0, p 1 cannot contain an edge of the form (W j,k , V i,k ) because s f W j,k V i,k = 0. Recall that the cost of p 1 was 0, which implies (W j,t , T ) ∈ p 1 . Therefore, the only possible form of p 1 is (S, V i,t , W j,t , T ) for some i, j, and the first induction step holds for p 1 . We now assume that p 1 , . . . , p n only include V i,t or W j,t and do not include V i,t+1 . Since the edge (W j,t , V i,t+1 ) is not contained in any of p 1 , . . . , p n , the flow on it is still 0. Hence, p n+1 cannot have any edge of the form (V i,t+1 , W j,t ), which implies that once p n+1 arrives at V i,t+1 , it cannot move back to W j,t . However, since p n+1 also has zero cost, it should end with the edge (W j,t , T ). Therefore, p n+1 can only include V i,t or W j,t and the induction holds for all n = 1, . . . , N . Until now, we have shown that every minimal cost augmented path p n contains only the vertices and edges of time step t. Therefore, in this case, the optimal dispatch solution is x = (x t , 0, . . . , 0), and x t is nothing but the optimal dispatch solution when K = 1. Since the optimal dispatch solution when K = 1 maximizes the number of served requests in time step t, the optimal dispatch solution x t also maximizes the number of served requests in time step t.
For the second case, since p_{n_0} is the first minimum cost augmenting path with a positive cost, after the (n_0 − 1)th iteration of Algorithm 1 is finished, there is no augmenting path with zero cost. As in the first case, this implies that the dispatch solution after the (n_0 − 1)th iteration maximizes the number of served requests in time step t. However, since the minimum cost augmenting paths p_n do not contain an edge of the form (T, W_{j,t}), the number of served requests in time step t cannot decrease during the iterations n = n_0, …, N. Therefore, at the end of the iterations, the optimal dispatch solution x also maximizes the total number of served requests in time step t.

Remark 1. The time-greedy property provides a worst-case performance guarantee in the following sense: the proposed solution x_t at least maximizes the number of served requests in time step t, even when the number of ride requests for future time steps is very different from the data used in the receding horizon optimization problem (3.5). On the other extreme, in which the data well approximates the requests for future time steps in the prediction horizon, the proposed solution should perform even better by the definition of the receding horizon optimization problem (3.5). These features justify the effectiveness of the proposed MPC-based method.

Linear Programming Method
To implement the proposed taxi dispatch method, the receding horizon optimization problem (3.5) must be solved within a fraction of the interval between two consecutive time steps. However, the running time of the MCMF algorithm is often long, particularly for large-scale fleet management with a nontrivial prediction horizon. To reduce the computation time needed to solve the taxi dispatch problem, we propose a linear programming (LP) approach.
We first note that the receding horizon optimization problem (3.5) is a multi-objective optimization problem. Since maximizing the number of served ride requests f_{+,t}(x) and minimizing the total cost f_{−,t}(x) are conflicting objectives, the set of optimal points of this problem forms a Pareto frontier, on which no other solution serves a greater number of requests at a lower cost. Among the points on the Pareto frontier, we focus on the point at which the number of served requests is maximized and the cost is minimized (see Figure 6). This belongs to the so-called a priori methods for solving multi-objective optimization problems. The scalarizing technique is often used to find the desired optimal point with the a priori selected preference [43]. Instead of considering the objective functions f_{+,t} and −f_{−,t} separately, we form the following optimization problem with a weighted sum of the two objectives (γ > 0):

max_x  f_{+,t}(x) − γ f_{−,t}(x),   (3.6)

where all the constraints are satisfied for i, j = 1, …, n and k = t, …, t + K − 1. If the positive scaling parameter γ is sufficiently small, an optimal solution of the reformulated problem (3.6) is the desired optimal dispatch plan.
Let X be the feasible set of (3.6), and let x* be a maximizer of f_{+,t}(x) − γ f_{−,t}(x) over X, i.e., x* ∈ arg max_{x∈X} {f_{+,t}(x) − γ f_{−,t}(x)}. Then, letting X_+ := arg max_{x∈X} {f_{+,t}(x)}, we have x* ∈ X_+ and, moreover, x* ∈ arg min_{x∈X_+} {f_{−,t}(x)}. Therefore, x* is the desired dispatch solution that minimizes the reposition cost among the solutions maximizing the number of served requests.
Proof. Recall that f_{+,t}(x) is the total number of served requests, which is an integer. Choose any x̄ ∈ X_+ and suppose that x* ∈ X \ X_+. Then f_{+,t}(x̄) ≥ f_{+,t}(x*) + 1, and since γ is sufficiently small, f_{+,t}(x̄) − γ f_{−,t}(x̄) > f_{+,t}(x*) − γ f_{−,t}(x*). Therefore, x* does not maximize f_{+,t}(x) − γ f_{−,t}(x) over X. This is a contradiction; hence, x* ∈ X_+. Moreover, for any x̄ ∈ X_+, we have f_{+,t}(x*) − γ f_{−,t}(x*) ≥ f_{+,t}(x̄) − γ f_{−,t}(x̄). Since both x* and x̄ maximize f_{+,t}(x) over X, we have f_{+,t}(x*) = f_{+,t}(x̄). Thus, we conclude that f_{−,t}(x*) ≤ f_{−,t}(x̄). This implies that x* minimizes f_{−,t}(x) over X_+.
By introducing a slack variable z, we can reformulate the problem (3.6) as a linear program.
Then, the scalarized problem (3.6) is equivalent to the following LP, where all the constraints are satisfied for i, j = 1, …, n and k = t, …, t + K − 1, and d_{i,t+K} is defined as 0.
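The explicit role of z in (3.7) is omitted in the text above, so the following is only a generic illustration of the slack-variable device on a small made-up instance (cell counts and names are ours): the piecewise-linear quantity min(supply_i, demand_i), the number of requests a cell can actually serve, is replaced by a variable z_i with two linear upper bounds, turning the problem into an LP:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-cell instance: served requests per cell cannot exceed
# either the idle drivers (supply) or the ride requests (demand) there.
supply = np.array([3.0, 5.0, 2.0])
demand = np.array([4.0, 1.0, 2.0])
n = len(supply)

# Slack variables z_i with z_i <= supply_i and z_i <= demand_i;
# maximizing sum z_i then recovers sum_i min(supply_i, demand_i).
A_ub = np.vstack([np.eye(n), np.eye(n)])
b_ub = np.concatenate([supply, demand])
res = linprog(c=-np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)

served = -res.fun  # min(3,4) + min(5,1) + min(2,2) = 6
```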
This is a standard result on equivalent reformulations using slack variables in optimization; thus, we omit the proof.
By Theorem 3 and Proposition 2, the LP (3.7) with γ ∈ (0, min{ is equivalent to the original multi-objective receding horizon optimization problem. 3 The reformulated optimization problem (3.7) can be solved using several existing LP algorithms, such as the simplex and interior point methods (see, e.g., [44][45][46] and the references therein). In this work, we used CPLEX among the various LP solvers. 4 It is difficult to efficiently accelerate the MCMF algorithm, whereas the LP can be parallelized when implemented with CPLEX [47]. In Table 3, we compare the running time of the MCMF algorithm with that of the proposed LP method when solving the receding horizon optimization problem (3.5) for a single time step with multiple prediction horizon lengths K. The LP method scales better with the prediction horizon length than the MCMF algorithm. When K = 16, the LP method is approximately 880 times faster than the MCMF algorithm. This result demonstrates the computational efficiency of the proposed LP approach, which enables the implementation of (3.5) in a receding horizon fashion.

Table 3: Computation time (in seconds) of the MCMF algorithm (Algorithm 1) and the LP method for solving (3.5) with different prediction horizon lengths.
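To make the LP route concrete, the following sketch solves a tiny transportation-style relocation instance (two cells, with costs, supplies, and demands invented by us) using an off-the-shelf LP solver. Because the constraint matrix of such flow problems is totally unimodular, the LP optimum is integral, mirroring the integer property claimed for the network flow formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-cell instance: move x[i][j] idle drivers from cell i to
# cell j to cover demand while minimizing the reposition cost.
cost = np.array([[0.0, 1.0],
                 [1.0, 0.0]])      # reposition cost between cells
supply = np.array([3.0, 1.0])      # idle drivers per cell
demand = np.array([1.0, 3.0])      # ride requests per cell

c = cost.ravel()                   # variables ordered x00, x01, x10, x11
A_eq = np.zeros((4, 4))
for i in range(2):
    A_eq[i, 2 * i:2 * i + 2] = 1.0  # sum_j x[i][j] = supply[i]
    A_eq[2 + i, i::2] = 1.0         # sum_i x[i][j] = demand[j]
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
x = res.x.reshape(2, 2)
# Flow polytope vertices are integral, so x needs no rounding.
print(x, res.fun)  # [[1. 2.] [0. 1.]] 2.0
```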

Experiment Setup
We conducted case studies using the data of taxi trips in the Seoul metropolitan area in 2018. 5 The data includes information about the ride requests and the drivers' status in each cell of the hexagonal grid over two consecutive weeks in October. The ride request information includes the estimated fare, the estimated travel time, and the indices of the departing and destination cells of each ride request. Moreover, the drivers' status information is used to estimate how many drivers go online and how many go offline in each cell. In our simulator, the number of ride requests r_{i,t} is given by the number of ride requests whose departing cell is N_i and whose starting time is t. The simulator computes the number of idle drivers d_{i,t} as the number of remaining drivers from the previous time step t − 1, plus the number of drivers going online, minus the number of drivers going offline:

d_{i,t} = (remaining drivers at time t − 1) + (drivers going online) − (drivers going offline).

When the number of drivers going offline is large, so that this quantity is negative, we simply define d_{i,t} as 0.
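In code, the simulator's per-cell driver update described above might look like the following minimal sketch (the function and argument names are ours, not the simulator's):

```python
def update_idle_drivers(remaining_prev, went_online, went_offline):
    """Sketch of the driver-count update:
    d_{i,t} = max(0, remaining_{i,t-1} + online_{i,t} - offline_{i,t})."""
    d = remaining_prev + went_online - went_offline
    return max(d, 0)  # clamp at 0 when many drivers go offline at once

print(update_idle_drivers(5, 2, 3))   # 4
print(update_idle_drivers(5, 2, 10))  # 0: more drivers left than were available
```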
In addition to the realistic case in which idle drivers can go online or offline, we also consider the case in which they cannot. In the latter case, the total number of drivers is preserved during the simulation. A single simulation lasts 24 hours from midnight of the studied day. At the beginning of every simulation, the number of idle drivers d_{i,0} at each node is initialized with that of the actual data.
We generated a hexagonal grid on the map of Seoul and put an index on each cell by using H3 [48]. The total number and the locations of the nodes are fixed for all experiments (Figure 7). There are 321 valid nodes in total, but some nodes are located in unreachable regions; we ignored these nodes when controlling the vehicles. All the numerical experiments were conducted on 64-bit Ubuntu 18.04.2 LTS with an Intel Core i7-8700K CPU @3.70GHz. The simulator was implemented in Python, and the algorithm for solving the LP problem was implemented in C with CPLEX Studio 12.9.

Figure 7: Simulator interface with arbitrary data. The numbers in each node represent the number of ride requests and the number of idle drivers, respectively. Each arrow indicates the vehicles' flow, and its width represents the amount of flow.

The Oracle
As a first comparison, we introduce the oracle, which uses all the information about ride requests, including future ones, to determine an optimal dispatching plan. The oracle also manages the movement of the fleet in three stages per time step, as seen in Figure 3. In the first stage, the oracle moves the idle drivers to neighboring nodes. In the second stage, the oracle decides whether or not each ride request is served. Finally, in the third stage, we assume that existing vehicles are not allowed to go offline and new vehicles are not allowed to go online. However, vehicles that complete a ride request can go online at their destination cells.
The oracle's dispatch solution maximizes the total profit from the ride-hailing service throughout the day. To set up the exact optimization problem for the oracle, let m be the total number of requests during a day. We denote the a-th request as a 5-tuple R^a = (R^a_dep, R^a_dest, R^a_start, R^a_end, R^a_price), where each component of R^a has the following information:
• R^a_dep: the grid index of the departing node of the request
• R^a_dest: the grid index of the destination node of the request
• R^a_start: the start time of the request
• R^a_end: the end time of the request
• R^a_price: the price of the request.
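A minimal sketch of this request record as a data structure (the class and field names are ours, chosen to mirror the 5-tuple above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    """One ride request R^a; fields mirror the 5-tuple in the text."""
    dep: int      # grid index of the departing node
    dest: int     # grid index of the destination node
    start: int    # start time step of the request
    end: int      # end time step of the request
    price: float  # fare of the request

# Example: a request from cell 12 to cell 47, starting at t=3, ending at t=5.
r = Request(dep=12, dest=47, start=3, end=5, price=8500.0)
```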
Moreover, we define R^dep_{i,k} as the set of requests whose departing node and start time are N_i and k, respectively. Similarly, we define R^dest_{i,k} as the set of requests whose destination node and end time are N_i and k, respectively. Note that the cardinality of R^dep_{i,k} is r_{i,k}, the number of requests in cell N_i in time step k. Finally, since the oracle has to decide whether or not each request is served, we introduce decision variables whose values are 0 or 1. Precisely, for each (i, k), we introduce the vector b_{i,k} = (b^{a_1}_{i,k}, …, b^{a_{r_{i,k}}}_{i,k}) ∈ {0, 1}^{r_{i,k}} denoting the decision of the oracle on the requests in R^dep_{i,k}, where a_1, …, a_{r_{i,k}} ∈ R^dep_{i,k}. Here, b^a_{i,k} = 1 implies that the oracle decides to serve the request a ∈ R^dep_{i,k}. We denote the entire vector of the b_{i,k} as b. Given all the data of ride requests (R^1, …, R^m), the optimization problem for the oracle can be formulated as the following mixed-integer linear program, where all the constraints are satisfied for all t = 0, 1, …, T − 1 and i = 1, 2, …, n, except the second constraint, which holds for t ≥ 1. The first constraint represents the balance law of taxi flow. More precisely, Σ_a b^a_{i,t} is the number of served requests in node N_i in time step t, and s_{i,t} denotes the number of remaining drivers in node N_i after the requests in time step t are served. This quantity should be the same as the number of drivers before the requests are served, which is the right-hand side of the first constraint. As specified in the second constraint, the number of idle drivers d_{i,t} should be the same as the sum of the number of remaining drivers s_{i,t−1} and the number of drivers going online after serving the requests whose destination and end time are N_i and t, respectively.
To solve (4.1), we relax the integer constraint b ∈ {0, 1}^m to 0 ≤ b^a_{i,t} ≤ 1. The relaxed problem is then an LP. However, the optimal b of the relaxed LP, obtained by CPLEX, was an integer vector in all of our numerical experiments. 6 Thus, this solution is optimal for the original problem (4.1). Note that the performance of the oracle must be at least as good as that of any other algorithm, since the oracle uses all of the ride request data and foresees the future. For example, the oracle can handle unpredictable ride requests caused by special events, while other algorithms cannot.
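The relaxation pattern can be sketched on a toy instance (a single cell with made-up fares and driver counts, not the paper's data): the binary serve/ignore decisions are relaxed to the interval [0, 1], and for this kind of capacity constraint the LP optimum still lands on integer values:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-cell instance: 3 requests with the fares below,
# but only 2 idle drivers, so at most 2 requests can be served.
fares = np.array([5.0, 3.0, 2.0])
idle_drivers = 2.0

# Relax b in {0,1}^3 to 0 <= b <= 1 and maximize the total fare.
res = linprog(
    c=-fares,                  # linprog minimizes, so negate the fares
    A_ub=np.ones((1, 3)),      # sum_a b_a <= idle drivers
    b_ub=[idle_drivers],
    bounds=[(0.0, 1.0)] * 3,
)
b = res.x
# The relaxed optimum is integral: serve the two highest-fare requests.
print(b, -res.fun)  # [1. 1. 0.] 8.0
```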

Other algorithms
To compare our method with other algorithms, we considered a deep reinforcement learning (RL) algorithm, the random-move algorithm, and the proportional-to-demand algorithm. The deep RL algorithm is an actor-critic method in which drivers in a particular grid cell use information within a radius of three hexagons [27,49]. The random-move algorithm equally distributes the idle drivers at a particular node to the adjacent nodes. The proportional-to-demand algorithm distributes the idle drivers at a particular node to the adjacent nodes proportionally to the number of requests in each node. We provide the details of these algorithms in the Appendix.

Results with the original driver data
We compared the performance of the aforementioned algorithms with that of the proposed flow network-based LP method, which we refer to as FlowOpt throughout this subsection. We first used the original initial driver data d_{i,0}; simulations with modified initial driver data will be discussed in the following subsection. We set the prediction horizon K = 30 for FlowOpt, since 30 is slightly larger than the diameter of the directed graph generated by the map of Seoul. Therefore, any vehicle starting at a particular node can reach all the other nodes within the prediction horizon of FlowOpt. We used two performance measures. The first is the relative profit, given by

((Actual GMV) − (Total reposition cost)) / (Maximal GMV).

Table 4: The relative profit and income (on average) obtained by the algorithms without allowing the drivers to go offline when not serving a ride or new drivers to go online.
The second is the relative income. Here, the gross merchandise volume (GMV) represents the sum of the served requests' fares, as discussed at the beginning of Section 2. The maximal GMV is achieved if all the ride requests are served. Figure 8 displays the performance of FlowOpt, the oracle, the deep RL algorithm, and the two rule-based algorithms without allowing the drivers to go offline when not serving a ride or new drivers to go online. The results indicate that FlowOpt performs almost as well as the oracle and significantly outperforms the two rule-based algorithms. This is a surprising result, because the oracle uses all the data and foresees the future, while FlowOpt only uses the data of the current time step, together with the receding horizon approximation. This implies that the receding horizon approximation not only simplifies the problem but also provides a reasonable approximation. One interesting feature shown in Figure 8 is that the performance of all the methods is degraded on Day 4 and Day 13. These unexpected sudden drops in performance are due to special events held in Seoul on those days. Table 4 summarizes the relative profit and income averaged over the two-week period. On average, the performance of FlowOpt was 97.16% of the oracle's performance. Furthermore, the computation time of FlowOpt for determining a taxi dispatching plan for a single time step is at most around 3 seconds. Since the interval of a single time step is 10 minutes, the computation time of FlowOpt is sufficiently small, and FlowOpt can therefore be used for real-time taxi dispatching. Figure 9 shows the performance of all the algorithms except the oracle when allowing drivers to go online and offline. We excluded the oracle from this simulation experiment, since including it leads to an undesirable outcome: because the oracle uses the status data of real drivers, it knows in which hexagonal cell the real drivers go offline.
Thus, the oracle tends not to move idle drivers to the cells where the real drivers go offline. This prevents the drivers from going offline; therefore, whether the drivers go online and offline becomes meaningless to the oracle. Again, FlowOpt demonstrated the best performance, although the overall performance decreased by approximately 5% from the previous case. We note that the performance on Day 3 is significantly lower than in the previous case. According to the driver data on Day 3, a large number of drivers go offline in the early morning (around 5AM). Thus, when drivers are allowed to go online/offline, there is a shortage of drivers during the morning commute, and the performance decreases. In Table 5, we report the relative profit and income averaged over the two-week period when allowing drivers to go online/offline.

Table 5: The relative profit and income (on average) obtained by the algorithms when allowing drivers to go online/offline.

Results with reduced driver data
In the data, there are enough taxi drivers to serve almost all ride requests. Thus, we tested whether FlowOpt can perform as well as the oracle even when there are not enough drivers. We modulated the initial number of idle drivers d_{i,0} by multiplying it by a "driver multiplier" so that the algorithms were evaluated with fewer initial drivers. Figure 10 displays the performance of the algorithms with fewer drivers on Day 1, without allowing the drivers to go offline when not serving a ride or new drivers to go online. Obviously, as the number of drivers decreases, it is less likely that an arbitrary ride request is served. This leads to a decrease in the GMV, which explains why the relative profit decreases as the driver multiplier gets smaller. Nevertheless, FlowOpt maintains the best performance for every choice of driver multiplier among all algorithms other than the oracle.

Reposition costs
We then considered the reposition costs incurred by the algorithms. Figure 11 shows the ratio of the total reposition cost to the maximal GMV for each algorithm, without allowing the drivers to go offline when not serving a ride or new drivers to go online. We observed that FlowOpt, the oracle, and the deep RL algorithm maintain an almost negligible reposition cost. The oracle has the lowest reposition cost, and FlowOpt and the deep RL algorithm have reposition costs similar to that of the oracle. The two rule-based algorithms have significantly higher reposition costs than FlowOpt, since they have no explicit mechanism to reduce the reposition cost. The large fluctuations of the two rule-based algorithms stem from the same absence: because they simply send idle vehicles to adjacent nodes without considering the reposition cost, the ratio of the reposition cost to the maximal GMV depends heavily on the dynamics of the maximal GMV. The other algorithms, in contrast, control the reposition cost by including it in the objective function of the optimization problem; thus, although the data changes from day to day, they stably maintain the ratio of the reposition cost to the maximal GMV.

Effect of the Prediction Horizon
Lastly, we examined the effect of the prediction horizon length K on the performance of FlowOpt. Figure 12 displays the performance with various prediction horizon lengths K, without allowing the drivers to go offline when not serving a ride or new drivers to go online. The performance tends to increase with the prediction horizon length. When K is small, FlowOpt works well on ordinary days, including Day 1, Day 2, Day 9, and Day 14. However, on days with special events in Seoul, such as Day 3, Day 4, and Day 13, there is a significant benefit to using a large K.

Conclusion
In this paper, we proposed a predictive taxi dispatch method to serve as many ride requests as possible while minimizing the cost of repositioning vehicles. By converting the multi-objective receding horizon optimization problem into an MCMF problem, we obtained two desired theoretical properties: the integer property and the time-greedy property. For a near real-time dispatch solution, an LP method was also developed. According to the experimental results using the data for Seoul, South Korea, the performance of the proposed MPC method is almost as good as that of the oracle, and it outperforms the other methods that were evaluated. The performance of our solution also increases with the prediction horizon length. The proposed taxi dispatch method can be further extended in many interesting ways. First, it can be used in conjunction with statistical learning methods that predict demand. Second, a robust version of the proposed MPC method can be considered to overcome the issues that arise from prediction errors and uncertainties. Third, by using real-time traffic information, the travel time of vehicles can also be considered in the proposed multi-objective setting.

A.1 Deep RL algorithm
This subsection provides details about the deep RL algorithm used in our simulation studies. The RL algorithm uses the advantage actor-critic method, which consists of one actor and one critic, both parameterized by deep neural networks [50]. This actor-critic pair is optimized to yield the profit-maximizing dispatch solution for an arbitrary node. To be specific, our RL agent provides a generic solution to the distribution of requests and drivers, without considering the spatio-temporal information about a node (e.g., time, node ID, or geographic coordinate). Given a node index i ∈ {1, …, N}, the goal of the optimization is to maximize the discounted sum of the net incomes earned in N_i over all time steps, where the discount factor γ is set to 0.9. Note that the original problem is to optimize the net income for one day, but the RL agent solves the infinite horizon problem because it neglects the time limit: the RL agent assumes that the simulation runs forever and that the distribution of ride requests is the same each day. The state is given by a vector that includes the number of requests r_{·,t} and the number of drivers d_{·,t} in a node and its neighboring nodes. We define a neighboring node as a node within the first, second, or third layer in a hexagonal grid [49]; in other words, no more than two hexagons lie between the given node and a neighboring node. Thus, the dimensionality of a state is 2 × (1 + 6 + 12 + 18) = 74. The action is a 7-dimensional vector in which each entry indicates the number of drivers to move toward a specific direction (including staying in the current node). Ideally, an actor should provide a dispatch solution for the whole fleet in the node; however, it is difficult to implement such an actor in a neural network due to the high dimensionality of the action space. Thus, we construct an actor that computes a policy for a single driver.
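The state dimensionality above follows from the ring sizes of a hexagonal grid (the k-th ring around a cell has 6k hexagons). A quick check of the arithmetic, with a helper function named by us:

```python
def hex_ring_size(k):
    """Number of hexagons exactly k steps away from a cell on a hex grid."""
    return 1 if k == 0 else 6 * k

# Two features (requests, drivers) per cell, over the center cell plus
# its first three rings: 2 * (1 + 6 + 12 + 18) = 74.
state_dim = 2 * sum(hex_ring_size(k) for k in range(4))
print(state_dim)  # 74
```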
That is, the actor of our RL agent yields the probability of choosing each direction. If some directions are inaccessible, their probabilities are set to 0, and the probability vector is renormalized to sum to 1. Finally, each driver selects its direction by sampling from the probability vector.
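The masking-and-sampling step might be sketched as follows (function and variable names are ours; the 7 directions are the current node plus its 6 hexagonal neighbors, as in the action description above):

```python
import random

def sample_direction(probs, accessible):
    """Zero out inaccessible directions, renormalize, then sample one index."""
    masked = [p if ok else 0.0 for p, ok in zip(probs, accessible)]
    total = sum(masked)
    masked = [p / total for p in masked]  # renormalize to sum to 1
    r, acc = random.random(), 0.0
    for idx, p in enumerate(masked):      # inverse-CDF sampling
        acc += p
        if r < acc:
            return idx
    return len(probs) - 1                 # guard against floating-point rounding

# Hypothetical actor output over 7 directions; directions 2 and 5 are blocked.
probs = [0.2, 0.1, 0.3, 0.1, 0.1, 0.1, 0.1]
accessible = [True, True, False, True, True, False, True]
```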
The actor and the critic are implemented as fully connected networks. Each network consists of three hidden layers and an output layer. The hidden layers have output dimensions of 64, 32, and 16, from the first to the third layer, respectively, and the actor and the critic do not share any weights. For all layers, including the output layer, we use ReLU as the activation function.
We construct multiple RL agents, as described above, and optimize each agent for each day of the ride-hailing service. Each agent repeats the learning episode 100 times. In each episode, we run the simulation once using the actor, store the simulation data related to learning in the memory, and train the actor-critic with the memory. The memory contains the number of drivers, the number of ride requests, and the revenue, along with the reposition cost, in each node. The information is kept intact throughout an episode but is removed from the memory after the episode is finished. In each episode, mini-batch gradient descent is applied to the actor-critic 4000 times, with a mini-batch size of 3000 samples. We use Adam [51] to minimize the loss function when training the neural networks. The learning rates for the actor and the critic are 0.001 and 0.005, respectively.

A.2 Proportional-to-demand and Random-move algorithms
This subsection describes the two rule-based algorithms, namely the proportional-to-demand and random-move algorithms. For any grid cell N_i, let N_{i_1}, …, N_{i_l} be its neighboring hexagonal cells. In each time step t, the proportional-to-demand algorithm sends n_{i_j} drivers from N_i to N_{i_j}, where n_{i_j} is proportional to the number of requests in N_{i_j}. Precisely, the proportional-to-demand algorithm sends ⌊ω_{i,j} d_{i,t}⌋ drivers to the adjacent cell N_{i_j}, where the weight ω_{i,j} is given by

ω_{i,j} := r_{i_j,t} / (r_{i,t} + Σ_{k=1}^{l} r_{i_k,t}),

and ⌊x⌋ is the largest integer that does not exceed x. The remaining d_{i,t} − Σ_{j=1}^{l} ⌊ω_{i,j} d_{i,t}⌋ drivers stay in N_i.
On the other hand, the random-move algorithm does not consider the number of requests in the neighboring cells. It sends ⌊ω_{i,j} d_{i,t}⌋ drivers from N_i to the adjacent cell N_{i_j}, where the weight is chosen as ω_{i,j} = 1/(l + 1).
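The proportional-to-demand rule can be sketched directly from the weights above (function and argument names are ours; the grid indices are abstracted away):

```python
from math import floor

def proportional_dispatch(d_it, r_it, neighbor_requests):
    """Sketch of the proportional-to-demand rule: send floor(w_j * d_it)
    drivers to neighbor j, where w_j = r_{i_j,t} / (r_{i,t} + sum_k r_{i_k,t});
    the remaining drivers stay in the current cell."""
    total = r_it + sum(neighbor_requests)
    moves = [floor(d_it * r_j / total) for r_j in neighbor_requests]
    stay = d_it - sum(moves)
    return moves, stay

# 10 idle drivers; 5 local requests; two neighbors with 3 and 2 requests.
moves, stay = proportional_dispatch(d_it=10, r_it=5, neighbor_requests=[3, 2])
print(moves, stay)  # [3, 2] 5
```

The random-move rule is the special case in which every weight is replaced by 1/(l + 1), independent of the request counts.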