ON THE POWER OF (EVEN A LITTLE) RESOURCE POOLING

We propose and analyze a multi-server model that captures a performance trade-off between centralized and distributed processing. In our model, a fraction p of an available resource is deployed in a centralized manner (e.g., to serve a most-loaded station) while the remaining fraction 1 − p is allocated to local servers that can only serve requests addressed specifically to their respective stations. Using a fluid model approach, we demonstrate a surprising phase transition in the steady-state delay scaling, as p changes: in the limit of a large number of stations, and when any amount of centralization is available (p > 0), the average queue length in steady state scales as log_{1/(1−p)}(1/(1−λ)) when the traffic intensity λ goes to 1. This is exponentially smaller than the usual M/M/1-queue delay scaling of 1/(1−λ), obtained when all resources are fully allocated to local stations (p = 0). This indicates a strong qualitative impact of even a small degree of resource pooling. We prove convergence to a fluid limit, and characterize both the transient and steady-state behavior of the actual system, in the limit as the number of stations N goes to infinity. We show that the sequence of queue-length processes converges to a unique fluid trajectory (over any finite time interval, as N → ∞), and that this fluid trajectory converges to a unique invariant state v^I, for which a simple closed-form expression is obtained. We also show that the steady-state distribution of the N-server system concentrates on v^I as N goes to infinity.

1. Introduction. The tension between distributed and centralized processing seems to have existed ever since the inception of computer networks. Distributed processing allows for simple implementation and robustness, while a centralized scheme guarantees optimal pooling of processing resources, at the cost of implementation complexity and communication overhead. A natural question is how performance varies with the degree of centralization, or resource pooling. Such understanding is of great interest in the context of, for example, infrastructure planning (static) or task scheduling (dynamic) in large server farms or cloud computing clusters, and can provide insights on the trade-off between performance (e.g., delay) and cost (e.g., communication infrastructure, energy consumption, etc.).
It is well known that resource pooling can drastically improve performance, as exemplified by the comparison of M/M/1 and M/M/n queueing systems with the same total arrival and service rates. The main message of this paper is that even a small degree of resource pooling can deliver significant benefits. We capture this effect by formulating and analyzing a multi-server model with a limited level of centralization. We begin by describing informally two motivating applications.
1.1. Primary Motivation: Server farm with local and central servers. Consider a server farm consisting of N stations, depicted in Figure 1. Each station is fed with an independent stream of tasks, arriving at a rate of λ tasks per second, with 0 < λ < 1, and is equipped with a local server of identical performance; these servers are local in the sense that each one can only serve its own station. All stations are also connected to a single centralized server that always serves a task (if one exists) at a station with the longest queue.
We consider an N-station system. The system designer is granted a total amount N of divisible computing resources (e.g., a collection of processors). In a loose sense (to be formally defined in Section 2.1), this means that the system is capable of processing N tasks per second when fully loaded. The system designer allocates computing resources to local and central servers. Specifically, for some p ∈ (0, 1), each of the N local servers is able to process tasks at a maximum rate of 1 − p tasks per second, while the centralized server, equipped with the remaining computing power, is capable of processing tasks at a maximum rate of pN tasks per second. The parameter p captures the amount of centralization in the system. Note that since the total arrival rate is λN, with 0 < λ < 1, the system is underloaded for any value p ∈ (0, 1).

When the arrival processes and task processing times are random, there will be times when some stations are empty while others are loaded. Since a local server cannot help another station process tasks, the total computational resources will be better utilized if a larger fraction is allocated to the central server. However, a greater degree of centralization (corresponding to a larger value of p) entails more frequent communications and data transfers between the local stations and the central server, resulting in higher infrastructure and energy costs. How should the system designer choose the coefficient p? Alternatively, we can ask an even more fundamental question: is there a significant difference between having a small amount of centralization (a small but positive value of p), and complete decentralization (no central server and p = 0)?

1.2. Secondary Motivation: Partially centralized scheduling. Consider a system with N stations, depicted in Figure 2. The arrival assumptions are the same as in Section 1.1. However, there is no local server associated with a station; all stations are served by a single central server.
Whenever the central server becomes free, it chooses a task to serve as follows. With probability p, it processes a task from a most loaded station, with an arbitrary tie-breaking rule. Otherwise, it processes a task from a station selected uniformly at random; if the randomly chosen station has an empty queue, the current round is in some sense wasted (to be formalized in Section 2.1).
This second interpretation is intended to model a scenario where resource allocation decisions are made at a centralized location on a dynamic basis, but communications between the decision maker (central server) and local stations are costly or simply unavailable from time to time. While it is intuitively obvious that longest-queue-first (LQF) scheduling is more desirable, up-to-date state information (i.e., queue lengths at all stations) may not always be available to the central server. Thus, the central server may be forced to allocate its service blindly. In this setting, a system designer is interested in a judicious choice of the frequency (p) at which global state information is collected, so as to balance performance and communication costs.
As we will see in the sequel, the system dynamics in the two applications are captured by the same mathematical structure under appropriate stochastic assumptions on task arrivals and processing times, and hence will be addressed jointly in this paper.
1.3. Overview of main contributions. We provide here an overview of the main contributions. Exact statements of the results will be provided in Section 3 after the necessary terminology has been introduced.
Our goal is to study the performance implications of varying degrees of centralization (or resource pooling), as expressed by the coefficient p. To accomplish this, we use a so-called fluid approximation, whereby the queue length dynamics at the local stations are approximated, as N → ∞, by a deterministic fluid model, governed by a system of ordinary differential equations (ODEs).
Fluid approximations typically involve results of two flavors: qualitative results derived from the fluid model that give insights into the performance of the original finite stochastic system, and technical convergence results (often mathematically involved) that justify the use of such approximations. We summarize our contributions along these two dimensions:

1. On the qualitative end, we derive an exact expression for the invariant state of the fluid model, for any given traffic intensity λ and centralization coefficient p, thus characterizing the steady-state distribution of the queue lengths in the system as N → ∞. This enables a system designer to use any performance metric and analyze its sensitivity with respect to p. In particular, we show a surprising exponential phase transition in the scaling of the average system delay as the load approaches capacity (λ → 1) (Corollary 3 in Section 3.2): when an arbitrarily small amount of centralized computation is applied (p > 0), the average queue length in the system scales as log_{1/(1−p)}(1/(1−λ)) as the traffic intensity λ approaches 1. This is drastically smaller than the 1/(1−λ) scaling obtained if there is no centralization (p = 0). This suggests that for large systems, even a small degree of resource pooling provides significant improvements in the system's delay performance, in the heavy traffic regime.

2. On the technical end, we show that: (a) Given any finite initial queue sizes, and with high probability, the evolution of the queue length process can be approximated (over any finite time interval, and as N → ∞) by the unique solution to a fluid model.
(b) All solutions to the fluid model converge to an invariant state, as t → ∞, which is the same for all finite initial conditions (uniqueness and global stability).
(c) The steady-state distribution of the finite system converges to the invariant state of the fluid model as N → ∞.
The most notable technical challenge comes from the fact that the longest-queue-first policy used by the centralized server causes discontinuities in the drift in the fluid model (see Section 3.1 for details).
In particular, the classical approximation results for Markov processes (see, e.g., [2]), which rely on a Lipschitz-continuous drift in the fluid model, are hard to apply. Thus, in order to establish the finite-horizon approximation result (a), we employ a sample-path based approach: we prove tightness of sample paths of the queue length process and characterize their limit points. Establishing the convergence of steady-state distributions in (c) also becomes non-trivial due to the presence of discontinuous drifts. To derive this result, we first establish the uniqueness of solutions to the fluid model and a uniform (over a compact set of initial conditions) speed of convergence of stochastic sample paths to the solution of the fluid model.

1.4. Related work. To the best of our knowledge, the proposed model for the splitting of processing resources between local and central servers has not been studied before. However, the fluid model approach used in this paper is closely related to, and partially motivated by, the so-called supermarket model of randomized load-balancing. In that literature, it is shown that by routing tasks to the shortest queue among a small number (d ≥ 2) of randomly chosen queues, the probability that a typical queue has at least i tasks (denoted by s_i) decays as λ^{(d^i − 1)/(d − 1)} (super-geometrically), as i → ∞ ([3, 4]); see also the survey paper [8] and references therein. However, this sampling approach to load-balancing seems to offer little improvement when adapted to scheduling. In [5], a variant of the randomized load-balancing policy was applied to a scheduling setting with channel uncertainties, where the server always schedules a task from a longest queue among a finite number of randomly selected queues. It was observed that s_i no longer exhibits super-geometric decay and only moderate performance gain can be harnessed from sampling more than one queue.
In our setting, the system dynamics causing the exponential phase transition in the average queue length scaling are significantly different from those for the randomized load-balancing scenario. In particular, for any p > 0, the steady-state tail probabilities s i become zero for sufficiently large finite i, which is markedly faster than the super-geometric decay in the supermarket model.
On the technical side, arrivals and processing times used in supermarket models are often memoryless (Poisson or Bernoulli) and the drifts in the fluid model are typically continuous with respect to the underlying system state. Hence convergence results can be established by invoking classical approximation results, based on the convergence of the generators of the associated Markov processes. An exception is [7], where the authors generalize the supermarket model to arrival and processing times with general distributions. Since the queue length process is no longer Markov, the authors rely on an asymptotic independence property of the limiting system and use tools from statistical physics to establish convergence.
Our system is Markov with respect to the queue lengths, but a significant technical difference from the supermarket model lies in the fact that the longest-queue-first service policy introduces discontinuities in the drifts. For this reason, we need to use a more elaborate set of techniques to establish the connection between stochastic sample paths and the fluid model. Moreover, the presence of discontinuities in the drifts creates challenges even for proving the uniqueness of solutions for the deterministic fluid model. (Such uniqueness is used to establish convergence of steady-state distributions.) Our approach is based on a state representation that is different from the one used in the popular supermarket models, and which turns out to be surprisingly more convenient to work with for establishing the uniqueness of solutions to the fluid model.
Besides the queueing-theoretic literature, similar fluid model approaches have been used in many other contexts to study systems with large populations. Recent results in [6] establish convergence for finite-dimensional symmetric dynamical systems with drift discontinuities, using a more probabilistic (as opposed to sample path) analysis, carried out in terms of certain conditional expectations. We believe that it is possible to prove our results using the methods in [6], with additional work. However, the coupling approach used in this paper provides strong physical intuition on the system dynamics, and avoids the need for additional technicalities from the theory of multi-valued differential inclusions.
Resource pooling is known to improve performance [15-18], but much less is known about the impact of various degrees of pooling, or about scaling behaviors in large-system limits. Some recent work in this area [19] that studies limited pooling in a large-system limit is closer to our work in spirit, but still differs significantly in terms of critical modeling assumptions and dynamics. The notion of limited flexibility has also been studied in manufacturing systems, such as the celebrated Long Chain design [9, 10] and its variants [11-14]. However, models considered in this literature are typically applied to static allocation problems (with a single or small number of stages), whereas our system involves non-trivial queueing dynamics, where resource allocation decisions have to be made repeatedly over time.
Finally, there has been some work on the impact of service flexibility in routing problems, motivated by applications such as multilingual call centers. These date back to the seminal work of [20], with a more recent numerical study in [21]. These results show that the ability to route a portion of customers to a least-loaded station can lead to a constant-factor improvement in average delay under diffusion scaling. This line of work is very different from ours, but in a broader sense, both are trying to capture the notion that system performance in a random environment can benefit significantly from even a small amount of centralized coordination.
1.5. Organization of the paper. Section 2 introduces the precise model to be studied, our assumptions, and the notation to be used throughout. The main results are summarized in Section 3, where we also discuss their implications along with some numerical results. The remainder of the paper is devoted to establishing the technical results, and the reader is referred to Section 4.1 for an overview of the proofs. The steps of some of the more technical proofs are outlined in the main text, with complete proofs relegated to Appendix A. The procedure and parameters used for numerical simulations are described in Appendix D.
2. Model and notation. This section covers the modeling assumptions, system state representations, and mathematical notation that will be used throughout the paper. We provide some intuition behind our modeling choices and assumptions whenever possible, but if the ideas involved cannot be made transparent at this stage, we point the reader to explanations that will appear later in the paper.
2.1. Model. We present our model using terminology that corresponds to the server farm application in Section 1.1. Time is assumed to be continuous.
1. System. The system consists of N parallel stations. Each station is associated with a queue which stores the tasks to be processed. The queue length (i.e., number of tasks) at station n at time t is denoted by Q_n(t), n ∈ {1, 2, . . . , N}, t ≥ 0.

2. Arrivals. Stations receive streams of incoming tasks according to independent Poisson processes with a common rate λ ∈ [0, 1).

3. Task Processing. We fix a centralization coefficient p ∈ [0, 1].
(a) Local Servers. The local server at station n is modeled by an independent Poisson clock with rate 1 − p (i.e., the times between two clock ticks are independent and exponentially distributed with mean 1/(1−p)). If the clock at station n ticks at time t, we say that a local service token is generated at station n. If Q_n(t) ≠ 0, exactly one task from station n "consumes" the service token and leaves the system immediately. Otherwise, the local service token is "wasted" and has no impact on the future evolution of the system.
(b) Central Server. The central server is modeled by an independent Poisson clock with rate Np. If the clock ticks at time t at the central server, we say that a central service token is generated. If the system is non-empty at time t (i.e., if ∑_{n=1}^N Q_n(t) > 0), exactly one task from some station n, chosen uniformly at random out of the stations with a longest queue at time t, consumes the service token and leaves the system immediately. If the whole system is empty, the central service token is wasted.
Physical interpretation of service tokens. We interpret Q_n(t) as the number of tasks whose service has not yet started. For example, if there are four tasks at station n, one being served and three waiting, then Q_n(t) = 3. The use of local service tokens can be thought of as an approximation to a work-conserving server with an exponential service time distribution, in the following sense. Let t_k be the kth tick of the Poisson clock at the server associated with station n. If Q_n(t_k−) > 0, the ticking of the clock can be thought of as the completion of a previous task, so that the server "fetches" a new task from the queue to process, hence decreasing the queue length by 1. Therefore, as long as the queue remains non-empty, the time between two consecutive clock ticks can be interpreted as the service time for a task. On the other hand, if the local queue is currently empty, i.e., Q_n(t_k−) = 0, then our modeling assumption implies that the local server does nothing until the next clock tick at t_{k+1}, even if some task arrives during the period (t_k, t_{k+1}). Alternatively, this can be thought of as the server creating a "virtual task" whenever it sees an empty queue, and pretending to serve the virtual task until the next clock tick. In contrast, a work-conserving server would start serving the next task immediately upon its arrival. We have chosen the service token setup mainly because it simplifies the analysis, and because it can also be justified in the following ways.
1. Because of the use of virtual tasks, one would expect the resulting queue length process under our setup to provide an upper bound on the queue length process under a work-conserving server. We do not formally prove such a dominance relation in this paper, but note that a similar dominance result for GI/GI/n queues was proved recently (Proposition 1 of [26]).

2. Since the discrepancy between the two setups only occurs when the server sees an empty queue, one would also expect that the queue length processes in the two cases become comparable as the traffic intensity λ approaches 1, in which case the queue at a local server will be non-empty most of the time.
The same physical interpretation applies to the central service tokens.
Mathematical equivalence between the two motivating applications. We note here that the scheduling application in Section 1.2 corresponds to the same mathematical model. The arrival statistics to the stations are obviously identical in both models. For task processing, note that we can equally imagine all service tokens as being generated from a single Poisson clock with rate N . Upon the generation of a service token, a coin is flipped to decide whether the token will be directed to fetch a task from a random station for processing (corresponding to a local service token), or from a station with a longest queue (corresponding to a central service token). Due to the Poisson splitting property, this produces identical statistics for the generation of local and central service tokens as for the server farm application.
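The single-clock view just described also yields a compact way to simulate the system. The following is a minimal sketch (ours, not from the paper) of the embedded jump chain; the function name and parameter defaults are our own choices. Since the superposition of all clocks ticks at the constant rate N(λ + 1), holding times between events are i.i.d. exponential regardless of the state, so long-run time averages coincide with averages over the jump chain.

```python
import random

def avg_queue_length(N=100, lam=0.9, p=0.05, num_events=500_000, seed=1):
    """Simulate the embedded jump chain of the N-station system.

    By Poisson splitting, each tick of the rate-N(lam+1) superposed clock
    is an arrival w.p. lam/(lam+1), a local service token w.p.
    (1-p)/(lam+1), or a central service token w.p. p/(lam+1).
    """
    rng = random.Random(seed)
    Q = [0] * N          # Q[n] = queue length at station n
    total_tasks = 0
    acc = 0.0            # accumulates total_tasks over events
    for _ in range(num_events):
        u = rng.random() * (lam + 1.0)
        if u < lam:                        # arrival, at a uniform station
            Q[rng.randrange(N)] += 1
            total_tasks += 1
        elif u < lam + (1.0 - p):          # local service token
            n = rng.randrange(N)
            if Q[n] > 0:                   # otherwise the token is wasted
                Q[n] -= 1
                total_tasks -= 1
        else:                              # central service token
            if total_tasks > 0:            # serve a longest queue, ties uniform
                m = max(Q)
                n = rng.choice([i for i in range(N) if Q[i] == m])
                Q[n] -= 1
                total_tasks -= 1
        acc += total_tasks
    return acc / (num_events * N)          # average queue length per station
```

Running the function for a fixed λ and a range of p values gives a quick empirical view of the benefit of partial centralization, which can then be compared against the fluid-model predictions of Section 3.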

2.2. System state. Let us fix N. Since all events (arrivals of tasks and service tokens) are generated according to independent Poisson processes, the queue length vector at time t, (Q_1(t), Q_2(t), . . . , Q_N(t)), is Markov. Moreover, the system is fully symmetric, in the sense that all queues have identical and independent statistics for the arrivals and local service tokens, and the assignment of central service tokens does not depend on the specific identity of stations besides their queue lengths. Hence we can use a Markov process S^N(t) = (S_i^N(t))_{i=0}^∞ to describe the evolution of a system with N stations, where

S_i^N(t) = (1/N) ∑_{n=1}^N 1{Q_n(t) ≥ i}, i ≥ 0.

Each coordinate S_i^N(t) represents the fraction of queues with at least i tasks. Note that S_0^N(t) = 1, for all t and N, according to this definition. We call S^N(t) the normalized queue length process. We also define the aggregate queue length process as

V_i^N(t) = ∑_{j=i}^∞ S_j^N(t), i ≥ 0.

Note that

V_1^N(t) = ∑_{j=1}^∞ S_j^N(t) = (1/N) ∑_{n=1}^N Q_n(t).

In particular, this means that V_1^N(t) is equal to the average queue length in the system at time t. More generally, V_i^N(t) can be interpreted as the average of excess queue lengths above i − 1 at time t, in the sense that

V_i^N(t) = E[(X_t − (i − 1))^+],

where X_t is a random variable distributed according to the empirical queue length distribution in the system at time t: P(X_t ≥ i) = S_i^N(t), for all i ≥ 0. When the total number of tasks in the system is finite (hence all coordinates of V^N are finite), there is a straightforward bijection between S^N and V^N. Hence V^N(t) is Markov and also serves as a valid representation of the system state. While the S^N representation admits a more intuitive interpretation as the "tail" probability of a typical station having at least i tasks, it turns out that the V^N representation is significantly more convenient to work with, especially in proving uniqueness of solutions to the associated fluid model; the detailed reasons will become clear in the sequel (see Appendix B for an extensive discussion on this topic).
For this reason, we will be working mostly with the V N representation, but will in some places state results in terms of S N , if doing so provides a better physical intuition.
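To make the two representations concrete, the following small sketch (ours, not the paper's) computes S^N and V^N from a queue length vector and checks the average-queue-length identity V_1^N = (1/N) ∑_n Q_n.

```python
from collections import Counter

def s_representation(Q):
    """S_i = fraction of queues with at least i tasks (so S_0 = 1)."""
    N, m = len(Q), max(Q)
    counts = Counter(Q)          # counts[i] = number of queues with exactly i tasks
    s, at_least = [], N
    for i in range(m + 1):
        s.append(at_least / N)
        at_least -= counts[i]
    return s                     # coordinates beyond max(Q) are 0 and omitted

def v_representation(s):
    """V_i = sum_{k >= i} S_k: average excess queue length above i - 1."""
    v, tail = [], 0.0
    for si in reversed(s):
        tail += si
        v.append(tail)
    return v[::-1]

Q = [0, 2, 1, 3]
s = s_representation(Q)          # [1.0, 0.75, 0.5, 0.25]
v = v_representation(s)
assert abs(v[1] - sum(Q) / len(Q)) < 1e-12   # V_1 is the average queue length
```

The bijection in the other direction is just first differences: S_i = V_i − V_{i+1}.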

2.3. Notation. Let Z_+ be the set of non-negative integers. Several sets of state vectors, parameterized by a positive integer M (among them V^M, and its limiting version V^∞), will be used throughout the paper. We define the weighted ℓ_2 norm ‖·‖_w on R^{Z_+} by ‖x‖_w^2 = ∑_{i=0}^∞ x_i^2 / 2^i. In general, we will be using bold letters to denote vectors and ordinary letters for scalars, with the exception that a bold letter with a subscript (e.g., v_i) is understood as a (scalar-valued) component of a vector. Upper-case letters are generally reserved for random variables (e.g., V^{(0,N)}) or stochastic processes (e.g., V^N(t)), and lower-case letters are used for constants (e.g., v^0) and deterministic functions (e.g., v(t)). Finally, a function is in general denoted by x(·), but is sometimes written as x(t) to emphasize the type of its argument.
3. Summary of main results. In this section, we provide the exact statements of our main results. The main approach of our work is to first derive key performance guarantees using a simpler fluid model, and then apply probabilistic arguments (e.g., Functional Laws of Large Numbers) to formally justify that such guarantees also carry over to sufficiently large finite stochastic systems. Section 3.1 gives a formal definition of the core fluid model used in this paper, along with its physical interpretation. Section 3.2 contains results that are derived by analyzing the dynamics of the fluid model, and Section 3.3 contains the more technical convergence theorems that justify the accuracy of approximating a finite system using the fluid model approach. The proofs for the theorems stated here will be developed in later sections.
(c) for almost all t ∈ [0, ∞), and for every i ≥ 1, v_i(t) is differentiable and satisfies

(15) v̇_i(t) = λ(v_{i−1}(t) − v_i(t)) − (1 − p)(v_i(t) − v_{i+1}(t)) − g_i(v(t)),

where g_i(v) is the rate of task completions due to the central server (described below). We can write Eq. (15) more compactly as v̇(t) = F(v(t)). We call F(v) the drift at point v.
Interpretation of the fluid model. A solution to the fluid model, v(t), can be thought of as a deterministic approximation to the sample paths of V^N(t) for large values of N. Conditions (a) and (b) in Definition 1 correspond to initial and boundary conditions, respectively. The boundary conditions reflect the physical constraints of the finite system. For example, the condition v_0(t) − v_1(t) = 1 corresponds to the fact that V_0^N(t) − V_1^N(t) = S_0^N(t), where S_0^N(t) is the fraction of queues with a non-negative queue length, which is, by definition, 1. Similarly, each of the remaining boundary conditions corresponds to the fact that the fraction of queues at time t with exactly i tasks is a number between 0 and 1.
We now provide some intuition for each of the drift terms in Eq. (15):

I. λ(v_{i−1} − v_i): This term corresponds to arrivals. When a task arrives at a station with i − 1 tasks, the system has one more queue with i tasks, and S_i^N increases by 1/N. However, the number of queues with at least j tasks, for j ≠ i, does not change. Thus, S_i^N is the only component of S^N that gets incremented. Since V_i^N ≜ ∑_{k=i}^∞ S_k^N, this implies that V_i^N is increased by 1/N if and only if a task arrives at a queue with at least i − 1 tasks. Since all stations have an identical arrival rate λ, the probability of V_i^N being incremented upon an arrival to the system is equal to the fraction of queues with at least i − 1 tasks. We take the limit as N → ∞, multiply by the total arrival rate, Nλ, and then multiply by the increment due to each arrival, 1/N, to obtain the term λ(v_{i−1} − v_i).
II. (1 − p)(v_i − v_{i+1}): This term corresponds to the completion of tasks due to local service tokens. The argument is similar to that for the first term.
III. g_i(v): This term corresponds to the completion of tasks due to central service tokens, and takes one of three values.

If i ≥ 1 and v_i > 0, then g_i(v) = p: there is a positive fraction of queues with at least i tasks, hence the central server is working at full capacity, and the rate of decrease in v_i due to central service tokens is equal to the (normalized) maximum rate of the central server, namely p.

If v_i = 0 and v_{i−1} > 0, then g_i(v) = min{λ(v_{i−1} − v_i), p}, which is the rate at which v_i increases due to arrivals, capped at the central service capacity. Here the central server serves queues with at least i tasks whenever such queues arise, trying to keep v_i at zero. Thus, the total rate of central service tokens dedicated to v_i tries to match the rate of increase of v_i due to arrivals.

If v_i = v_{i−1} = 0, then g_i(v) = 0: there are no queues with i − 1 or more tasks, hence there is no positive rate of increase in v_i due to arrivals. Accordingly, the rate at which central service tokens are used to serve stations with at least i tasks is zero.
Note that, as was mentioned in the introduction, the discontinuities in the fluid model come from the term g(v), which reflects the presence of a central server.
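The fluid dynamics just described are straightforward to integrate numerically. The sketch below is our own construction (not from the paper): it truncates the state to K coordinates, enforces the boundary condition v_0 − v_1 = 1 at each step, uses the three-case drift for g_i(v) described above, and advances with a simple forward-Euler step.

```python
def fluid_step(v, lam, p, dt):
    """One forward-Euler step for the truncated fluid model.

    v = [v_1, ..., v_K]; the boundary condition v_0 - v_1 = 1 gives
    v_0 = 1 + v_1, and v_{K+1} is taken to be 0 (valid when the state
    is negligible beyond coordinate K).
    """
    K = len(v)
    ext = [1.0 + v[0]] + list(v) + [0.0]          # ext[i] = v_i, i = 0..K+1
    out = []
    for i in range(1, K + 1):
        arrive = lam * (ext[i - 1] - ext[i])          # term I: arrivals
        local = (1.0 - p) * (ext[i] - ext[i + 1])     # term II: local tokens
        if ext[i] > 0.0:
            g = p                                     # central server at full rate
        elif ext[i - 1] > 0.0:
            g = min(arrive, p)                        # match arrivals, up to p
        else:
            g = 0.0
        out.append(max(ext[i] + dt * (arrive - local - g), 0.0))
    return out

# Drive the fluid model from an empty state toward its invariant state
# (longer horizons and larger K give closer approximations):
lam, p = 0.9, 0.05
v = [0.0] * 60
for _ in range(20_000):
    v = fluid_step(v, lam, p, dt=0.02)
# v[0] now approximates the average queue length of the invariant state.
```

The clipping at zero mirrors the boundary behavior of g_i(v): coordinates that the central server keeps at zero are not allowed to go negative under the discretization.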

3.2. Analysis of the fluid model and exponential phase transition. The following theorem characterizes the invariant state for the fluid model. It will be used to demonstrate an exponential improvement in the rate of growth of the average queue length as λ → 1 (Corollary 3).

Theorem 2 (Invariant State of the Fluid Model). The fluid model admits a unique invariant state v^I. Writing s_i^I = v_i^I − v_{i+1}^I for all i ≥ 0, exact expressions for the invariant state are obtained in four cases, depending on the values of p and λ; in particular, case (3) covers 0 < p < λ with λ = 1 − p, and case (4) covers 0 < p < λ with λ ≠ 1 − p.

Proof. The proof consists of simple algebra to compute the solution to F(v^I) = 0. The proof is given in Section 6.1.
Case (4) in the above theorem is particularly interesting, as it reflects the system's heavy-traffic performance (λ close to 1) for any given value of p. Note that since s_i^I represents the probability of a typical queue having at least i tasks, the quantity v_1^I = ∑_{i=1}^∞ s_i^I represents the average queue length. The following corollary, which characterizes the average queue length in the invariant state for the fluid model, follows from Case (4) in Theorem 2 by some straightforward algebra.
Corollary 3 (Phase Transition in Average Queue Length Scaling). If 0 < p < λ and λ ≠ 1 − p, then the average queue length v_1^I admits a closed-form expression. In particular, for any fixed p > 0, v_1^I scales as

(25) v_1^I ∼ log_{1/(1−p)}(1/(1−λ)), as λ → 1.

The scaling of the average queue length in Eq. (25) with respect to the arrival rate λ is contrasted with (and is exponentially better than) the familiar 1/(1−λ) scaling when no centralized resource is available (p = 0).
Intuition for the exponential phase transition. Taking a closer look at the expressions for s^I, we notice that for any p > 0, the tail probabilities s^I have finite support: s_i^I "dips" down to 0 as i increases to a threshold i*(p, λ), which is even faster than a super-geometric decay.
The support of s^I is upper-bounded by i*(p, λ), which scales as log_{1/(1−p)}(1/(1−λ)) as λ → 1. Note that such tail probabilities with finite support imply that the fraction of stations with more than i*(p, λ) tasks decreases to zero as N → ∞. For example, we may have a strictly positive fraction of stations with, say, 10 tasks, but stations with more than 10 tasks hardly exist. While this may appear counterintuitive, it is a direct consequence of centralization in the resource allocation scheme. Since a fraction p of the total resource is constantly going after the longest queues, it is able to prevent long queues (i.e., queues with more than i*(p, λ) tasks) from even appearing. The threshold i*(p, λ) increasing to infinity as λ → 1 reflects the fact that the central server's ability to annihilate long queues is compromised by heavier traffic loads; our result essentially shows that the increase in i*(p, λ) is surprisingly slow.
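To see the gap numerically, one can tabulate the two scalings contrasted in Corollary 3 (a quick sketch; the function names are ours, and both expressions are bare scalings that ignore constant factors):

```python
import math

def pooled_scaling(lam, p):
    """log_{1/(1-p)}(1/(1-lam)): heavy-traffic scaling of the average
    queue length when a fraction p > 0 of resources is centralized."""
    return math.log(1.0 / (1.0 - lam)) / math.log(1.0 / (1.0 - p))

def local_only_scaling(lam):
    """1/(1-lam): the M/M/1-type scaling with no central server (p = 0)."""
    return 1.0 / (1.0 - lam)

for lam in (0.9, 0.99, 0.999, 0.9999):
    print(f"lam={lam}: pooled ~ {pooled_scaling(lam, 0.05):7.1f},"
          f"  local-only ~ {local_only_scaling(lam):9.1f}")
```

For p = 0.05, the pooled expression grows only from about 45 to about 180 as λ goes from 0.9 to 0.9999, while 1/(1−λ) grows from 10 to 10,000. Note that these are asymptotic scalings: for these bare expressions, the logarithm falls below 1/(1−λ) only once λ is quite close to 1 (around λ ≈ 0.99 for p = 0.05), after which the gap widens rapidly.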
Numerical results. Figure 3 compares the invariant state vectors for the case p = 0 (stars) and p = 0.05 (diamonds). When p = 0, s_i^I decays exponentially as λ^i, while when p = 0.05, s_i^I decays much faster, and reaches zero at around i = 40. Figure 4 demonstrates the exponential phase transition in the average queue length as the traffic intensity approaches 1, where the solid curve, corresponding to a positive p, increases significantly more slowly than the usual 1/(1−λ) delay scaling (dotted curve). Simulations show that the theoretical model offers good predictions even for a moderate number of servers (N = 100). The detailed simulation setup can be found in Appendix B. Table 1 gives examples of the values of i*(p, λ); note that these values in some sense correspond to the maximum delay an average customer could experience in the system.

Theorem 2 characterizes the invariant state of the fluid model, without revealing whether and how a solution of the fluid model reaches it. The next two results state that given any finite initial condition, the solution to the fluid model is unique and converges to the unique invariant state as time goes to infinity.
Theorem 4 (Uniqueness of Solutions to the Fluid Model). Given any initial condition $v^0 \in V^\infty$, the fluid model admits a unique solution $v(v^0, t)$, $t \in [0, \infty)$. Proof. See Section 6.2.
Theorem 5 (Global Stability of Fluid Solutions). Given any initial condition $v^0 \in V^\infty$, and with $v(v^0, t)$ the unique solution to the fluid model, we have $\lim_{t \to \infty} \| v(v^0, t) - v^I \|_w = 0$, where $v^I$ is the unique invariant state of the fluid model given in Theorem 2.

3.3.
Convergence to a fluid solution -finite horizon and steady state. The two theorems in this section justify the use of the fluid model as an approximation to the finite stochastic system. The first theorem states that as N → ∞ and with high probability, the evolution of the aggregate queue length process V N (t) is uniformly close, over any finite time horizon [0, T ], to the unique solution of the fluid model.

Theorem 6 (Convergence to Fluid Solutions over a Finite Horizon).
Consider a sequence of systems, with the number of servers N increasing to infinity. Fix any T > 0, and suppose that the initial conditions $V^{(0,N)}$ converge, in probability, to some $v^0 \in V^\infty$. Then
(28) $\lim_{N \to \infty} \sup_{t \in [0,T]} \| V^N(t) - v(v^0, t) \|_w = 0$, in probability,
where $v(v^0, t)$ is the unique solution to the fluid model given initial condition $v^0$.
Note that if we combine Theorem 6 with the convergence of v(t) to $v^I$ in Theorem 5, we see that the finite system ($V^N$) is approximated by the invariant state of the fluid model $v^I$ after a fixed time period. In other words, we now have
(29) $\lim_{t \to \infty} \lim_{N \to \infty} V^N(t) = v^I$, in probability.
If we switch the order in which the limits over t and N are taken in Eq. (29), we are then dealing with the limiting behavior of the sequence of steady-state distributions (if they exist) as the system size grows large. Indeed, in practice it is often of great interest to obtain a performance guarantee for the steady state of the system, if it were to run for a long period of time. In light of Eq. (29), we may expect that
$\lim_{N \to \infty} \lim_{t \to \infty} V^N(t) = v^I$, in probability.
The following theorem shows that this is indeed the case, i.e., that a unique steady-state distribution of $V^N(t)$ (denoted by $\pi^N$) exists for all N, and that the sequence $\{\pi^N\}$ concentrates on the invariant state of the fluid model ($v^I$) as N grows large.
Theorem 7 (Convergence of Steady-State Distributions to $v^I$). For any N, the process $V^N(t)$ is positive recurrent and admits a unique steady-state distribution $\pi^N$. Moreover, the sequence $\{\pi^N\}$ concentrates on $v^I$ as $N \to \infty$. Proof. The proof is based on the tightness of the sequence of steady-state distributions $\pi^N$, and a uniform rate of convergence of $V^N(t)$ to $v(t)$ over any compact set of initial conditions. The proof is given in Section 7.  4. Probability space and coupling. Starting from this section, the remainder of the paper will be devoted to proving the results summarized in Section 3. We begin by giving an outline of the main proof techniques, as well as the relationships among them, in Section 4.1. The remainder of the current section focuses on constructing probability spaces and appropriate couplings of stochastic sample paths, which will serve as the foundation for later analysis.

4.1.
Overview of technical approach. We begin by coupling the sample paths of processes of interest (e.g., $V^N(\cdot)$) with those of two fundamental processes that drive the system dynamics (Section 4.2). This approach allows us to link deterministically the convergence properties of the sample paths of interest to those of the fundamental processes, on which probabilistic arguments are easier to apply (such as the Functional Law of Large Numbers). Using this coupling framework, we show in Section 5 that almost all sample paths of $V^N(\cdot)$ are "tight" in the sense that, as N → ∞, they are uniformly approximated by a set of Lipschitz-continuous trajectories, which we refer to as the fluid limits, and that all such fluid limits are valid solutions to the fluid model. This result connects the finite stochastic system with the deterministic fluid solutions. Section 6 studies the properties of the fluid model, and provides proofs for Theorems 4 and 5. Note that Theorem 6 (convergence of $V^N(\cdot)$ to the unique fluid solution, over a finite time horizon) now follows from the tightness results in Section 5 and the uniqueness of fluid solutions (Theorem 4). The proof of Theorem 2 stands alone, and will be given in Section 6.1. Finally, the proof of Theorem 7 (convergence of steady-state distributions to $v^I$) is given in Section 7.
The goal of the current section is to formally define the probability spaces and stochastic processes that we will be working with in the rest of the paper. Specifically, we begin by introducing two fundamental processes, from which all other processes of interest (e.g., V N (⋅)) can be derived on a per sample path basis.

Definition 8 (Fundamental Processes and Initial Conditions).
(1) The Total Event Process, {W (t)} t≥0 , defined on a probability space (Ω W , F W , P W ), is a Poisson process with rate λ + 1, where each jump marks the time when an "event" takes place in the system.
(2) The Selection Process, $\{U(n)\}_{n \ge 1}$, defined on a probability space $(\Omega_U, F_U, P_U)$, is a sequence of i.i.d. random variables, uniformly distributed in [0, 1]. This process, along with the current system state, determines the type of each event (i.e., whether it is an arrival, a local token generation, or a central token generation).
For the rest of the paper, we will be working with the product space $(\Omega, F, P) = (\Omega_W \times \Omega_U \times \Omega_0,\; F_W \times F_U \times F_0,\; P_W \times P_U \times P_0)$, where $(\Omega_0, F_0, P_0)$ is the probability space on which the initial conditions are defined. With a slight abuse of notation, we use the same symbols $W(t)$, $U(n)$, and $V^{(0,N)}$ for their corresponding extensions on $\Omega$; e.g., $W(\omega, t) = W(\omega_W, t)$, where $\omega \in \Omega$ and $\omega = (\omega_W, \omega_U, \omega_0)$. The same holds for $U$ and $V^{(0,N)}$.

4.3.
A coupled construction of sample paths. Recall the interpretation of the fluid model drift terms in Section 3.1. Mimicking the expression for $\dot v_i(t)$ in Eq. (15), we would like to decompose $V^N_i(t)$ into three non-decreasing right-continuous processes, $A^N_i(t)$, $L^N_i(t)$, and $C^N_i(t)$, which correspond to the cumulative changes in $V^N_i$ due to arrivals, local service tokens, and central service tokens, respectively. We will define processes $A^N(t)$, $L^N(t)$, $C^N(t)$, and $V^N(t)$ on the common probability space $(\Omega, F, P)$, and couple them with the sample paths of the fundamental processes $W(t)$ and $U(n)$, and the value of $V^{(0,N)}$, for each sample $\omega \in \Omega$. First, note that since the N-station system has N independent Poisson arrival streams, each with rate λ, and service tokens (local and central combined) are generated at a total rate of N, the total event process for this system is a Poisson process with rate N(1 + λ). Hence, we define $W^N(\omega, t)$, the Nth normalized event process, as $W^N(\omega, t) \triangleq \frac{1}{N} W(\omega, Nt)$. Note that $W^N(\omega, t)$ is normalized so that all of its jumps have a magnitude of $\frac{1}{N}$. The coupled construction is intuitive: whenever there is a jump in $W^N(\omega, \cdot)$, we decide the type of event by looking at the value of the corresponding selection variable $U(\omega, n)$ and the current state of the system $V^N(\omega, t)$. Fix ω in Ω, and let $t_k$, k ≥ 1, denote the time of the kth jump in $W^N(\omega, \cdot)$.
We first set all of A N , L N , and C N to zero for t ∈ [0, t 1 ). Starting from k = 1, repeat the following steps for increasing values of k. The partition of the interval [0, 1] used in the procedure is illustrated in Figure 6.
If the value of the selection variable falls in the sub-interval associated with a local service token at a level i ≥ 1, the event corresponds to the completion of a task, due to a local service token, at a station with at least i tasks. We increase $L^N_i(\omega, t)$ by $\frac{1}{N}$ at all such i. Note that i = 0 is not included here, reflecting the fact that if a local service token is generated at an empty station, it is immediately wasted and has no impact on the system.
If the value of the selection variable falls in the sub-interval associated with a central service token, the event corresponds to the generation of a central service token. Since a central service token is always sent to a station with the longest queue length, we will have a task completion at a most-loaded station, unless the system is empty. Let $i^*(t)$ be the last positive coordinate of $V^N(\omega, t^-)$; we increase $C^N_i(\omega, t)$ by $\frac{1}{N}$ for all $i \le i^*(t)$. To finish, we set $V^N(\omega, t)$ according to Eq. (33), and keep the values of all processes unchanged between $t_k$ and $t_{k+1}$.

5. Fluid limits of stochastic sample paths. In this section, we establish the connections between the stochastic sample paths ($V^N(\cdot)$) and the solutions to the fluid model ($v(\cdot)$). Through two important technical results (Propositions 11 and 12), we show that, as N → ∞ and almost surely, any subsequence of $\{V^N(\cdot)\}_{N \ge 1}$ contains a further subsequence that converges uniformly to a solution of the fluid model, over any finite horizon [0, T]. However, note that the results presented in this section do not imply the converse, that any solution to the fluid model corresponds to a limit point of some sequence of stochastic sample paths. This issue will be resolved in the next section, where we show the uniqueness of fluid solutions, which, together with the results in this section, establishes that the fluid solutions fully characterize the transient behavior of $V^N(\cdot)$, for sufficiently large N, over any finite time horizon [0, T].
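For intuition, the coupled construction of Section 4.3 is straightforward to simulate. The sketch below is ours (the names and details are assumptions, not the paper's Appendix B setup): each event of the uniformized process is an arrival w.p. λ/(1+λ), a local service token w.p. (1−p)/(1+λ), or a central service token w.p. p/(1+λ). For large N its sample paths become nearly deterministic, which is the phenomenon formalized in this section.

```python
import random

# A minimal simulation sketch (ours) of the N-station system: arrivals go to
# a uniformly random station; local tokens go to a uniformly random station
# (and are wasted if it is empty); central tokens go to a longest queue.

def simulate(N, p, lam, num_events, seed=0):
    rng = random.Random(seed)
    q = [0] * N                                  # queue length at each station
    a_cut = lam / (1.0 + lam)                    # arrival region of [0, 1]
    l_cut = (lam + (1.0 - p)) / (1.0 + lam)      # local-token region
    for _ in range(num_events):
        u = rng.random()                         # the selection variable U(n)
        j = rng.randrange(N)
        if u < a_cut:                            # arrival
            q[j] += 1
        elif u < l_cut:                          # local token, wasted if empty
            if q[j] > 0:
                q[j] -= 1
        else:                                    # central token -> longest queue
            j = max(range(N), key=q.__getitem__)
            if q[j] > 0:
                q[j] -= 1
    return q
```

For example, simulate(100, 0.05, 0.9, 200000) yields an empirical mean queue length close to the fluid prediction $\sum_{i \ge 1} s^I_i$ (about 4.7 by the recursion of Theorem 2), with no station far above the finite support cutoff (about 12 here, by our reading of $i^*(p,\lambda)$).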
In the sample-path wise construction in Section 4.3, all randomness is attributed to the initial condition V (0,N ) and the two fundamental processes W (⋅) and U (⋅). Everything else, including the system state V N (⋅) that we are interested in, can be derived from a deterministic mapping, given a particular realization of V (0,N ) , W (⋅), and U (⋅). With this in mind, the approach we will take to prove convergence to a fluid limit (i.e., a limit point of V N (⋅) N ≥1 ), over a finite time interval [0, T ], can be summarized as follows.
(1) (Lemma 9) We define a subset C of the sample space Ω, such that P (C) = 1 and the sample paths of W and U are sufficiently "nice" for every ω ∈ C. (2) (Proposition 11) We show that for all ω in this nice set, the derived sample paths V N (⋅) are also "nice", and contain a subsequence converging to a Lipschitz-continuous trajectory v(⋅), as N → ∞. (3) (Proposition 12) We characterize the derivative at any regular point of v(⋅) and show that it is identical to the drift in the fluid model. Hence v(⋅) is a solution to the fluid model.
The proofs will be presented according to the above order.
5.1. Tightness of sample paths over a nice set. We begin by proving the following lemma which characterizes a "nice" set C ⊂ Ω whose elements have desirable convergence properties.
Proof. Based on the Functional Law of Large Numbers for Poisson processes, we can find $C_W \subset \Omega_W$, with $P_W(C_W) = 1$, over which Eq. (35) holds. For Eq. (36), we invoke the Glivenko-Cantelli lemma,12 which states that the empirical measures of a sequence of i.i.d. random variables converge uniformly, almost surely. This implies the existence of some $C_U \subset \Omega_U$, with $P_U(C_U) = 1$, over which Eq. (36) holds.

Definition 10. We call the 4-tuple $X^N \triangleq (V^N, A^N, L^N, C^N)$ the Nth system. Note that all four components are infinite-dimensional processes.13 Consider the space of functions from [0, T] to R that are right-continuous with left limits (RCLL), denoted by D[0, T], equipped with the uniform metric $d(x, y) \triangleq \sup_{t \in [0,T]} |x(t) - y(t)|$; similarly, let $D_{\mathbb{Z}_+}[0, T]$ denote the space of RCLL functions from [0, T] to $\mathbb{R}^{\mathbb{Z}_+}$, equipped with the metric $d_{\mathbb{Z}_+}(x, y) \triangleq \sup_{t \in [0,T]} \|x(t) - y(t)\|_w$, with $\|\cdot\|_w$ defined in Eq. (12).
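The two almost-sure convergence properties defining the set C can be illustrated numerically. The sketch below is ours (illustrative only, not part of the proof): it estimates the uniform gap between a scaled Poisson path and its linear fluid limit (the FLLN behind Eq. (35)), and the Kolmogorov distance between the empirical CDF of uniform selection variables and the uniform CDF (the Glivenko-Cantelli property behind Eq. (36)).

```python
import random

# Numerical illustration (ours) of the FLLN for W^N(t) = W(Nt)/N and of
# Glivenko-Cantelli convergence for the empirical measure of the U(n)'s.

def flln_gap(N, rate=1.9, T=1.0, seed=0):
    """sup_{t <= T} |W^N(t) - rate*t| along one simulated Poisson path."""
    rng = random.Random(seed)
    t, count, gap = 0.0, 0, 0.0
    while True:
        t += rng.expovariate(rate)      # next event of the rate-`rate` process
        if t > N * T:
            break
        s = t / N                       # scaled time
        gap = max(gap, abs(count / N - rate * s),
                  abs((count + 1) / N - rate * s))
        count += 1
    return max(gap, abs(count / N - rate * T))

def gc_gap(n, seed=0):
    """Kolmogorov distance between the empirical CDF of n uniforms and t -> t."""
    rng = random.Random(seed)
    u = sorted(rng.random() for _ in range(n))
    return max(max(abs(x - (i + 1) / n), abs(x - i / n))
               for i, x in enumerate(u))
```

Both gaps shrink as the scaling parameter grows, which is the content of Lemma 9.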
The following proposition is the main result of this section. It shows that, for sufficiently large N, the sample paths are uniformly close to some absolutely continuous trajectory. 12 For an introduction to the Glivenko-Cantelli lemma, see [27] and references therein. 13 If necessary, $X^N$ can be enumerated by writing out its coordinates explicitly.

Proposition 11. Fix T > 0 and ω ∈ C. Then any subsequence of $\{X^N(\omega, \cdot)\}$ contains a further subsequence, $\{X^{N_i}(\omega, \cdot)\}$, that converges to some coordinate-wise L-Lipschitz-continuous function, where L > 0 is a universal constant, independent of the choice of ω, x, and T.
Here the convergence refers to the metric $d_{\mathbb{Z}_+}$ (Eq. (39)). For the rest of the paper, we will refer to such a limit point x, or any subset of its coordinates, as a fluid limit.
Proof outline. Here we outline the main steps of the proof; interested readers are referred to Appendix A.1 for a complete proof. We first show that, for all ω ∈ C and every coordinate i, any subsequence of $X^N_i(\omega, \cdot)$ has a convergent subsequence with a Lipschitz-continuous limit. We then use the coordinate-wise limits to construct a limit point in the space $D_{\mathbb{Z}_+}[0, T]$. To establish coordinate-wise convergence, we use a tightness technique previously used in the literature on multiclass queueing networks (see, e.g., [1]). A key observation is that the total number of jumps in any derived process $A^N$, $L^N$, or $C^N$ cannot exceed that of the event process $W^N(t)$, for any particular sample path. Since $A^N$, $L^N$, and $C^N$ are non-decreasing, we expect their sample paths to be "smooth" for large N, because the sample path of $W^N(t)$ becomes "smooth" for large N, for all ω ∈ C (Lemma 9). More formally, it can be shown that for all ω ∈ C and T > 0, there exist diminishing positive sequences $M_N \downarrow 0$ and $\gamma_N \downarrow 0$ such that the sample path along any coordinate of $X^N$ is $\gamma_N$-approximately Lipschitz-continuous with a uniformly bounded initial condition, where L is the Lipschitz constant and T < ∞ is a fixed time horizon. Using a linear interpolation argument, we then show that sample paths of the above form can be uniformly approximated by a set of L-Lipschitz-continuous functions on [0, T]. We finish by using the Arzelà-Ascoli theorem (sequential compactness), along with the closedness of this set, to establish the existence of a convergent further subsequence along any subsequence (compactness), and that any limit point must also be L-Lipschitz-continuous (closedness). This completes the proof of coordinate-wise convergence. With the coordinate-wise limit points, we then use a diagonal argument involving nested subsequences to construct the limit points of $X^N$ in the space $D_{\mathbb{Z}_+}[0, T]$, which completes the proof.

5.2.
Derivatives of the fluid limits. The previous section established that any sequence of "good" sample paths ( X N (ω, ⋅) with ω ∈ C) eventually stays close to some Lipschitz-continuous, and therefore absolutely continuous, trajectory. In this section, we will characterize the derivatives of v(⋅) at all regular (differentiable) points of such limiting trajectories. We will show, as we expect, that they are the same as the drift terms in the fluid model (Definition 1). This means that all fluid limits of V N (⋅) are in fact solutions to the fluid model.

Proposition 12 (Fluid Limits and Fluid Model).
Fix ω ∈ C and T > 0. Let x be a limit point of some subsequence of $\{X^N(\omega, \cdot)\}$, as in Proposition 11. Let t be a point of differentiability of all coordinates of x. Then, for all i ∈ N, $\dot a_i(t) = \lambda (v_{i-1}(t) - v_i(t))$, $\dot l_i(t) = (1-p)(v_i(t) - v_{i+1}(t))$, and $\dot c_i(t) = g_i(v(t))$, where g was defined in Eq. (18). Proof. We fix some ω ∈ C, and for the rest of this proof we will suppress the dependence on ω in our notation. The existence of Lipschitz-continuous limit points for the given ω ∈ C is guaranteed by Proposition 11. Let $\{X^{N_k}(\cdot)\}_{k=1}^\infty$ be a convergent subsequence such that $\lim_{k \to \infty} d_{\mathbb{Z}_+}(X^{N_k}(\cdot), x) = 0$. We now prove each of the three claims (Eqs. (42)-(44)) separately. The index i is always fixed unless otherwise stated.
Claim 1. $\dot a_i(t) = \lambda (v_{i-1}(t) - v_i(t))$. We can write, for any given ǫ > 0, the increment $a_i(t+\epsilon) - a_i(t)$ as a limit of sums over the jumps of $W^{N_k}(\cdot)$ in $(t, t+\epsilon]$, where $t^N_j$ is defined to be the time of the jth jump in $W^N(\cdot)$. Note that, by the definition of a fluid limit, $A^{N_k}_i(\cdot)$ converges uniformly to $a_i(\cdot)$ on [0, T]. The following lemma bounds the change in $a_i(t)$ on a small time interval.
Lemma 13. Fix i and t. For all sufficiently small ǫ > 0 Proof. While the proof involves heavy notation, it is based on the fact that ω ∈ C: using Lemma 9, Eq. (48) follows from Eq. (45) by applying the convergence properties of W N (t) (Eq. (35)) and U (n) (Eq. (36)).
For the formal proof, fix some ω ∈ C. Also, fix i ≥ 1, t > 0, and ǫ > 0. Since the limiting function x is L-Lipschitz-continuous in all coordinates by Proposition 11, there exists a non-increasing sequence $\gamma_n \downarrow 0$ such that, for all s ∈ [t, t + ǫ] and all sufficiently large k, the sample path $X^{N_k}$ stays within $\gamma_{N_k}$ of x. The above leads to the containments in Eq. (50), for all sufficiently large k.
We have for all sufficiently large k, and any l such that 1 ≤ l ≤ N k , where the first inequality follows from the second containment in Eq. (50), and the second inequality follows from the monotonicity of {η n (t)} in Eq. (52).
We would like to show that for all sufficiently small ǫ > 0, To prove the above inequality, we first claim that for any interval To see this, rewrite the left-hand side of the equation above as Because the magnitude of the indicator function I{⋅} is bounded by 1, we have that Since ω ∈ C, by Lemma 9 we have that for any t < ∞. Combining Eqs. (56)−(59), we have that which establishes Eq. (55). By the same argument, Eq. (60) also holds when t is replaced by t + ǫ. Applying this result to Eq. (53), we have that for all l ≥ 1, where the last inequality is due to the fact that λ < 1. Taking l → ∞ and using the fact that γ l ↓ 0, we have established Eq. (54). Similarly, changing the definition of η n (t) to we can obtain a similar lower bound which together with Eq. (54) proves the claim. Note that if v i (t) = v i−1 (t), the lower bound trivially holds because A N k i (t) is a cumulative arrival process and is hence non-decreasing in t by definition.
Since, by assumption, a(⋅) is differentiable at t, Lemma 13 implies that $\dot a_i(t) = \lambda (v_{i-1}(t) - v_i(t))$, which establishes Claim 1.
Claim 2. $\dot l_i(t) = (1-p)(v_i(t) - v_{i+1}(t))$. Claim 2 can be proved using an approach identical to the one used to prove Claim 1. The proof is hence omitted.
Claim 3. $\dot c_i(t) = g_i(v(t))$. We prove Claim 3 by considering separately the three cases in the definition of $g_i$.
We calculate each of the three terms on the right-hand side of the above equation. By Claim 1, $\dot a_i(t) = \lambda (v_{i-1}(t) - v_i(t))$. To obtain the value of $\dot v_i(t)$, we use the following trick: since $v_i(t) = 0$ and $v_i$ is non-negative, the only possibility is that $\dot v_i(t) = 0$. In this case, the fraction of queues with at least i tasks is zero; hence $v_i$ receives no drift from the local portion of the service capacity, by Claim 2. First consider the case $v_{i-1}(t) \le \frac{p}{\lambda}$. Here the line of argument is similar to the one in Case 1. By Claim 1, $\dot a_i(t) = \lambda (v_{i-1}(t) - v_i(t)) = \lambda v_{i-1}(t)$, and by Claim 2, $\dot l_i(t) = (1-p)(v_i(t) - v_{i+1}(t)) = 0$. Using again the same trick as in Case 1, the non-negativity of $v_i(t)$ and the fact that $v_i(t) = 0$ together imply that we must have $\dot v_i(t) = 0$. Combining the expressions for $\dot a_i(t)$, $\dot l_i(t)$, and $\dot v_i(t)$, we have $\dot c_i(t) = \dot a_i(t) - \dot l_i(t) - \dot v_i(t) = \lambda v_{i-1}(t)$. Intuitively, here the drift due to random arrivals to queues with i − 1 tasks, $\lambda v_{i-1}(t)$, is "absorbed" by the central portion of the service capacity.
Since there is a positive fraction of queues with more than i tasks, it follows that $V^N_i$ is decreased by $\frac{1}{N}$ whenever a central token becomes available. Formally, for some small enough ǫ, there exists K such that $V^{N_k}_i(s) > 0$ for all k ≥ K and s ∈ [t, t + ǫ]. Given the coupling construction, this implies that, for all k ≥ K and s ∈ [t, t + ǫ], $C^{N_k}_i(s) - C^{N_k}_i(t)$ registers a $\frac{1}{N_k}$ increment for every central service token generated in (t, s]. Using the same arguments as in the proof of Lemma 13, we see that the right-hand side of the above equation converges to $(s - t)p + o(\epsilon)$ as $k \to \infty$. Similarly, the boundary condition (14) is automatically satisfied. This concludes the proof of Proposition 12.
6. Properties of the fluid model. In this section, we establish several key properties of the fluid model. We begin by proving Theorem 2 in Section 6.1, which states that the fluid model admits a unique invariant state for each pair of p and λ. Section 6.2 is devoted to proving that the fluid model admits a unique solution v(⋅) for any initial condition v 0 ∈ V ∞ . As a corollary, we show that the fluid solution, v(⋅), depends continuously on the initial condition v 0 , which will be used for proving the steady-state convergence theorem in the next section. Using the uniqueness of fluid solutions and the results from the last section, Theorem 6 is proved in Section 6.3, which establishes the convergence of stochastic sample paths to the unique solution of the fluid model over any finite time horizon, with high probability. Finally, in Section 6.4 we prove that the solutions to the fluid model are globally stable (Theorem 5): any fluid solution v(t) converges to the unique invariant state v I as t → ∞. This suggests that the qualitative properties derived from the invariant state v I serve as a good approximation for the transient behavior of the system, as t → ∞. We note that by the end of this section, we will have established all transient approximation results, which correspond to the path as was illustrated in Figure 5 of Section 3. The other path in Figure 5, namely, the approximation of the steady-state distributions of V N (⋅) by v I , will be dealt with in the next section.
6.1. Invariant state of the fluid model. In this section we prove Theorem 2, which gives explicit expressions for the (unique) invariant state of the fluid model.
Proof of Theorem 2. In this proof we will be working with both $v^I$ and $s^I$, with the understanding that $s^I_i = v^I_i - v^I_{i+1}$ for all i ≥ 0. It can be verified that the expressions given in all three cases are valid invariant states, by checking that $F(v^I) = 0$. We now show that they are indeed unique.
First, note that if p ≥ λ, then $F_1(v) < 0$ whenever $v_1 > 0$. Since $v^I_1 \ge 0$, we must have $v^I_1 = 0$, which by the boundary conditions implies that all other $v^I_i$ must also be zero. This proves case (2) of the theorem. Now suppose that 0 < p < λ. We will first prove case (4). We observe that $F_1(v) > 0$ whenever $v_1 = 0$. Hence $v^I_1$ must be positive. By Eq. (16), this implies that $g_1(v^I) = p$. Substituting $g_1(v^I)$ in Eq. (15), along with the boundary condition $v^I_0 = v^I_1 + 1$, yields $s^I_1 = \frac{\lambda - p}{1 - p}$. Repeating the same argument, we obtain the recursion $s^I_{i+1} = \frac{\lambda s^I_i - p}{1 - p}$, where $i^*(p, \lambda) \triangleq \big\lfloor \log_{\frac{\lambda}{1-p}} \frac{p}{1-\lambda} \big\rfloor$ marks the last coordinate where $s^I_i$ remains non-negative. This proves uniqueness of $s^I_i$ up to $i \le i^*(p, \lambda)$. We can then use the same argument as in case (2) to show that $s^I_i$ must be equal to zero for all $i > i^*(p, \lambda)$. Cases (1) and (3) can be established using arguments similar to those used in proving case (4). This completes the proof.
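The linear recursion $s^I_{i+1} = (\lambda s^I_i - p)/(1-p)$ with $s^I_0 = 1$ (also recalled, in inequality form, in the proof of Theorem 5) can be solved explicitly. A sketch of the algebra, in our notation, assuming $0 < p < \lambda$ and $\lambda + p \ne 1$:

```latex
\[
  s^I_{i+1} \;=\; \frac{\lambda}{1-p}\, s^I_i \;-\; \frac{p}{1-p},
  \qquad\text{with fixed point}\quad
  s^\ast \;=\; \frac{p}{\lambda + p - 1},
\]
\[
  \text{so that}\qquad
  s^I_i \;=\; s^\ast + \Bigl(\frac{\lambda}{1-p}\Bigr)^{i}\,\bigl(1 - s^\ast\bigr),
  \qquad 0 \le i \le i^*(p,\lambda).
\]
% Setting the right-hand side to zero gives (\lambda/(1-p))^i = p/(1-\lambda),
% which recovers i^*(p,\lambda) = \lfloor \log_{\lambda/(1-p)} \, p/(1-\lambda) \rfloor.
```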
Remark (a finite-support property of v(⋅)). As was discussed in Section 3.2, Theorem 2 shows that for all p > 0, the unique invariant state v I admits a finite support (i.e., v I i = 0 for all i ≥ i * for some i * < ∞). It turns out that, when p > 0, this finite-support property also holds for the fluid solution v(t) at all t > 0. For a more elaborate discussion on this topic, see Appendix C.

6.2. Uniqueness of fluid limits & continuous dependence on initial conditions.
We now prove Theorem 4, which states that given an initial condition v 0 ∈ V ∞ , a solution to the fluid model exists and is unique. As a direct consequence of the proof, we obtain an important corollary, that the unique solution v(⋅) depends continuously on the initial condition v 0 . The uniqueness result justifies the use of the fluid approximation, in the sense that the evolution of the stochastic system is close to a single trajectory. The uniqueness along with the continuous dependence on the initial condition will be used to prove convergence of steady-state distributions to v I (Theorem 7).
We note that, in general, the uniqueness of solutions is not guaranteed for a differential equation with a discontinuous drift (see, e.g., [22]). In our case, F(⋅) is discontinuous on the domain V ∞ due to the drift associated with central service tokens (Eq. (18)).
Proof of Theorem 4. The existence of a solution to the fluid model follows from the fact that $V^N$ has a limit point (Proposition 11), and that all limit points of $V^N$ are solutions to the fluid model (Proposition 12). We now show uniqueness. Define $i_p(v)$.15 Let v(t), w(t) be two solutions to the fluid model such that $v(0) = v^0$ and $w(0) = w^0$, with $v^0, w^0 \in V^\infty$. At any regular point t ≥ 0, where all coordinates of v(t) and w(t) are differentiable, assume without loss of generality that $i_p(v(t)) \le i_p(w(t))$, with equality if both are infinite. Let $a^v(\cdot)$ and $a^w(\cdot)$ be the arrival trajectories corresponding to $v(\cdot)$ and $w(\cdot)$, respectively, and similarly for l and c. Since $v_0(t) = v_1(t) + 1$ for all t ≥ 0 by the boundary condition (Eq. (13)), and $\dot v_1 = \dot a^v_1 - \dot l^v_1 - \dot c^v_1$, for notational convenience we will write the drift of $v_1$ in this decomposed form; the same notation will be used for $\dot w(t)$. We then obtain the chain of comparisons in Eq. (71). We first justify the existence of the derivative for all i and all sufficiently small ǫ. Here $\mu_N$ is a measure on $\mathbb{Z}_+$ defined by $\mu_N(i) = 2^{-i}$, $i \in \mathbb{Z}_+$. By Eq. (72) and the dominated convergence theorem, we can exchange the limit and the integration in Eq. (73), which justifies step (a) in Eq. (71).
The specific value of C can be derived after some straightforward algebra, which we isolate in a separate claim. Now suppose that v(0) = w(0). By Gronwall's inequality and Eq. (71), we have $\|v(t) - w(t)\|_w^2 = 0$ for all $t \ge 0$, which establishes the uniqueness of fluid limits on [0, ∞).
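For reference, the form of Gronwall's inequality used in this step, stated in our notation and under the assumption that Eq. (71) supplies the differential bound below:

```latex
% If \phi is absolutely continuous, \phi \ge 0, and \dot\phi(t) \le C\,\phi(t)
% for almost every t \ge 0, then
\[
  \phi(t) \;\le\; \phi(0)\, e^{C t}, \qquad t \ge 0 .
\]
% Applied with \phi(t) = \|v(t) - w(t)\|_w^2: if v(0) = w(0), then \phi(0) = 0,
% and hence \phi(t) = 0 for all t, i.e., v \equiv w.
```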
The following corollary is an easy, but important, consequence of the uniqueness proof.
Corollary 15 (Continuous Dependence on Initial Conditions). Denote by $v(v^0, \cdot)$ the unique solution to the fluid model given initial condition $v^0 \in V^\infty$. If $w^n \in V^\infty$ for all n, and $\|w^n - v^0\|_w \to 0$ as n → ∞, then for all t ≥ 0, $\|v(w^n, t) - v(v^0, t)\|_w \to 0$ as n → ∞. Proof. The continuity with respect to the initial condition is a direct consequence of the inequality in Eq. (76): if $v(w^n, \cdot)$ is a sequence of fluid solutions with initial conditions $w^n \in V^\infty$, and if $\|w^n - v^0\|_w^2 \to 0$ as n → ∞, then for any $t \in [0, \infty)$, $\|v(w^n, t) - v(v^0, t)\|_w^2 \to 0$ as well. This completes the proof.
Remark (v(⋅) versus s(⋅), and the Uniqueness of Fluid Limits). As was mentioned in Section 2.2, we have chosen to work primarily with the aggregate queue length process, $V^N(\cdot)$ (Eq. (2)), instead of the normalized queue length process, $S^N(\cdot)$ (Eq. (3)). Recall that for any finite N, the two processes are related by simple transformations, namely, for all i ≥ 0, $v^N_i = \sum_{j \ge i} s^N_j$ and $s^N_i = v^N_i - v^N_{i+1}$. Therefore, there seems to be no obvious reason to favor one representation over the other when N is finite. However, in the limit as N → ∞, it turns out that the fluid model associated with $V^N(\cdot)$ is much easier to work with in establishing uniqueness of fluid solutions (Theorem 4).
A key to the proof of Theorem 4 is a contraction of the drifts (Eq. (71)), which, surprisingly, would have failed if we had used the alternative state representation s(⋅). The uniqueness result should still hold, but the proof would be much more difficult. The intuitive reason is that the sum of the drifts of the $s_i$'s provided by the centralized service remains constant as long as the system is non-empty; hence, by adding up the coordinates of s, we eliminate many of the drift discontinuities. The fact that such a simple linear transformation can greatly simplify the analysis of an otherwise much more complex dynamical system may be of independent interest.
A more elaborate discussion on this topic, along with a counterexample, is provided in Appendix B.
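The point can be made concrete with a one-line calculation (our notation): with $v_i = \sum_{j \ge i} s_j$, the central-service drift established in the proof of Proposition 12 reads

```latex
\[
  \dot c_i(t) \;=\; p \qquad \text{whenever } v_i(t) > 0,
\]
% i.e., in the v-representation every coordinate at or below the current
% longest-queue level receives the same constant central drift, and the
% discontinuity is confined to the first coordinate with v_i(t) = 0. In the
% s-representation, by contrast, the central drift attaches to whichever
% coordinate currently corresponds to the longest queues, an assignment that
% jumps as the state evolves.
```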
6.3. Convergence to a fluid solution over a finite horizon. We now prove Theorem 6.
Proof of Theorem 6. The proof follows from the sample-path tightness in Proposition 11 and the uniqueness of fluid limits from Theorem 4. By assumption, the sequence of initial conditions $V^{(0,N)}$ converges to some $v^0 \in V^\infty$, in probability. Since the space $V^\infty$ is separable and complete under the $\|\cdot\|_w$ metric, by Skorohod's representation theorem, we can find a probability space $(\Omega_0, F_0, P_0)$ on which $V^{(0,N)} \to v^0$ almost surely. By Proposition 11 and Theorem 4, for almost every ω ∈ Ω, any subsequence of $V^N(\omega, t)$ contains a further subsequence that converges to the unique fluid limit $v(v^0, t)$, uniformly on any compact interval [0, T]. Therefore, for all T < ∞, $\sup_{t \in [0,T]} \|V^N(t) - v(v^0, t)\|_w \to 0$ almost surely, which implies convergence in probability, and Eq. (28) holds.

6.4.
Convergence to the invariant state v I . We will prove Theorem 5 in this section. We switch to the alternative state representation, s(t), where $s_i(t) = v_i(t) - v_{i+1}(t)$ for all i ≥ 0 (Eq. (79)), to study the evolution of a fluid solution as t → ∞. It turns out that a nice monotonicity property of the evolution of s(t), induced by the drift structure, will help establish the convergence to the invariant state. We recall that $s_0(t) = 1$ for all t, and note that, at all points where v is differentiable, $\dot s_i(t) = \dot v_i(t) - \dot v_{i+1}(t)$. Throughout this section, we will use both representations, v(t) and s(t), to refer to the same fluid solution, with their relationship specified in Eq. (79).
The approach we will be using is essentially a variant of the convergence proof given in [3]. The idea is to partition the space S ∞ into dominating classes, and show that (i) dominance in initial conditions is preserved by the fluid model, and (ii) any solution s(t) to the fluid model with an initial condition that dominates, or is dominated by, the invariant state s I converges to s I as t → ∞. Properties (i) and (ii) together imply the convergence of the fluid solution s(t) to s I , as t → ∞, for any finite initial condition. It turns out that such dominance in s is much stronger than a similarly defined relation for v. For this reason we do not use v but instead rely on s to establish the result.
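Although not part of the proof, global stability can also be observed numerically. The sketch below (ours) integrates a forward-Euler discretization of the fluid model in the v-representation; the drift is our reading of the proof of Proposition 12: arrivals contribute $\lambda(v_{i-1} - v_i)$, local tokens $(1-p)(v_i - v_{i+1})$, and the central drift is p on positive coordinates (absorbing at most p of the incoming arrival drift on zero coordinates, per Case 2 there).

```python
# Forward-Euler sketch (ours) of the fluid model in the v-representation.
# Assumed drift, following the proof of Proposition 12:
#   vdot_i = lam*(v_{i-1} - v_i) - (1-p)*(v_i - v_{i+1}) - g_i(v),
# with g_i(v) = p if v_i > 0, and min(p, lam*(v_{i-1} - v_i)) if v_i = 0,
# and the boundary condition v_0 = v_1 + 1 (Eq. (13)).

def fluid_solution(p, lam, T, K=12, dt=0.01):
    """Integrate from the empty state; v[i] approximates v_i(T), 0 <= i <= K."""
    v = [0.0] * (K + 2)                 # v[K+1] is zero padding
    for _ in range(int(T / dt)):
        v[0] = v[1] + 1.0               # boundary condition
        w = v[:]
        for i in range(1, K + 1):
            inflow = lam * (w[i - 1] - w[i])
            g = p if w[i] > 0.0 else min(p, inflow)
            drift = inflow - (1.0 - p) * (w[i] - w[i + 1]) - g
            v[i] = max(0.0, w[i] + dt * drift)
    v[0] = v[1] + 1.0
    return v
```

For p = 0.05 and λ = 0.7, the trajectory settles on $s_i = v_i - v_{i+1}$ matching the recursion of Theorem 2 ($s_1 \approx 0.684$, with the support ending at i = 5), consistent with Theorem 5.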
Let $t_1$ be the first time at which there exists a coordinate for which $s^1(t)$ and $s^2(t)$ are equal and positive. One of the following must be true: (1) $t_1 = \infty$, i.e., $s^1(t) \succ s^2(t)$ for all t ≥ 0, in which case the claim holds; or (2) $t_1 < \infty$.
Hence, we assume $t_1 < \infty$. Let k be the smallest coordinate index such that $s^1(t_1)$ and $s^2(t_1)$ are equal at k, but differ in at least one of the two adjacent coordinates, k − 1 and k + 1. Since $s^1(t) \succ s^2(t)$ for $t < t_1$, at all regular points $t < t_1$ that are close enough to $t_1$ the drifts at coordinate k satisfy $\dot s^1_k(t) \ge \dot s^2_k(t)$, where the last step uses the fact that $s^1_k(t_1) = s^2_k(t_1)$, by the definition of k. Because $s^1(t)$ and $s^2(t)$ are continuous in every coordinate, we can find a time $t_0 < t_1$ such that $s^1_k(t_0) > s^2_k(t_0)$ and $\dot s^1_k(t) \ge \dot s^2_k(t)$ for all regular $t \in (t_0, t_1)$. Since $s^j_k(t_1) = s^j_k(t_0) + \int_{t_0}^{t_1} \dot s^j_k(t)\, dt$ for j = 1, 2, this contradicts the fact that $s^1_k(t_1) = s^2_k(t_1)$, and hence proves the claim.
We are now ready to prove Theorem 5.
Recall, from the expressions for $s^I_i$ in Theorem 2, that $s^I_{i+1} \ge \frac{\lambda s^I_i - p}{1 - p}$ for all i ≥ 0. From Eq. (85) and the fact that $s^u_0 = s^I_0 = 1$, we obtain a drift comparison valid for all regular t ≥ 0; to see why it holds, note that it applies whenever $s^u_1(t) > 0$. To show this, we use the following technical lemma, whose proof is elementary and is omitted.

Since $v^u_1(t)$ is non-negative for all t, we have that $f_1(t)$ is integrable. We then invoke Lemma 18, which yields the convergence of the first coordinate. We now continue by induction. Let $f_i(t)$ be defined analogously, where the last inequality follows by an argument parallel to the one used in Eqs. (86)-(88). Since $v^u_{i+1}(t)$ is non-negative for all t, by Eq. (91), and since $f_i(t)$ is integrable by the induction hypothesis, this implies that $f_{i+1}(t)$ is also integrable. Now, we again invoke Lemma 18, which yields the convergence of coordinate i + 1. This establishes the convergence of $s^u(t)$ to $s^I$ along all coordinates. Using also the fact that $0 \le s^u_i(t) \le 1$ for all i and t, it is not hard to show that this coordinate-wise convergence implies $\lim_{t \to \infty} \|s^u(t) - s^I\|_w = 0$. Using the same set of arguments, we can show that $\lim_{t \to \infty} \|s^l(t) - s^I\|_w = 0$. This completes the proof.
7. Convergence of steady-state distributions. We will prove Theorem 7 in this section, which states that, for all N, the Markov process $V^N(t)$ converges to a unique steady-state distribution, $\pi^N$, as t → ∞, and that the sequence $\{\pi^N\}_{N \ge 1}$ concentrates on the unique invariant state of the fluid model, $v^I$, as N → ∞. This result is of practical importance: it guarantees that key quantities derived from the expressions for $v^I$, such as the average queue length, also serve as accurate approximations for the steady state of the actual (finite) stochastic system.
Note that by the end of this section, we will have established our steady-state approximation results, as illustrated in Figure 5 of Section 3. Together with the transient approximation results established in the previous sections, these conclude the proofs of all approximation theorems in this paper. Before proving Theorem 7, we first give an important proposition, which strengthens the finite-horizon convergence result stated in Theorem 6: we establish a uniform speed of convergence over any compact set of initial conditions. This proposition will be critical to the proof of Theorem 7, which appears later in the section.

7.1. Uniform rate of convergence to the fluid limit. Let the probability space $(\Omega_1, F_1, P_1)$ be the product space of $(\Omega_W, F_W, P_W)$ and $(\Omega_U, F_U, P_U)$.
Intuitively, $(\Omega_1, F_1, P_1)$ captures all exogenous arrival and service information. Fixing $\omega_1 \in \Omega_1$ and $v^0 \in V^M \cap Q^N$, denote by $V^N(v^0, \omega_1, t)$ the resulting sample path of $V^N$ given the initial condition $V^N(0) = v^0$. Also, denote by $v(v^0, t)$ the solution to the fluid model for a given initial condition $v^0$. We have the following proposition.

Proposition 19 (Uniform Rate of Convergence to the Fluid Limit). Fix
where the metric d Z+ (⋅, ⋅) was defined in Eq. (39).
Proof. The proof highlights the convenience of the sample-path-based approach. By the same argument as in Lemma 9, we can find sets C W ⊂ Ω W and C U ⊂ Ω U such that the convergence in Eqs. (35) and (36) holds over C W and C U , respectively, and such that To prove the claim, it suffices to show that We start by assuming that the above convergence fails for some ω 1 ∈ C 1 , which amounts to having a sequence of "bad" sample paths of V N that are always a positive distance away from the corresponding fluid solution with the same initial condition, as N → ∞. We then find nested subsequences within this sequence of bad sample paths, and construct two solutions to the fluid model with the same initial condition, contradicting the uniqueness of fluid model solutions.
Assume that there exists ω 1 ∈ C 1 such that for all i ∈ N. We make the following two observations: By the continuous dependence of fluid solutions on initial conditions (Corollary 15), ṽ a (⋅) must be the unique solution to the fluid model with initial condition ṽ a (0), i.e., 2. Since ω 1 ∈ C 1 , by Propositions 11 and 12, there exists a further subsequence. By the definition of ω 1 (Eq. (97)) and the fact that ω 1 ∈ C 1 , we must have sup t∈[0,T ] ṽ a (t) − ṽ b (t) w > ǫ, which, in light of Eqs. (100) and (101), contradicts the uniqueness of the fluid limit (Theorem 4). This completes the proof.
The following corollary, stated in terms of convergence in probability, follows directly from Proposition 19. The proof is straightforward and is omitted.
7.2. Proof of Theorem 7. We first state a tightness result that will be needed in the proof.
Proposition 21. For every N < ∞ and p ∈ (0, 1], V N (t) is positive recurrent and V N (t) converges in distribution to a unique steady-state distribution π N,p as t → ∞. Furthermore, the sequence {π N,p } ∞ N =1 is tight, in the sense that for any ǫ > 0, there exists M > 0 such that Proof Sketch. The proposition is proved using a stochastic dominance argument, by coupling with the case p = 0. While the notation may seem heavy, the intuition is simple: when p = 0, the system degenerates into a collection of M/M/1 queues with independent arrivals and departures (but possibly correlated initial queue lengths), and it is easy to show that the system is positive recurrent and that the resulting sequence of steady-state distributions is tight as N → ∞. The bulk of the proof is to formally argue that when p > 0, the system behaves "no worse" than when p = 0 in terms of positive recurrence and tightness of steady-state distributions. See Appendix A.3 for a complete proof using this stochastic dominance approach.
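For intuition, the p = 0 calculation behind this tightness claim can be sketched using standard M/M/1 steady-state facts (a sketch; the formal coupling details are suppressed here):

```latex
% p = 0: station i behaves as an M/M/1 queue with arrival rate \lambda and
% service rate 1, so its steady-state queue length E_i is geometric:
\mathbb{P}(E_i = k) = (1-\lambda)\,\lambda^k , \qquad
\mathbb{E}[E_i] = \frac{\lambda}{1-\lambda} .
% Markov's inequality then bounds the normalized total queue length
% uniformly in N, which is exactly the tightness statement:
\mathbb{P}\!\left( \frac{1}{N}\sum_{i=1}^{N} E_i \ge M \right)
\;\le\; \frac{1}{M}\,\mathbb{E}\!\left[ \frac{1}{N}\sum_{i=1}^{N} E_i \right]
\;=\; \frac{\lambda}{(1-\lambda)\,M} \;\longrightarrow\; 0
\quad \text{as } M \to \infty .
```

Since the bound does not depend on N, the same M works for every system size, which is the uniformity that tightness requires.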
Remark. It is worth mentioning that the tightness of π N,p could alternatively be established by defining a Lyapunov function on V N and checking its drift with respect to the underlying embedded discrete-time Markov chain. By applying the Foster-Lyapunov stability criterion, one should be able to prove positive recurrence of V N and give an explicit upper bound on the expected value of V N 1 in steady state. If this expectation is bounded as N → ∞, we will have obtained the desired result by Markov's inequality. We do not pursue this direction in this paper, because we believe that the stochastic dominance approach adopted here provides more insight by exploiting the monotonicity in p of the steady-state queue length distribution.
Proof of Theorem 7. For the rest of the proof, since p is fixed, we will drop p in the superscript of π N,p . By Proposition 21, the sequence of distributions π N is tight, in the sense that for any ǫ > 0, there exists M (ǫ) ∈ N such that for all M ≥ M (ǫ), π N (V M ) ≥ 1 − ǫ, for all N. (This is the same as the usual notion of tightness, because the set V M is compact.) The rest of the proof is based on a classical technique using continuous test functions (see Section 4 of [2]). The continuous dependence on initial conditions and the uniform rate of convergence established previously will be used here. Let C be the space of bounded continuous functions from V ∞ to R. Define the mappings T N (t) and T (t) on C by T N (t)f (v 0 ) = E [f (V N (v 0 , t))] and T (t)f (v 0 ) = f (v(v 0 , t)), for f ∈ C.
With this notation, π N being a steady-state distribution for the Markov process V N (t) is equivalent to having for all t ≥ 0, f ∈ C, Since {π N } is tight, it is sequentially compact under the topology of weak convergence, by Prokhorov's theorem. Let π be the weak limit of some subsequence of {π N }. We will show that for all t ≥ 0, f ∈ C, In other words, π is also a steady-state distribution for the deterministic fluid limit. Since, by Theorem 2, the invariant state of the fluid limit is unique, Eq. (105) will imply that π({v I }) = 1, and this will prove the theorem.
To show Eq. (105), we write We will show that all three terms on the right-hand side of Eq. (106) are zero. Since v(v 0 , t) depends continuously on the initial condition v 0 (Corollary 15), we have T (t)f ∈ C for all t ≥ 0, which along with π N ⇒ π implies that the first term is zero. For the third term, since π N is the steady-state distribution of V N (t), we have ∫ T N (t)f dπ N = ∫ f dπ N ; since π N ⇒ π, this implies that the last term is zero.
To bound the second term, fix some M ∈ N and let K = V M . We have The inequality (a) holds because T (t) and T N (t) are both conditional expectations, and are hence contraction mappings with respect to the sup-norm.
can be shown using an argument involving interchanges of the order of integration, which essentially follows from the uniform rate of convergence to the fluid limit over the compact set K of initial conditions (Corollary 20). We isolate equality (b) in the following claim: where K δ is the δ-extension of K, and ω f (X, δ) is the modulus of continuity of f restricted to set X: To see why inequality (a) holds, recall that by Corollary 20, starting from a compact set of initial conditions, the sample paths of a finite system stay uniformly close to a fluid limit on a compact time interval, with high probability. Inequality (a) then follows from Eq. (102) and the fact that f is bounded. Because K is a compact set, it is not difficult to show that K δ 0 is also compact for any fixed δ 0 > 0. Hence f is uniformly continuous on K δ 0 , and we have which establishes the claim.
Going back, since Eq. (107) holds for any K = V M , M ∈ N, we have, by the tightness of π N , that the middle term in Eq. (106) is also zero. This shows that any limit point π of {π N } concentrates on the unique invariant state v I of the fluid model. This completes the proof of Theorem 7.
8. Conclusions and future work. The overall objective of this paper is to study how the degree of centralization in allocating computing or processing resources impacts performance. This investigation was motivated by applications in server farms, cloud centers, as well as more general scheduling problems with communication constraints. Using a fluid model and associated convergence theorems, we showed that any small degree of centralization induces an exponential performance improvement in the steady-state scaling of system delay, for sufficiently large systems. Simulations show good accuracy of the model even for moderately-sized finite systems (N = 100).
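To give a quick numerical sense of this exponential gap, the two steady-state scalings can be compared directly (a sketch; the function names are ours, and the expressions are the log 1/(1−p) (1/(1 − λ)) and 1/(1 − λ) scalings discussed above):

```python
import math

def centralized_scaling(lam, p):
    """Steady-state queue-length scaling with any centralization p > 0:
    log base 1/(1-p) of 1/(1-lam)."""
    return math.log(1.0 / (1.0 - lam)) / math.log(1.0 / (1.0 - p))

def local_scaling(lam):
    """Usual M/M/1 scaling 1/(1-lam), obtained when p = 0."""
    return 1.0 / (1.0 - lam)

# Even a small p (here 5%) tames the blow-up as lam -> 1.
for lam in (0.9, 0.99, 0.999):
    print(lam, centralized_scaling(lam, 0.05), local_scaling(lam))
```

As λ approaches 1, the fully local scaling blows up linearly in 1/(1 − λ), while the centralized scaling grows only logarithmically.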
There are several interesting and important questions which we did not address in this paper. We have left out the question of what happens when the central server adopts a scheduling policy different from the Longest-Queue-First (LQF) policy considered in this paper. Since scheduling a task from a longest queue may require significant global communication overhead, other scheduling policies that require less global information may be of great practical interest. Some alternatives include:
1. (Random k-Longest-Queues) The central server always serves a task from a queue chosen uniformly at random among the k most loaded queues, where k ≥ 2 is a fixed integer. Note that the LQF policy is a sub-case, corresponding to k = 1.
2. (Random Work-Conserving) The central server always serves a task from a queue chosen uniformly at random among all non-empty queues.
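For concreteness, the central server's choice under these policies can be sketched as follows (hypothetical code; `central_choice` and its tie-breaking conventions are our own illustration, not part of the model's formal definition):

```python
import random

def central_choice(queues, policy="LQF", k=2):
    """Pick the queue index served by a central service token, or None if
    the token is wasted. `queues` is a list of queue lengths."""
    nonempty = [i for i, q in enumerate(queues) if q > 0]
    if not nonempty:
        return None
    if policy == "LQF":
        # serve a longest queue (ties broken uniformly at random)
        longest = max(queues)
        return random.choice([i for i, q in enumerate(queues) if q == longest])
    if policy == "random_k_longest":
        # serve a queue chosen uniformly among the k most loaded queues;
        # if the chosen queue happens to be empty, the token is wasted
        ranked = sorted(range(len(queues)), key=lambda i: -queues[i])
        i = random.choice(ranked[:k])
        return i if queues[i] > 0 else None
    if policy == "random_work_conserving":
        # serve a uniformly random non-empty queue
        return random.choice(nonempty)
    raise ValueError(policy)
```

Note that LQF here is the k = 1 case of the random-k-longest rule, matching the observation above.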
It will be interesting to see whether a similar exponential improvement in the delay scaling is still present under these other policies. Based on the analysis done in this paper, as well as some heuristic calculations using the fluid model, we conjecture that in order for the phase transition phenomenon to occur, a strictly positive fraction of the central service tokens must be used to serve a longest queue. Hence, of the two policies listed above, the former is more likely than the latter to exhibit a similar improvement in delay scaling. Assuming that the LQF policy is used, another interesting question is whether a non-trivial delay scaling can be observed if p, instead of being fixed, is a function of N and decreases to zero as N → ∞. This is again of practical relevance, because having a central server whose processing speed scales linearly with N may be expensive or infeasible for certain applications.
On the modeling end, some of our current assumptions could be restrictive for practical applications. For example, the transmission delays between the local and central stations are assumed to be negligible compared to processing times; this may not be true for data centers that are separated by significant geographic distances. Also, the arrivals and services are modeled by Poisson processes, while in reality more general traffic distributions (e.g., heavy-tailed traffic) are observed. Further work to extend the current model by incorporating these realistic constraints could be of great interest, although obtaining theoretical characterizations seems quite challenging.
Lastly, the surprisingly simple expressions in our results make it tempting to ask whether similar performance characterizations can be obtained for other stochastic systems with partially centralized control laws; insights obtained here may find applications beyond the realm of queueing theory.

APPENDIX A: ADDITIONAL PROOFS
A.1. Proof of Proposition 11. Here we will follow a line of argument in [1] to establish the existence of fluid limits. We begin with some definitions. Recall the uniform metric, d(⋅, ⋅), defined on D[0, T ]: where the distance from a point x to a set E is defined as d(x, E) = inf y∈E d(x, y).
if it is a cluster point of some {x N } N ≥1 such that: Proof. Suppose that the first claim is false. Then, there exists a subsequence However, since E is asymptotically close to E c by assumption, there exists a sequence {y i } ⊂ E c such that Since E c is compact, {y i } has a convergent subsequence with limit ỹ. By Eq. (118), ỹ is a cluster point of {x N i }, and hence a cluster point of E, contradicting Eq. (117). This proves the first claim. The second claim is an easy consequence of the closedness of E c . Let x be any point in C (E). There exists a subsequence We now put the above definition into our context. Define E = {E N } N ≥1 to be a sequence of subsets of D[0, T ] such that We have the following property of E c . Proof. The proof involves an elementary but tedious interpolation argument; see Appendix A.1 of [29] for the details of this argument.
Finally, the following lemma states that all sample paths X N (ω, ⋅) with ω ∈ C belong to E N , with appropriately chosen {M N } N ≥1 and {γ N } N ≥1 .
for some M̃ N ↓ 0. Then for all ω ∈ C and i ∈ Z + , there exist L > 0 and sequences M N ↓ 0 and γ N ↓ 0 such that where E N is defined as in Eq. (119).
Proof. Intuitively, the lemma follows from the uniform convergence of scaled sample paths of the event process W N (ω, t) to (1 + λ) t (Lemma 9), because jumps along any coordinate of the sample path have a magnitude of 1/N , and because all coordinates of X N are dominated in terms of W and the total number of jumps.
Based on our coupling construction (cf. Section 4.3), each coordinate of A N , L N , and C N is monotonically non-decreasing, with a positive jump at time t of magnitude 1/N only if there is a jump of the same size at time t in W (ω, ⋅). Hence for all i ≥ 1, The same inequalities hold for L N and C N . Since by construction, we have, for all i ≥ 1, Since ω ∈ C, W N (ω, ⋅) converges uniformly to (λ + 1) t on [0, T ] by Lemma 9. This implies that there exists a sequence γ̃ N ↓ 0 such that for all N ≥ 1, which, in light of Eq. (125), implies that Finally, note that all coordinates of X N (ω, 0) except for V N (ω, 0) are equal to 0 by definition. The proof is completed by setting M N = 2 i M̃ N , γ N = 3γ̃ N , and L = 3 (λ + 1).
We are now ready to prove Proposition 11.
Proof of Proposition 11. Let us first summarize the key results we have so far: The rest is straightforward: Pick any ω ∈ C. By the above statements, for any i ∈ Z + one can find a subsequence X N j (ω, ⋅) ∞ j=1 and a sequence {y j } ∞ j=1 ⊂ E c such that Since by Lemma 26 (statement 1 above), E c is compact and closed, {y j } ∞ j=1 has a limit point y * in E c , which implies that a further subsequence of X converges to y * . Moreover, since V N (ω, 0) → v 0 and This proves the existence of an L-Lipschitz-continuous limit point y * (⋅) at any single coordinate i of X N (⋅).
Starting with the coordinate-wise limit points, we then use a diagonal argument to construct the limit points of X N in the D Z+ [0, T ] space. Let v 1 (t) be any L-Lipschitz-continuous limit point of V N 1 , so that a subsequence V are the indices for the ith subsequence. Finally, define We claim that v is indeed a limit point of V N in the metric d Z+ (⋅, ⋅). To see this, first note that for all N , Since we constructed the limit point v by repeatedly selecting nested subsequences, this property extends to v, i.e., Since v 1 (0) = v 0 1 and v 1 (t) is L-Lipschitz-continuous, we have that Set N 1 = 1, and let Note that the construction of v implies that N k is well defined and finite for all k. From Eqs. (132)-(133), we have for all k ≥ 2, The existence of the limit points a(t), l(t), and c(t) can be established by an identical argument. This completes the proof.

A.2. Proof of Claim 14.
Proof of Claim 14. Let m i For i = 0, by Eq. (70), we have Combining Eqs. (135) and (136), we have This proves the claim.
Proof of Proposition 21. Fix N > 0 and p ∈ (0, 1]. Let {V N [n]} n≥0 be the discrete-time embedded Markov chain for V N (t), defined by where t n , n ≥ 1, as defined previously, is the time of the nth event taking place in the system (i.e., the time of the nth jump in W N (⋅)), with the convention that t 0 = 0. Let also U N (t) be the sample path obtained if we set p to zero, and let U N [n] be the corresponding embedded chain. We have the following lemma: Proof. We will first interpret the system with p > 0 as one running an optimal scheduling policy over a time-varying channel. The result will then follow from the classical result in Theorem 3 of [23], with a slightly modified arrival assumption, but almost identical proof steps. Recall the Secondary Motivation described in Section 1.2. Here we will use a similar but modified interpretation: instead of thinking of the central server as deciding between serving a most-loaded station versus serving a random station, imagine that the central server always serves a most-loaded station among the ones that are connected to it. The channel between the central server and local stations, represented by a set of connected stations, is independent across different time slots and evolves according to the following dynamics:
1. With probability p, all N stations are connected to the central server.
2. Otherwise, only one station, chosen uniformly at random from the N stations, is connected to the central server.
It is easy to see that, under the above channel dynamics, a system in which a central server always serves a most-loaded station among the connected stations will produce the same distribution for V N [n] as our original system. Note that the case p = 0 is equivalent to scheduling tasks under the same channel condition just described, but with a server that serves a station chosen uniformly at random among all connected stations. The advantage of the above interpretation is that it allows us to treat V N [n] and U N [n] as the aggregate queue length processes that result from applying two different scheduling policies to the same arrival, token generation, and channel processes. In particular, V N 1 [n] corresponds to the resulting normalized total queue length process when a longest-queue-first policy is applied, and U N 1 [n] corresponds to the normalized total queue length process when a fully random scheduling policy is applied. Theorem 3 of [23] states that when the arrival and channel processes are symmetric with respect to the identities of stations, the total queue length process under a longest-queue-first policy is stochastically dominated by that under any other causal policy (i.e., a policy that uses only information from the past). Since the arrival and channel processes are symmetric in our case, and a random scheduling policy falls in the category of causal policies, Theorem 3 of [23] implies the validity of our claim.
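This interpretation lends itself to a small simulation sketch (hypothetical code; names are ours). One service event under the time-varying channel: serving a most-loaded connected station reproduces the p > 0 system, while serving a uniformly random connected station gives the p = 0 comparison system:

```python
import random

def channel(N, p):
    """Sample the set of stations connected to the central server:
    with probability p all N stations, otherwise one uniform station."""
    if random.random() < p:
        return list(range(N))
    return [random.randrange(N)]

def serve_connected(queues, connected, longest_first=True):
    """Apply one central service token over the connected set.
    longest_first=True is the LQF-over-connected (p > 0) system;
    longest_first=False is the random (p = 0 comparison) system."""
    candidates = [i for i in connected if queues[i] > 0]
    if not candidates:
        return queues  # token wasted: no connected non-empty queue
    if longest_first:
        i = max(candidates, key=lambda j: queues[j])
    else:
        i = random.choice(candidates)
    queues = list(queues)
    queues[i] -= 1
    return queues
```

Running both variants on the same channel and arrival sequences is exactly the common-randomness coupling used in the dominance argument.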
There is, however, a minor difference between the assumptions of Theorem 3 of [23] and our setup that we note here. In [23], it is possible that both arrivals and service occur during the same slot, while in our case, each event corresponds either to an arrival to a queue or to the generation of a service token, but not both. This technical difference can be handled easily by discussing separately whether the current slot corresponds to an arrival or a service. The structure of the proof of Theorem 3 in [23] remains unchanged after this modification, and is hence not repeated here.
Having established a stochastic dominance relation between the two (discrete-time) embedded Markov chains, and using the fact that the continuous-time chains V(t) and U(t) have transitions at the jump times of the common underlying Poisson process W (t), it is elementary to check that V(t) is stochastically dominated by U(t); see Appendix A.2 in [29] for the details of this argument.
We now look at the behavior of U N (t). When p = 0, only local service tokens are generated. Hence, it is easy to see that the system degenerates into N individual M/M/1 queues with independent and identical statistics for arrivals and service token generation. In particular, for any station i, the arrivals follow a Poisson process of rate λ and the generation of service tokens follows a Poisson process of rate 1. Since λ < 1, it is not difficult to verify that the process U N (t) is positive recurrent, and admits a unique steady-state distribution, denoted by π N,0 , which satisfies: is a set of i.i.d. geometrically distributed random variables, with Since the process V N (t) is dominated by U N (t), it is easily verified that the former is also positive recurrent. In particular, V N (t) converges in distribution to a unique steady-state distribution π N,p as t → ∞. Combining this with the dominance relation between the two processes, we have that for any initial distribution of V N (0), where the last equality follows from Eq. (139). Since the E i 's are i.i.d. geometric random variables, by Markov's inequality,

In this section, we offer some justification for having chosen to work primarily with the aggregate queue length process, V N (⋅) (Eq. (2)), instead of the normalized queue length process, S N (⋅) (Eq. (3)). The high-level reason is that the fluid model corresponding to the aggregate queue length process, expressed in terms of v(⋅), admits a nice contraction property which does not hold for the fluid model expressed in s(⋅).
A key to the proof of Theorem 4 (uniqueness of fluid solutions) is a contraction property of the drift associated with v(⋅) (Eq. (71)), also known as the one-sided Lipschitz continuity (OSL) condition in the dynamical systems literature (see, e.g., [22]). We first give a definition of OSL that applies to our setting.
Definition 31. Let the coordinates of R ∞ be indexed by Z + so that x = (x 0 , x 1 , x 2 , . . .) for all x ∈ R ∞ . A function H ∶ R ∞ → R ∞ is said to be one-sided Lipschitz-continuous (OSL) over a subset D ⊂ R ∞ , if there exists a constant C, such that for every x, y ∈ D, where the inner product ⟨ ⋅ , ⋅ ⟩ w on R ∞ is defined by What is the usefulness of the above definition in the context of proving uniqueness of solutions to a fluid model? Recall that F(⋅) is the drift of the fluid model, as in Eq. (18), i.e., v̇(t) = F(v(t)) whenever v(⋅) is differentiable at t. Let v(⋅) and w(⋅) be two solutions to the fluid model such that both are differentiable at t, as in the proof of Theorem 4. We have For the state representation based on s(⋅), can one use the same proof technique to show the uniqueness of s(⋅) by working directly with the drift associated with s(⋅)? Recall that so that at all t where v(t) is differentiable, the drift ṡ(t) is given by Interestingly, it turns out that the drift H(⋅), defined in Eq. (148), does not satisfy the one-sided Lipschitz continuity condition in general. We show this fact by inspecting a specific example. To keep the example as simple as possible, we consider a degenerate case.
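To make explicit how OSL is used, here is the standard Grönwall-type step, sketched under the assumption that ∥ ⋅ ∥ w is the norm induced by the inner product ⟨ ⋅ , ⋅ ⟩ w above: if v(⋅) and w(⋅) are two fluid solutions, then at any t where both are differentiable,

```latex
\frac{d}{dt}\,\big\| v(t)-w(t) \big\|_w^{2}
  \;=\; 2\,\big\langle\, v(t)-w(t),\; F(v(t))-F(w(t)) \,\big\rangle_w
  \;\le\; 2C\,\big\| v(t)-w(t) \big\|_w^{2} ,
```

so Grönwall's inequality yields ∥v(t) − w(t)∥ w ² ≤ e^{2Ct} ∥v(0) − w(0)∥ w ², and equal initial conditions force the two solutions to coincide.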
Claim 32. If λ = 0 and p = 1, then H(⋅) is not one-sided Lipschitz-continuous on its domain S ∞ , where S ∞ was defined in Eq. (8) as Proof. We will look at a specific instance where the condition (143) cannot be satisfied for any C.
To prove the claim, it suffices to show that for any value of C, there exist some values of α, β, and ǫ such that Since λ = 0 and p = 1, by the definition of H(⋅) (Eqs. (148) and (149)), we have that for all C and all ǫ < 1/C, for all sufficiently small β, which proves Eq. (151). This completes the proof of the claim.
Claim 32 indicates that a direct proof of uniqueness of fluid solutions using the OSL property of the drift will not work for s(⋅). The uniqueness of s(⋅) should still hold, but the proof can potentially be much more difficult, requiring an examination of all points of discontinuity of H(⋅) on the domain S ∞ .
We now give some intuition as to why the discontinuity in Claim 32 occurs for H(⋅), but not for F(⋅). The key difference lies in the expressions for the drifts due to central service tokens in the two fluid models, namely, g(⋅) (Eq. (16)) for v(⋅) and g s (⋅) (Eq. (149)) for s(⋅). For g s (⋅), note that In other words, the ith coordinate of s(t), s i (t), receives no drift due to the central service tokens if there is a strictly positive fraction of queues in the system with at least i+1 tasks, that is, if s i+1 (t) > 0 (Eq. (156)). However, as soon as s i+1 (t) becomes zero, s i (t) immediately receives a strictly positive amount of drift due to the central service tokens (Eq. (157)), as long as λs i (t) < p. Physically, since the central server always targets the longest queues, this means that when s i+1 (t) becomes zero, the set of queues with exactly i tasks becomes the longest in the system, and begins to receive a positive amount of attention from the central server. Such a sudden change in the drift of s i (t) as a result of s i+1 (t) hitting zero is a main cause of the failure of the OSL condition, and this can be observed in Eq. (155) as β → 0. In general, the type of discontinuity that was exploited in the proof of Claim 32 can happen at infinitely many points in S ∞ . The particular choices of λ = 0 and p = 1 were non-essential, and were only chosen to simplify the calculations.
We now turn to the expression for g(⋅), the drift of v(⋅) due to the central service tokens. We have that (158) g i (v) = p, whenever v i > 0.
Note that the above-mentioned discontinuity in g s (⋅) is not present in g(⋅). This is not surprising: since v i (t) △ = ∑ ∞ j=i s j (t), v i (t) receives a constant amount of drift from the central service token as long as v i (t) > 0, regardless of the values of v j (t), j ≥ i + 1. By adding up the coordinates s j (⋅), j ≥ i, to obtain v i (⋅), we have effectively eliminated many of the drift discontinuities in s(⋅). This is a key reason for the one-sided Lipschitz continuity condition to hold for F(⋅).
We then have This implies that for all C ≥ 0, for all β ≥ 0. Contrasting Eq. (161) with Eq. (155), notice that the ǫ/2 term is no longer present in the expression for the inner product, as a result of the additional "smoothness" of F(⋅). Therefore, unlike in the case of H(⋅), the OSL condition for F does not break down at v a and v b .
The difference in drift patterns described above can also be observed in finite systems. The two graphs in Figure 7 display the same sample path of the embedded discrete-time Markov chain, in the representations of S N and V N , respectively. Here N = 10000, p = 1, and λ = 0.5, with an initial condition S N (0).

In summary, the difficulty of proving the uniqueness of fluid solutions is greatly reduced by choosing an appropriate state representation, v(⋅). The fact that such a simple (linear) transformation from s(⋅) to v(⋅) can create one-sided Lipschitz continuity and greatly simplify the analysis may be of independent interest.

APPENDIX C: A FINITE-SUPPORT PROPERTY OF FLUID SOLUTION AND ITS IMPLICATIONS
In this section, we discuss a finite-support property of the fluid solution v(⋅). Although this property is not directly used in the proofs of other results in our work, we have decided to include it here because it provides important, and somewhat surprising, qualitative insights into the system dynamics.
Proposition 33. Let v 0 ∈ V ∞ , and let v(v 0 , ⋅) be the unique solution to the fluid model with initial condition v(v 0 , 0) = v 0 . If p > 0, then v(v 0 , t) has a finite support for all t > 0, in the sense that Before presenting the proof, we observe that the finite-support property stated in Proposition 33 is independent of the size of the support of the initial condition v 0 ; even if all coordinates of v(t) are strictly positive at t = 0, the support of v(t) immediately "collapses" to a finite number for any t > 0.
A critical assumption in Proposition 33 is that p > 0, i.e., that the system has a non-trivial central server. The "collapse" of v(⋅) into a finite support is essentially due to the fact that the central server always allocates its service capacity to the longest queues in the system. Proposition 33 illustrates that the worst-case queue-length in the system is well under control at all times, thanks to the power of the central server.
Proposition 33 also sheds light on the structure of the invariant state of the fluid model, v I . Recall from Theorem 2 that v I has a finite support whenever p > 0. Since, by the global stability of fluid solutions (Theorem 4), v(v 0 , t) converges to v I as t → ∞, the fact that v(t) admits a finite support for any t > 0 whenever p > 0 provides strong intuition for and partially explains the finite-support property of v I . We now prove Proposition 33.
Proof of Proposition 33. We fix some v 0 ∈ V ∞ , and for the rest of the proof we will write v(⋅) in place of v(v 0 , ⋅). It is not difficult to show, by directly inspecting the drift of the fluid model in Eq. (4), that if we start with an initial condition v 0 with a finite support, then the support remains finite at all times. Hence, we now assume v 0 i > 0 for all i. First, the fact that v 0 ∈ V ∞ (i.e., v 0 1 < ∞) implies that lim i→∞ v 0 i = 0 (Eq. (164)). This is because all coordinates of the corresponding vector s 0 are non-negative, and where the second term on the right-hand side converges to v 0 1 . Assume that v i (t) > 0 for all i, over some small time interval t ∈ [0, s]. Since the magnitude of the drift on any coordinate v i is uniformly bounded from above by λ + 1, and lim i→∞ v 0 i = 0, for any ǫ > 0 we can find s ′ , N > 0 such that for all i ≥ N and t ∈ [0, s ′ ],

1. With probability λ/(1 + λ), a queue is chosen uniformly at random from all queues, and one new task is added to this queue. This corresponds to an arrival to the system.
2. With probability (1 − p)/(1 + λ), a queue is chosen uniformly at random from all queues, and one task is removed from the queue if the queue is non-empty. If the chosen queue is empty, no change is made to the queue length vector. This corresponds to the generation of a local service token.
3. With probability p/(1 + λ), a queue is chosen uniformly at random from the longest queues, and one task is removed from the chosen queue if the queue is non-empty. If all queues are empty, no change is made to the queue length vector. This corresponds to the generation of a central service token.
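The three transition cases above can be sketched in code (an illustrative implementation; the function name is ours):

```python
import random

def step(queues, lam, p):
    """One transition of the embedded discrete-time chain Q[n], following
    the three cases above. `queues` is a list of queue lengths."""
    N = len(queues)
    queues = list(queues)
    u = random.random()
    if u < lam / (1 + lam):
        # arrival: a uniformly chosen queue gains one task
        queues[random.randrange(N)] += 1
    elif u < (lam + (1 - p)) / (1 + lam):
        # local service token: a uniformly chosen queue loses one task, if any
        i = random.randrange(N)
        if queues[i] > 0:
            queues[i] -= 1
    else:
        # central service token: a uniformly chosen longest queue loses one task
        longest = max(queues)
        if longest > 0:
            i = random.choice([j for j in range(N) if queues[j] == longest])
            queues[i] -= 1
    return queues
```

Note that the three branch probabilities, λ/(1 + λ), (1 − p)/(1 + λ), and p/(1 + λ), sum to one.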
To make the connection between the above discrete-time Markov chain Q[n] and the continuous-time Markov process Q(t) considered in this paper, one can show that Q(t) is uniformized and hence the steady-state distribution of Q(t) coincides with that of the embedded discrete-time chain Q[n].
To measure the steady-state queue length distribution seen by a typical task, we sampled from the chain Q[n] in the following fashion: Q[n] was first run for a burn-in period of 1,000,000 time steps, after which 500,000 samples were collected with 20 time steps between adjacent samples, where each sample recorded the current length of a queue chosen uniformly at random from all queues. Denote by S the set of all samples. The average queue length, as marked by the symbol "×" in Figure 4, was computed by taking the average over S. The upper (UE) and lower (LE) ends of the 95% confidence intervals were computed by:
U E △ = min{x ∈ S ∶ no more than 2.5% of the elements of S are strictly greater than x},
LE △ = max{x ∈ S ∶ no more than 2.5% of the elements of S are strictly less than x}.
Note that this notion of confidence interval is meant to capture the concentration of S around the mean, and is somewhat different from that used in the statistics literature for parameter estimation.
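The UE/LE computation can be written directly from the definitions above (an illustrative sketch; `confidence_ends` is our name, and the quadratic-time counting is kept for clarity rather than speed):

```python
def confidence_ends(samples, tail=0.025):
    """UE is the smallest sample value with at most a `tail` fraction of
    samples strictly above it; LE is the symmetric quantity below."""
    n = len(samples)
    ue = min(x for x in samples
             if sum(1 for y in samples if y > x) <= tail * n)
    le = max(x for x in samples
             if sum(1 for y in samples if y < x) <= tail * n)
    return le, ue
```

On a sorted grid of 100 distinct values this recovers the values just inside the 2.5% tails on each side.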
A separate version of the above experiment was run for each value of λ marked in Figure 4, while the level of centralization p was fixed at 0.05 across all experiments.