papers/20241102/2405.15545v2.json
{
"title": "Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations",
"abstract": "In practical distributed systems, workers are typically not homogeneous, and due to differences in hardware configurations and network conditions, can have highly varying processing times. We consider smooth nonconvex finite-sum (empirical risk minimization) problems in this setup and introduce a new parallel method, Freya PAGE, designed to handle arbitrarily heterogeneous and asynchronous computations. By being robust to \u201cstragglers\u201d and adaptively ignoring slow computations, Freya PAGE offers significantly improved time complexity guarantees compared to all previous methods, including Asynchronous SGD, Rennala SGD, SPIDER, and PAGE, while requiring weaker assumptions. The algorithm relies on novel generic stochastic gradient collection strategies with theoretical guarantees that can be of interest on their own, and may be used in the design of future optimization methods. Furthermore, we establish a lower bound for smooth nonconvex finite-sum problems in the asynchronous setup, providing a fundamental time complexity limit. This lower bound is tight and demonstrates the optimality of Freya PAGE in the large-scale regime, i.e., when where is # of workers, and is # of data samples.",
"sections": [
{
"section_id": "1",
"parent_section_id": null,
"section_name": "Introduction",
"text": "In real-world distributed systems used for large-scale machine learning tasks, it is common to encounter device heterogeneity and variations in processing times among different computational units. These can stem from GPU computation delays, disparities in hardware configurations, network conditions, and other factors, resulting in different computational capabilities and speeds across devices (Chen et al., 2016 ###reference_b4###; Tyurin and Richt\u00e1rik, 2023 ###reference_b28###). As a result, some clients may execute computations faster, while others experience delays or even fail to participate in the training altogether.\nDue to the above reasons, we aim to address the challenges posed by device heterogeneity in the context of solving finite-sum nonconvex optimization problems of the form\nwhere can be viewed as the loss of a machine learning model on the th example in a training dataset with samples. Our goal is to find an -stationary point, i.e., a (possibly random) point such that .\nWe focus on the homogeneous distributed setup:\nthere are workers/clients/devices able to work in parallel,\neach worker has access to stochastic gradients , ,\nworker calculates in less or equal to seconds for all\nWithout loss of generality, we assume that . One can think of as an upper bound on the computation time rather than a fixed deterministic time.\nLooking ahead, iteration complexity can be established even if for all (Theorem 1 ###reference_orem1###). We also provide results where the bounds are dynamic and change with every iteration (Section 4.4 ###reference_###). For simplicity of presentation, however, we assume that for , unless explicitly stated otherwise."
},
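To make the problem setup concrete, here is a minimal Python sketch of a finite-sum objective with an ε-stationarity check; the quadratic losses f_i, their dimensions, and all numbers are hypothetical and purely illustrative:

```python
import numpy as np

# Hypothetical losses f_i(x) = 0.5 * x^T A_i x - b_i^T x (illustrative only).
rng = np.random.default_rng(0)
m, d = 100, 5
A = rng.standard_normal((m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2  # symmetrize each A_i
b = rng.standard_normal((m, d))

def grad_fi(i, x):
    # Stochastic gradient oracle: gradient of the i-th loss.
    return A[i] @ x - b[i]

def grad_f(x):
    # Full gradient of f(x) = (1/m) sum_i f_i(x): m oracle calls.
    return np.mean([grad_fi(i, x) for i in range(m)], axis=0)

def is_eps_stationary(x, eps):
    # The goal: a point with gradient norm at most eps.
    return np.linalg.norm(grad_f(x)) <= eps
```

In this notation, one call to `grad_f` costs m oracle calls, which is the accounting used throughout the paper.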
{
"section_id": "1.1",
"parent_section_id": "1",
"section_name": "Assumptions",
"text": "We adopt two weak assumptions, which are standard for the problem (1 ###reference_###) (Fang et al., 2018 ###reference_b6###).\nThe function is -smooth and lower-bounded by .\nsuch that\nWe also consider Assumption 3 ###reference_umption3###. Note that this assumption does not restrict the class of considered functions \nIndeed, if Assumption 2 ###reference_umption2### holds with then Assumption 3 ###reference_umption3### holds with some . If one only wants to rely on Assumptions 1 ###reference_umption1### and 2 ###reference_umption2###, it is sufficient to take . However, Assumption 3 ###reference_umption3### enables us to derive sharper rates, since can be small or even , even if and are large (Szlendak et al., 2021 ###reference_b27###; Tyurin et al., 2023 ###reference_b29###; Kovalev et al., 2022 ###reference_b12###).\nThere exists such that"
},
{
"section_id": "1.2",
"parent_section_id": "1",
"section_name": "Gradient oracle complexities",
"text": "Iterative algorithms are traditionally evaluated based on their gradient complexity. Let us present a brief overview of existing theory.\nThe classical result of Gradient Descent (GD) says that in the smooth nonconvex regime, the number of oracle calls needed to solve problem (1 ###reference_###) is because GD converges in iterations, and calculates the full gradient in each iteration.\nThis was improved to by several variance-reduced methods, including SVRG and SCSG (Allen-Zhu and Hazan, 2016 ###reference_b1###; Reddi et al., 2016 ###reference_b25###; Lei et al., 2017 ###reference_b15###; Horv\u00e1th and Richt\u00e1rik, 2019 ###reference_b7###).\nSince then, various other algorithms, such as SNVRG, SARAH, SPIDER, SpiderBoost, PAGE and their variants, have been developed (Fang et al., 2018 ###reference_b6###; Wang et al., 2019 ###reference_b31###; Nguyen et al., 2017 ###reference_b23###; Li et al., 2021 ###reference_b16###; Zhou et al., 2020 ###reference_b32###; Horv\u00e1th et al., 2022 ###reference_b8###). These methods achieve a gradient complexity of , matching the lower bounds (Fang et al., 2018 ###reference_b6###; Li et al., 2021 ###reference_b16###).\nThat said, in practical scenarios, what often truly matters is the time complexity rather than the gradient complexity (Tyurin and Richt\u00e1rik, 2023 ###reference_b28###). Although the latter metric serves as a natural benchmark for sequential methods, it seems ill-suited in the context of parallel methods."
},
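The complexity counts above can be compared numerically; the following sketch drops all constants and smoothness factors (the function names are ours, not the paper's) and is only meant to show the asymptotic ordering:

```python
# Illustrative oracle-call counts; constants and smoothness factors dropped.
def gd_calls(m, eps):
    # O(1/eps^2) iterations, m oracle calls per iteration.
    return m / eps**2

def svrg_calls(m, eps):
    # O(m + m^(2/3)/eps^2), achieved by SVRG/SCSG-type methods.
    return m + m**(2 / 3) / eps**2

def page_calls(m, eps):
    # O(m + sqrt(m)/eps^2), matching the lower bound (SPIDER/PAGE).
    return m + m**0.5 / eps**2

m, eps = 10**6, 1e-3
# For large m and small eps, each refinement is a strict improvement:
assert page_calls(m, eps) < svrg_calls(m, eps) < gd_calls(m, eps)
```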
{
"section_id": "1.3",
"parent_section_id": "1",
"section_name": "Some previous time complexities",
"text": "Let us consider some examples to provide intuition about time complexities for problem (1 ###reference_###).\nGD with worker (Hero GD). In principle, each worker can solve the problem on their own. Hence, one approach would be to select the fastest client (assuming it is known) and delegate the task to them exclusively.\nA well-known result says that for -smooth objective function (Assumption 1 ###reference_umption1###), GD converges in iterations, where and is the starting point. Since at each iteration the method computes gradients , , the time required to find an -stationary point is seconds.\nGD with workers and equal data allocation (Soviet GD). The above strategy leaves the remaining workers idle, and thus potentially useful computing resources are wasted. A common approach is to instead divide the data into equal parts and assign one such part to each worker, so that each has to compute gradients (assuming for simplicity that is divisible by ). Since at each iteration the strategy needs to wait for the slowest worker, the total time is . Depending on the relationship between and , this could be more efficient or less efficient compared to Hero GD. This shows that the presence of stragglers can eliminate the potential speedup expected from parallelizing the training (Dutta et al., 2018 ###reference_b5###).\nSPIDER/PAGE with worker or workers and equal data allocation (Hero PAGE and Soviet PAGE).\nAs mentioned in Section 1.2 ###reference_###, SPIDER/PAGE can have better gradient complexity guarantees than GD. Using the result of Li et al. (2021 ###reference_b16###), the equal data allocation strategy with workers leads to the time complexity of\nseconds. We refer to this method as Soviet PAGE. In practical regimes, when is small and , this complexity can be better than that of GD. 
Running PAGE on the fastest worker (which we will call Hero PAGE), we instead get the time complexity\nGiven these examples, the following question remains unanswered: what is the best possible time complexity in our setting? This paper aims to answer this question.\nMethod\nWorst-Case Time Complexity\nComment\n\n \n\n\nHero GD \u2009 (Soviet GD)\n\n \n\n\n \u2003()\n\nSuboptimal\n\n \n\n\nHero PAGE \u2009 (Soviet PAGE)\n\n(Li et al., 2021 ###reference_b16###)\n\n \n\n\n \u2003()\n\nSuboptimal\n\n \n\n\nSYNTHESIS\n\n(Liu et al., 2022 ###reference_b17###)\n\n \n\n\n\u2014\n\n \n\n\nLimitations:\n\nbounded gradient assumption,\n\ncalculates the full gradients(a),\n\nsuboptimal.(b)\n\n\n \n\n\nAsynchronous SGD\n\n(Koloskova et al., 2022 ###reference_b11###)\n\n(Mishchenko et al., 2022 ###reference_b19###)\n\n \n\n\n\n\n \n\n\nLimitations:\n\n\u2013bounded variance assumption,\n\nsuboptimal when is small.\n\n\n \n\n\nRennala SGD\n\n(Tyurin and Richt\u00e1rik, 2023 ###reference_b28###)\n\n \n\n\n\n\n \n\n\nLimitations:\n\n\u2013bounded variance assumption,\n\nsuboptimal when is small.\n\n\n\n\n\nFreya PAGE\n\n(Theorems 4.2 ###reference_### and 2 ###reference_orem2###)\n\n\n\n\n\n\n(c)\n\n\n\n\nOptimal in the large-scale regime,\n\ni.e., (see Section 5 ###reference_###)\n\n\n\n\n\nLower bound\n\n(Theorem 4 ###reference_orem4###)\n\n\n\n\n\n\n\n\n\n\n\n\u2014\n\n\n \n\n\nFreya PAGE has universally better guarantees than all previous methods: the dependence on is (unlike Rennala SGD and Asynchronous SGD),\n\nthe dependence on is harmonic-like and robust to slow workers (robust to ) (unlike Soviet PAGE and SYNTHESIS),\n\nthe assumptions are weak, and the time complexity of Freya PAGE is optimal when .\n\n\n\n(a)\n\nIn Line of their Algorithm , they calculate the full gradient, assuming that it can be done for free and not explaining how.\n\n(b)\n\nTheir convergence rates in Theorems and depend on a bound on the delays which in turn depends on the performance of the slowest 
worker. Our method does not depend on the slowest worker if it is too slow (see Section 4.3 ###reference_###), which is required for optimality.\n\n(c)\n\nWe prove better time complexity in Theorem 4.1 ###reference_###, but this result requires the knowledge of in advance, unlike Theorems 4.2 ###reference_### and 2 ###reference_orem2###."
},
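The Hero GD vs. Soviet GD trade-off described above can be illustrated numerically (a sketch with all constants and smoothness factors dropped; the τ values are hypothetical):

```python
# Worst-case time (seconds) to reach eps-stationarity; constants dropped.
# tau: sorted upper bounds tau_1 <= ... <= tau_n on per-gradient compute time.
def hero_gd_time(tau, m, eps):
    # Fastest worker alone: m gradients per iteration, ~1/eps^2 iterations.
    return tau[0] * m / eps**2

def soviet_gd_time(tau, m, eps):
    # Equal split: every iteration waits tau_n * (m/n) seconds for the straggler.
    n = len(tau)
    return tau[-1] * (m / n) / eps**2

tau_uniform = [1.0] * 10
tau_straggler = [1.0] * 9 + [1000.0]
m, eps = 10_000, 0.1
# With homogeneous workers, parallelism wins; one straggler can erase the speedup.
assert soviet_gd_time(tau_uniform, m, eps) < hero_gd_time(tau_uniform, m, eps)
assert soviet_gd_time(tau_straggler, m, eps) > hero_gd_time(tau_straggler, m, eps)
```

The comparison hinges entirely on τ_1 versus τ_n/n, exactly as stated in the text.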
{
"section_id": "2",
"parent_section_id": null,
"section_name": "Contributions",
"text": "We consider the finite-sum optimization problem (1 ###reference_###) under weak assumptions and develop a new method, Freya PAGE.\nThe method works with arbitrarily heterogeneous and asynchronous computations on the clients without making any assumptions about the bounds on the processing times .\nWe show that the time complexity of Freya PAGE is provably better than that of all previously proposed synchronous/asynchronous methods (Table 1 ###reference_###). Moreover, we prove a lower bound that guarantees optimality of Freya PAGE in the large-scale regime ().\nThe algorithm leverages new computation strategies, ComputeGradient (Alg. 2 ###reference_###) and ComputeBatchDifference (Alg. 3 ###reference_###), which are generic and can be used in any other asynchronous method. These strategies enable the development of our new SGD method (Freya SGD); see Sections 6 ###reference_### and H ###reference_###.\nExperiments from Section A ###reference_### on synthetic optimization problems and practical logistic regression tasks support our theoretical results."
},
{
"section_id": "3",
"parent_section_id": null,
"section_name": "The Design of the New Algorithm",
"text": "It is clear that to address the challenges arising in the setup under consideration and achieve optimality, a distributed algorithm has to adapt to and effectively utilize the heterogeneous nature of the underlying computational infrastructure. With this in mind, we now present a new algorithm, Freya PAGE, that can efficiently coordinate and synchronize computations across the devices, accommodating arbitrarily varying processing speeds, while mitigating the impact of slow devices or processing delays on the overall performance of the system.\n(note): is a set of i.i.d. indices that are sampled from , uniformly with replacement,\nFreya PAGE is formalized in Algorithm 1 ###reference_###.\nThe update rule is just the regular PAGE (Li et al., 2021 ###reference_b16###) update: at each iteration, with some (typically small) probability , the algorithm computes the full gradient , and otherwise, it samples a minibatch of size and reuses the gradient estimator from the previous iteration, updated by the cheaper-to-compute adjustment .\nWithin Algorithm 1 ###reference_###, at each iteration we call one of two subroutines: ComputeGradient (Alg. 2 ###reference_###, performing the low-probability step), and ComputeBatchDifference (Alg. 3 ###reference_###, performing the high-probability step). Let us focus on ComputeGradient, designed to collect the full gradient: it takes a point as input and returns \nThere exist many strategies for implementing this calculation, some of which were outlined in Section 1.3 ###reference_###. The most naive one is to assign the task of calculating the whole gradient to a single worker , resulting in a worst-case running time of seconds for ComputeGradient. Another possible strategy is to distribute the functions evenly among the workers; in this case, calculating takes seconds in the worst case.\nClearly, we could do better if we knew in advance. Indeed, let us allocate to each worker a number of functions inversely proportional to . 
This strategy is reasonable \u2013 the faster the worker, the more gradients it can compute. We can show that such a strategy finds in\nseconds in the worst case (see the proof of Theorem 4 ###reference_umption4###). This complexity is better than and (Theorem 21 ###reference_orem21###).\nHowever, this approach comes with two major limitations: i) it requires knowledge of the upper bounds , ii) even if we have access to , the computation environment can be adversarial: theoretically and practically, it is possible that at the beginning the first worker is the fastest and the last worker is the slowest, but after some time, their performances swap. Consequently, the first worker might end up being assigned the largest batch, despite now having the lowest performance. Thus, this strategy is not robust to time-varying speeds.\nNotes: i) the workers can aggregate locally, and the algorithm can call AllReduce once to collect all calculated gradients. ii) By splitting into blocks, instead of one we can ask the workers to calculate the sum of one block in Alg. 2 ###reference_### (and use a similar idea in Alg. 3 ###reference_###)."
},
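The dynamic alternative, in which each worker that finishes immediately pulls the next uncomputed index, can be sketched with a small discrete-event simulation. This is only the scheduling intuition, not the exact Algorithm 2 (which also has to handle sampling with replacement); it shows that the makespan lands within one slowest-job of the harmonic-mean bound m(Σᵢ 1/τᵢ)⁻¹ without the scheduler ever knowing the τᵢ:

```python
import heapq

def collect_full_gradient_time(tau, m):
    """Simulate handing each of the m gradient computations to whichever
    worker becomes free first (the scheduler needs no knowledge of tau)."""
    heap = [(0.0, i) for i in range(len(tau))]  # (time worker i is free, i)
    heapq.heapify(heap)
    finish = 0.0
    for _ in range(m):
        free_at, i = heapq.heappop(heap)
        done_at = free_at + tau[i]       # worker i computes one gradient
        finish = max(finish, done_at)
        heapq.heappush(heap, (done_at, i))
    return finish

tau = [1.0, 2.0, 4.0]
t = collect_full_gradient_time(tau, 1000)
ideal = 1000 / sum(1 / ti for ti in tau)  # harmonic-mean allocation bound
# The dynamic schedule is at most one slowest-job behind the ideal allocation:
assert ideal <= t <= ideal + max(tau)
```

Unlike the static inverse-proportional allocation, this schedule automatically re-balances if worker speeds swap mid-run, which is the robustness property the text asks for.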
{
"section_id": "4",
"parent_section_id": null,
"section_name": "Time Complexities and Convergence Rates",
"text": "Formulas (3 ###reference_###) and (4 ###reference_###) will be used frequently throughout the paper. To lighten up the heavy notation, let us define the following mapping.\nA mapping defined by\nis called the equilibrium time.\nWe let when considering from Section 1 ###reference_###.\nReturning to the algorithm, we guarantee the following iteration complexity.\n{restatable}[Iteration complexity]theoremTHMPAGERENNALAINDEPITER\n\nLet Assumptions 1 ###reference_umption1###, 2 ###reference_umption2### and 3 ###reference_umption3### hold.\nConsider any minibatch size , any probability and let the stepsize be .\nThen, after\niterations of Algorithm 1 ###reference_###, we have , where is sampled uniformly at random from the iterates .\nTheorem 1 ###reference_orem1### states that the iteration complexity is the same as in the optimal PAGE method (Li et al., 2021 ###reference_b16###). Note that we can guarantee convergence even if the upper bounds are unknown or infinite (as long as there exists some worker that can complete computations within a finite time).\nWe now derive time complexity guarantees. With probability , the workers need to supply to the algorithm stochastic gradients at each of the data samples, which by Theorem 3 ###reference_.SSS0.Px1### can be done in at most seconds (up to a log factor). Otherwise, they compute differences of stochastic gradients, which by Theorem 4 ###reference_umption4### takes at most seconds (up to a constant factor).\nThe resulting time complexity is given in the theorem below.\n{restatable}[Time complexity with free parameters and ]theoremTHMPAGERENNALATIME\n\nConsider the assumptions and the parameters from Theorem 1 ###reference_orem1###, plus Assumption 4 ###reference_umption4###. The expected time complexity of Algorithm 1 ###reference_### is at most\nThe first term comes from the preprocessing step, where the full gradient is calculated to obtain . 
Here, we use Assumption 4 ###reference_umption4### that The result (LABEL:eq:compl_p_S_indep) is valid even without this assumption, but at the cost of extra logarithmic factors."
},
{
"section_id": "4.1",
"parent_section_id": "4",
"section_name": "Optimal parameters and",
"text": "The time complexity (LABEL:eq:compl_p_S_indep) depends on two free parameters, and The result below (following from Theorems 6 ###reference_orem6### and 7 ###reference_orem7###) determines their optimal choice.\n[Main result]theoremTHMOPTPSMAIN\n\nConsider the assumptions and parameters from Theorem 1 ###reference_orem1###, plus Assumption 4 ###reference_umption4###. Up to a constant factor, the time complexity (LABEL:eq:compl_p_S_indep) is at least\nand this lower bound is achieved with and where\nResult (9 ###reference_###) is the best possible time complexity that can be achieved with the Freya PAGE method.\nUnfortunately, the final time complexity has non-trivial structure, and the optimal parameters depend on in the general case. If we have access to all parameters and times then (9 ###reference_###), and can be computed efficiently. Indeed, the main problem is to find which can be solved, for instance, by using the bisection method, because is non-decreasing and is non-increasing in ."
},
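The bisection idea mentioned above can be sketched generically: given a non-decreasing function and a non-increasing one, their crossing point is found by repeatedly halving the interval. The two functions below are hypothetical placeholders, not the paper's actual expressions:

```python
def bisect_crossing(phi, psi, lo, hi, tol=1e-9):
    """Find s with phi(s) ~= psi(s), assuming phi is non-decreasing and
    psi is non-increasing on [lo, hi], with phi below psi at lo and
    above it at hi (the monotone situation described in the text)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) < psi(mid):
            lo = mid  # still below the crossing: move right
        else:
            hi = mid  # at or past the crossing: move left
    return (lo + hi) / 2

# Hypothetical example: phi(s) = s and psi(s) = 10/s cross at sqrt(10).
s_star = bisect_crossing(lambda s: s, lambda s: 10 / s, 1e-6, 100.0)
assert abs(s_star - 10 ** 0.5) < 1e-6
```

Each iteration halves the search interval, so the cost is logarithmic in the desired accuracy.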
{
"section_id": "4.2",
"parent_section_id": "4",
"section_name": "Optimal parameters and in the large-scale regime",
"text": "Surprisingly, we can significantly simplify the choice of the optimal parameters and in the large-scale regime, when . This is a weak assumption, since typically the number of data samples is much larger than the number of workers.\n[Main result in the large-scale regime]theoremTHEOREMSIMPLETIMECOMPLEXITYCHOICE\nConsider the assumptions and parameters from Theorem 1 ###reference_orem1###, plus Assumption 4 ###reference_umption4###. Up to a constant factor and smoothness constants, if then the optimal choice of parameters in (LABEL:eq:compl_p_S_indep) is and . For this choice, the expected time complexity of Algorithm 1 ###reference_### is at most\nseconds. The iteration complexity with and is .\nWe cannot guarantee that and is the optimal pair when but it is a valid choice for all Note that (10 ###reference_###) is true if , and it is true up to a log factor if .\nIn light of Theorem 8 ###reference_orem8###, we can further refine Theorem 4.2 ###reference_### if the ratio is known:\nConsider the assumptions and parameters from Theorem 1 ###reference_orem1###, plus Assumption 4 ###reference_umption4###. The expected time complexity of Algorithm 1 ###reference_### is at most seconds, where and\nFor brevity reasons, we will continue working with the result from Theorem 4.2 ###reference_### in the main part of this paper. Note that the optimal parameters do not depend on , and can be easily calculated since the number of functions is known in advance. Hence, our method is fully adaptive to changing and heterogeneous compute times of the workers.\nEven if the bounds are unknown and for all our method converges after iterations, and calculates the optimal number of stochastic gradients equal to ."
},
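A PAGE-style instantiation of the large-scale parameter choice might look as follows. This is a sketch under the assumption that S ≈ √m and p ≈ S/(S + m), which is the standard PAGE-style pairing; the paper's exact constants may differ. Note that the choice depends only on m, never on the times τ_i:

```python
import math

def large_scale_params(m):
    """Sketch of a PAGE-style parameter choice in the large-scale regime.
    Assumption (ours): S = floor(sqrt(m)) and p = S / (S + m) ~ 1/sqrt(m)."""
    S = math.isqrt(m)   # minibatch size ~ sqrt(m)
    p = S / (S + m)     # full-gradient probability ~ 1/sqrt(m)
    return S, p

S, p = large_scale_params(10**6)
# Expected per-iteration oracle cost p*m + (1-p)*S stays on the order of S:
expected_cost = p * 10**6 + (1 - p) * S
assert S == 1000 and 0 < p < 1
assert expected_cost < 3 * S
```

With this pairing, the occasional full gradient (m calls, probability p) and the frequent minibatch (S calls) cost roughly the same on average, which is the balance PAGE is designed around.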
{
"section_id": "4.3",
"parent_section_id": "4",
"section_name": "Discussion of the time complexity",
"text": "Let us use Definition 1 ###reference_orem1### and unpack the second term in the complexity (10 ###reference_###), namely,\nA somewhat similar expression involving and harmonic means was obtained by Tyurin and Richt\u00e1rik (2023 ###reference_b28###); Tyurin et al. (2024 ###reference_b30###) for minimizing expectation under the bounded variance assumption. The term is standard in optimization (Nesterov, 2018 ###reference_b21###; Lan, 2020 ###reference_b13###) and describes the difficulty of the problem (1 ###reference_###). The term represents the average time of one iteration and has some nice properties. For instance, if the last worker is slow and then\n\nso the complexity effectively ignores it. Moreover, if is an index that minimizes then The last formula, again, does not depend on the slowest workers , which are automatically excluded from the time complexity expression. The same reasoning applies to the term Let us now consider some extreme examples which are meant to shed some light on our time complexity result (10 ###reference_###):\nexampleEXAMPLESAMEMAIN[Equally Fast Workers]\n\nSuppose that the upper bounds on the processing times are equal, i.e., for all . Then\nThe complexity in Example 4.3 ###reference_### matches that in (2 ###reference_###), which makes sense since Soviet PAGE is a reasonable method when are equal. Note that the reduction happens without prior knowledge of .\n{restatable}exampleEXAMPLEINFFASTMAIN[Infinitely Fast Worker]\n\nIf , then \n{restatable}exampleEXAMPLEINFSLOWMAIN[Infinitely Slow Workers]\n\nIf , then \n{restatable}exampleEXAMPLESLOWMAIN[Extremely Slow Workers]\n\nSuppose that the times are fixed and for some large enough. Then\nExample 4.3 ###reference_x13### says that the workers whose processing time is too large are ignored,\nwhich supports the discussion preceding the examples."
},
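One plausible formalization of the harmonic-mean-like per-iteration time, consistent with all four examples in this section, is min over j of (S + j)/(Σ_{i≤j} 1/τ_i) for sorted τ. This is our sketch of the behavior; the paper's exact equilibrium-time definition may differ by constants:

```python
def per_iter_time(tau, S):
    """Time to collect S gradients, sweeping a cutoff j over the workers
    sorted by speed; slow workers past the minimizing cutoff are ignored.
    A sketch consistent with the examples; exact constants may differ."""
    best = float("inf")
    inv_sum = 0.0
    for j, t in enumerate(sorted(tau), start=1):
        inv_sum += 1.0 / t                 # harmonic mass of the j fastest
        best = min(best, (S + j) / inv_sum)
    return best

# Equally fast workers: reduces to tau * (1 + S/n), as in Soviet PAGE.
assert per_iter_time([2.0] * 4, 8) == 2.0 * (1 + 8 / 4)
# An extremely slow worker is excluded automatically:
assert per_iter_time([1.0, 1.0, 1e12], 10) == per_iter_time([1.0, 1.0], 10)
```

Making τ_1 → 0 drives the value to zero (the j = 1 term vanishes), matching the Infinitely Fast Worker example as well.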
{
"section_id": "4.4",
"parent_section_id": "4",
"section_name": "Dynamic bounds",
"text": "It turns out that the guarantees from Theorem 4.2 ###reference_### can be generalized to the situation where the compute times are allowed to dynamically change throughout the iterations. Consider the th iteration of Algorithm 1 ###reference_### and assume that worker calculates one in at most seconds \nClearly, (where are upper bounds from the preprocessing step), but can be arbitrarily smaller than (possibly and ).\nConsider the assumptions and parameters from Theorem 1 ###reference_orem1###, plus Assumption 4 ###reference_umption4###. Up to a constant factor, the expected time complexity of Algorithm 1 ###reference_### with and is at most\nwhere is a permutation such that for all \n(This theorem follows from Theorem 14 ###reference_orem14### with the chosen parameters).\nHence, our algorithm is adaptive to the dynamic compute times Let us consider an example with workers. Assume that the first worker is stable: for all , and the second worker is unstable: is small in the first iteration, and in the second iteration. For explanation purposes, we ignore the preprocessing term which is not a factor if is small.\nThen,\nbecause when The time complexity in the second iteration depends on the first (stable) worker only, which is reasonable since and this happens automatically. At the same time, the first term depends on both workers, and this iteration will be faster because"
},
{
"section_id": "4.5",
"parent_section_id": "4",
"section_name": "Comparison with previous strategies from Section 1.3",
"text": "Our time complexities (9 ###reference_###) and (10 ###reference_###) are better than all known previous guarantees if In particular,\n (from (2 ###reference_###)), because (Theorem 21 ###reference_orem21###). In fact, since and , can be arbitrarily larger. We also improve on Hero PAGE (see Remark 22 ###reference_orem22###)."
},
{
"section_id": "4.6",
"parent_section_id": "4",
"section_name": "Comparison with existing asynchronous variance reduced methods",
"text": "Several studies have explored asynchronous variance reduced algorithms. Essentially all of them are variants of the existing synchronous methods discussed in Section 1.2 ###reference_### and depend on the slowest worker in every iteration.\nThere have been several attempts to combine variance reduction techniques with asynchronous computations.\nPerhaps the most relevant baseline is SYNTHESIS, an asynchronous variant of SPIDER (Fang et al., 2018 ###reference_b6###) introduced by Liu et al. (2022 ###reference_b17###). The obtained gradient complexity matches that of SPIDER in terms of dependence on , but scales linearly with the bound on the time performance of the slowest worker, making it non-adaptive to slow computations. Moreover, in Line of their Algorithm , the full gradient is calculated, assuming that it can be done for free.\nLastly, the analysis assumes the gradients to be bounded."
},
{
"section_id": "5",
"parent_section_id": null,
"section_name": "Lower Bound",
"text": "In previous sections, we showed that Freya PAGE converges in at most (9 ###reference_###) or (10 ###reference_###) seconds, providing better time complexity guarantees compared to all previous methods. The natural question is: how good are these time complexities, and can they be improved? In Section J ###reference_###, we formalize our setup and prove Theorems J.3 ###reference_3### and J.4 ###reference_4###, which collectively yield the following lower bound.\nAssume that and take any and such that Then, for any (zero-respecting) algorithm, there exists a function that satisfies and Assumption 2 ###reference_umption2###, such that it is impossible to find an \u2013stationary point faster than in\nseconds using uniform sampling with replacement.\nComparing (10 ###reference_###) and (12 ###reference_###), we see that Freya PAGE is optimal under Assumptions 1 ###reference_umption1### and 2 ###reference_umption2### in the large-scale regime (). Indeed, without Assumption 3 ###reference_umption3###, we have Up to constant factor, (10 ###reference_###) is less or equal to (12 ###reference_###) since\nThis is the first optimal method for the problem we consider, and Theorem 4 ###reference_orem4### gives the first lower bound."
},
{
"section_id": "6",
"parent_section_id": null,
"section_name": "Using the Developed Strategies in Other Methods",
"text": "ComputeBatchDifference (Algorithm 3 ###reference_###) is a generic subroutine and can be used in other methods.\nIn Section H ###reference_###, we introduce Freya SGD, a simple algorithm with update rule , where is a batch size and ComputeBatch (Algorithm 4 ###reference_###) is a minor modification of ComputeBatchDifference. Theorem 17 ###reference_orem17### establishes that Freya SGD converges in \nseconds (where we only keep the dependence on and ). For small , this complexity is worse than (10 ###reference_###), but it can be better, for instance, in the interpolation regime (Schmidt and Roux, 2013 ###reference_b26###; Ma et al., 2018 ###reference_b18###). Freya SGD resembles Rennala SGD (Tyurin and Richt\u00e1rik, 2023 ###reference_b28###), but unlike the latter, it is specialized to work with finite-sum problems (1 ###reference_###) and does not require the \u2013bounded variance assumption on stochastic gradients (which is not satisfied in our setting)."
}
],
"appendix": [
{
"section_id": "Appendix x1",
"parent_section_id": null,
"section_name": "APPENDIX",
"text": ""
},
{
"section_id": "Appendix 1",
"parent_section_id": null,
"section_name": "Appendix A Experiments",
"text": "We compare Freya PAGE with Rennala SGD, Asynchronous SGD, and Soviet PAGE on nonconvex quadratic optimization tasks and practical machine learning problems. The experiments were conducted in Python 3.8 with Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz.\nWe developed a library that emulates the working behavior of thousands of nodes.\n###figure_1### ###figure_2### In the first set of experiments, we compare the algorithms on a synthetic quadratic optimization task generated using the procedure from Section I ###reference_###. To ensure robust and fair comparison, we fix the performance of each worker and emulate our setup by assuming that the th worker requires seconds to calculate a stochastic gradient. For each algorithm, we fine-tune the step size from the set . Uniform sampling with replacement is used across all methods. In Freya PAGE, we set according to Theorem 4.2 ###reference_###. We consider and in each case plot the best run of each method.\nThe results are presented in Figure 1 ###reference_###. It is evident that our new method, Freya PAGE, has the best convergence performance among all algorithms considered. The convergence behavior of Rennala SGD and Asynchronous SGD is very noisy, and both achieve lower accuracy than Freya PAGE. Furthermore, the gap between Freya PAGE and Soviet PAGE widens with increasing because Soviet PAGE is not robust to the presence of slow workers.\n###figure_3### ###figure_4### We now consider the logistic regression problem on the MNIST dataset [LeCun et al., 2010 ###reference_b14###], where each algorithm samples one data point at a time. The results of the experiments are presented in Figure 2 ###reference_###. The difference between Freya PAGE and Rennala SGD/Asynchronous SGD is not as pronounced as in Section A.1 ###reference_###: the methods have almost the same performance for this particular problem. 
However, our method still outperforms its competitors in the low accuracy regime, and is significantly better than Soviet PAGE.\nA critical disadvantage of Rennala SGD and Asynchronous SGD is their noisy behavior, evident in both the plots and reflected in higher variance of the accuracy (see Table 2 ###reference_###). In contrast, the iterations of Freya PAGE in Figure 2 ###reference_### are smooth, and its accuracy exhibits the lowest variance, as shown in Table 2 ###reference_###. This stability can be attributed to the variance-reduction nature of Freya PAGE."
},
{
"section_id": "Appendix 2",
"parent_section_id": null,
"section_name": "Appendix B The Time Complexity Guarantees of Algorithms\u00a03 and 4",
"text": "In addition to ComputeBatchDifference (Algorithm 3 ###reference_###) introduced in the main part, we also analyze ComputeBatch (Algorithm 4 ###reference_###) that is similar to ComputeBatchDifference, but calculates a minibatch of stochastic gradients instead of stochastic gradient differences\n*\nLet\nAs soon as some worker finishes calculating the stochastic gradient difference, it immediately starts computing the difference of the next pair. Hence, by the time all workers will have processed at least\npairs. Assume that\nUsing Lemma 4 ###reference_ma4###, we have for all Thus, we get for all and\nWe can conclude that by the time (5 ###reference_###), the algorithm will have calculated pairs of stochastic gradients and exited the loop.\n\u220e\nThe time needed by Algorithm 4 ###reference_### to calculate is at most\nseconds.\nThe proof of this theorem is essentially the same as the proof of Theorem 4 ###reference_umption4###. The only difference is that Algorithm 4 ###reference_### calculates instead of \n\u220e"
},
{
"section_id": "Appendix 3",
"parent_section_id": null,
"section_name": "Appendix C The Time Complexity Guarantees of Algorithms\u00a02 and 5",
"text": "Instead of Algorithm 2 ###reference_###, we analyze a more general Algorithm 5 ###reference_### that reduces to Algorithm 2 ###reference_### when\ntheoremTHMFULLGRADTIMEFULL\n\nThe expected time needed by Algorithm 5 ###reference_### to calculate is at most\nseconds.\nProof Sketch: While the following proof is technical, the intuition and idea behind it and the algorithm are relatively simple. For simplicity, assume that The set includes all indices that have not yet been calculated. Each worker is assigned a new random index from and starts the calculation of the gradient. At the beginning of the algorithm, when the set is large, the probability that two workers are assigned the same index is very small. Hence, using the same idea as in the proof of Theorem 4 ###reference_umption4###, the workers will calculate stochastic gradients after\nseconds. However, once the size of becomes roughly equal to , the probability that two workers sample the same index increases. In the final steps of the algorithm, we encounter the same issue as in the famous coupon collector\u2019s problem, resulting in an additional factor of because some stochastic gradients will be calculated multiple times.\nLet us define and take any We refer to the workers with the upper bounds such that as \u201cfast\u201d, and the others will be termed \u201cslow\u201d.\nConsider the moment when the algorithm samples from the set to allocate it to one of the \u201cfast\u201d workers ({NoHyper}Line 4 ###reference_4### or 11 ###reference_11###).\nThe probability of sampling such that is currently being calculated by another \u201cfast\u201d worker is\nbecause there are at most \u201cfast\u201d workers and at most distinct stochastic gradients. 
Let us define the set\nA \u201cfast\u201d worker can be \u201cunlucky\u201d and start calculating a stochastic gradient that is being computed by another \u201cfast\u201d worker.\nHowever, with probability at least\nit will take a new index that was not previously taken by another \u201cfast\u201d worker.\nThus, the while loop of the algorithm defines a Markov process that begins with some The size of decreases by one with probability at least in iterations where the algorithm samples from and asks a \u201cfast\u201d worker to calculate the stochastic gradient. Additionally, the size of can decrease by one when a \u201cslow\u201d worker finishes calculating a stochastic gradient from .\nLet be the time required for the Markov process to reach the state Then, the while loop in Algorithm 2 ###reference_### will finish after at most\nseconds because once all non-processed indices from are assigned to the \u201cfast\u201d workers, calculating the remaining stochastic gradients will take at most seconds.\nIt remains to estimate Let be the number of iterations of the while loop where the algorithm samples from and asks a \u201cfast\u201d worker to calculate the stochastic gradient when By the definition of the Markov chain, we have\nbecause with probability at least one of the (\u201clucky\u201d) \u201cfast\u201d workers receives from and decreases the size of by ( has a geometric-like distribution).\nSince at the beginning of the while loop, it is sufficient for the \u201cfast\u201d workers to calculate at most\nstochastic gradients to ensure that (it is possible that some stochastic gradients will be calculated many times). 
Indeed, if for the first moment, then after calculations of stochastic gradients by the \u201cfast\u201d workers, the size of the set will be at most The last \u201cplus one\u201d calculation can only happen when\nThe time required for the \u201cfast\u201d workers to process this number of stochastic gradients is at most\nbecause for this choice of , we have\nwhere is the number of stochastic gradients that worker can calculate in seconds.\nTaking expectation gives\nwhere we use the standard bound on the harmonic series. Thus, the expectation of the total time (15 ###reference_###) can be bounded by\nwhere in the last line we add \nRecall that is a parameter we can choose. Let us take\nUsing Lemma 4 ###reference_ma4###, we have\nand hence\n\u220e"
},
{
"section_id": "Appendix 4",
"parent_section_id": null,
"section_name": "Appendix D Proofs for Algorithm 1 (Freya PAGE)",
"text": "The proofs use the simplified notation from Definition 1 ###reference_orem1###.\nSince the update rule of PAGE coincides with that of Freya PAGE, one can directly apply the iteration complexity results established in Tyurin et al. [2023 ###reference_b29###].\n*\nThe result follows from Theorem 6 of Tyurin et al. [2023 ###reference_b29###], using the parameters from the \u201cUniform With Replacement\u201d line in Table 1 of the same work.\n\u220e\n*\nThe result established in Theorem 1 ###reference_orem1### says that the iteration complexity of the algorithm is\nAt each iteration, with probability , the workers compute differences of stochastic gradients, which by Theorem 4 ###reference_umption4### takes\nseconds. Otherwise, they collect the full gradient, which can be done (Theorem 3 ###reference_.SSS0.Px1###) in\nseconds, where the inequality uses Assumption 4 ###reference_umption4###.\nHence, recalling the notation\nthe (expected) time complexity of the method is\nwhere the term corresponds to the preprocessing step, when the algorithm needs to calculate .\n\u220e\nUp to a constant factor, the time complexity from (LABEL:eq:compl_p_S_indep) is at least\nand attains this value with\nUp to a constant factor, by Theorem 1 ###reference_orem1###, the time complexity of Freya PAGE is\nLet us denote the second term in the above equation as . 
Then for all we have\nand for all\nOtherwise, when and , we have\nHence, up to a constant factor,\nIt can be easily verified that this bound can be attained (up to a constant factor) using the parameter as defined in (18 ###reference_###).\n\u220e\nUp to a constant factor, the minimum of the time complexities (LABEL:eq:compl_p_S_indep) and (17 ###reference_###) is\nand is achieved for\nand where is defined in (18 ###reference_###).\nThis is a simple corollary of Theorem 6 ###reference_orem6###: we take that minimizes (17 ###reference_###).\n\u220e\nIn certain scenarios, we can derive the optimal parameter values explicitly.\nIf , then (up to constants) and are optimal parameters and\nIf , then (up to constants) and are optimal parameters and\nIf , then (up to constants) is an optimal parameter and\nfor any .\nWe first consider the case when .\nSince , we have\nand from the assumption that it follows that\nfor all . Thus,\nand\nIt follows that\nSince , we have\nand thus, according to the result from Theorem 7 ###reference_orem7###, it is sufficient to minimize\nFirst, let us note that\nIf , then\nOtherwise, if , then\nTherefore, the optimal choice is , and by Theorem 6 ###reference_orem6### and inequality (D ###reference_4###), should be chosen to be\nUsing (22 ###reference_###), we can conclude that is optimal, proving the first part of the theorem.\nNext, consider the case when .\nBy the reasoning above, it is sufficient to minimize\nFirst, let us note that\nwhere the last inequality follows from the fact that for any , .\nOn the other hand, if , then\nTherefore, the optimal choice is . Then\nwhile for any\nHence .\nIt remains to prove the third result. Suppose that . Then, the last part of the theorem follows from the fact that\n\u220e\nIn practice, the values of smoothness constants are often unknown. However, the algorithm can still be run with close-to-optimal parameters.\n*\nThe proof is the same as in Theorem 8 ###reference_orem8###. 
Indeed,\nup to a constant factor, the time complexity (LABEL:eq:compl_p_S_indep) can be bounded as\nTherefore, by setting in Theorem 8 ###reference_orem8###, one can easily derive the parameters and that are optimal up to the smoothness constants. The time complexity (10 ###reference_###) can be obtained by applying and to (LABEL:eq:compl_p_S_indep).\n\u220e"
},
{
"section_id": "Appendix 5",
"parent_section_id": null,
"section_name": "Appendix E Freya PAGE with Other Samplings",
"text": "Algorithm 1 ###reference_### can be adapted to accommodate other sampling methods. This brings us to the introduction of Algorithm 6 ###reference_###, which supports virtually any sampling strategy, formalized by the following mapping:\nA sampling is a random mapping , which takes as an input a set of indices and returns a (multi)set , where for all .\nThe only difference is that instead of ComputeBatchDifference (Algorithm 5 ###reference_###), Algorithm 6 ###reference_### uses a new subroutine, called ComputeBatchDifferenceAnySampling (Algorithm 3 ###reference_###).\nFor this algorithm, we can prove the following time complexity guarantees.\ntheoremTHMFULLGRADTIMEFULLGENERIC\n\nThe expected time needed by Algorithm 7 ###reference_### to calculate is at most\nseconds.\nThe proof of this theorem is the same as the proof of Theorem 5 ###reference_###. We only have to multiply (14 ###reference_###) by because Algorithm 3 ###reference_### calculates instead of \n\u220e\nWhile changing the sampling strategy might affect the iteration complexity of the method, for a fixed minibatch size , the time complexity of a single iteration remains unchanged. Thus, having established the expected time needed by the algorithm to perform a single iteration (i.e., to collect a minibatch of stochastic gradients of the required size), one can simply multiply it by the iteration complexity of the method determined for any supported sampling technique to obtain the resulting time complexity.\nWith this in mind, we now analyse the time complexity of Algorithm 6 ###reference_### with different sampling techniques: nice sampling and importance sampling. However, it can be analyzed with virtually any other unbiased sampling [Tyurin et al., 2023 ###reference_b29###].\nNice sampling returns a random subset of fixed cardinality chosen uniformly from . 
Unlike uniform sampling with replacement used in Algorithm 5 ###reference_###, which returns a random multiset (that can include repetitions), the samples obtained by nice sampling are distinct. The iteration complexity of Algorithm 6 ###reference_### with nice sampling is given by the following theorem.\nLet Assumptions 1 ###reference_umption1###, 2 ###reference_umption2### and 3 ###reference_umption3### hold. Choose a minibatch size , a probability and the stepsize\nThen, the number of iterations needed by Algorithm 6 ###reference_### with nice sampling to reach an -stationary point is\nThe result follows from Theorem and Table 1 from [Tyurin et al., 2023 ###reference_b29###].\n\u220e\nConsider the assumptions and parameters from Theorem 10 ###reference_orem10### and Assumption 4 ###reference_umption4###. Up to a constant factor, the time complexity of Algorithm 6 ###reference_### is\nwhere is defined in Definition 1 ###reference_orem1###.\nCompared to Theorem 1 ###reference_orem1###, which uses uniform sampling with replacement, the guarantees for nice sampling are slightly worse: the term from Theorem 1 ###reference_orem1### here is replaced with Ignoring the logarithmic term (), the result from Theorem 11 ###reference_orem11### is equivalent to that in Theorem 1 ###reference_orem1###. Thus, Theorems 4.1 ###reference_### and 4.2 ###reference_### also hold for nice sampling (up to logarithmic factors).\nWe use the same reasoning as in the proof of Theorem 1 ###reference_orem1###. With probability the algorithm calculates the full gradient, which by Theorem 3 ###reference_.SSS0.Px1### requires\nseconds, where we use Assumption 4 ###reference_umption4###. With probability the algorithm calls ComputeBatchDifferenceAnySampling, which by Theorem 7 ###reference_### requires\nseconds. 
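Schematically, writing K for the iteration count, \bar{t} for the expected time to collect one minibatch, and t_0 for the preprocessing time (generic placeholders rather than the paper's notation), the total expected time combines as:

```latex
% Total expected running time: preprocessing plus K iterations,
% each costing at most \bar{t} seconds in expectation.
\mathbb{E}[T_{\mathrm{total}}] \;\le\; t_0 + K \,\bar{t}.
```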
One can obtain the result by multiplying the iteration complexity (25 ###reference_###) by the expected time needed to collect the required number of stochastic gradients per iteration and adding the preprocessing time.\n\u220e\nHere we additionally assume -smoothness of the local objective functions .\nThe functions are -smooth. We denote and\nImportance sampling is a sampling technique that returns a multiset of indices with possible repetitions. Index is included in the multiset with probability\nLet Assumptions 1 ###reference_umption1### and 5 ###reference_umption5### hold. Choose a minibatch size , probability and the stepsize\nThen, the number of iterations needed by Algorithm 1 ###reference_### with importance sampling to reach an -stationary point is\nThe complexity (27 ###reference_###) is nearly identical to (7 ###reference_###) and (25 ###reference_###), with the only difference being the dependence on rather than . Thus, all the results up to constant and logarithmic factors can be derived using the same methodology as that outlined in Section 4 ###reference_###, with the simple substitution of with ."
},
{
"section_id": "Appendix 6",
"parent_section_id": null,
"section_name": "Appendix F Dynamic Bounds",
"text": "As noted in Section 4.4 ###reference_###, the results from Section D ###reference_### can be easily generalized to iteration-dependent processing times.\nConsider the assumptions and the parameters from Theorem 1 ###reference_orem1### and Assumption 4 ###reference_umption4###. Up to a constant factor, the time complexity of Freya PAGE (Algorithm 1 ###reference_###) with iteration-dependent processing times which are defined in Section 4.4 ###reference_###, is at most\nto find an -stationary point, where and are free parameters, and is a permutation such that for all\nThe theorem can be trivially extended to other samplings by changing to the iteration complexities from Theorems 10 ###reference_orem10### and 13 ###reference_orem13###.\nThe reasoning behind this result is exactly the same as in the proof of Theorem 1 ###reference_orem1###. The only difference is that in this more general setting, the expected time per iteration varies across iterations. Therefore, instead of simply multiplying, one needs to sum over the iterations to obtain the total time complexity.\nWe introduce the permutations to ensure that are sorted. When there is no need to introduce them because, throughout the paper, it is assumed (see Section 1 ###reference_###).\n\u220e"
},
{
"section_id": "Appendix 7",
"parent_section_id": null,
"section_name": "Appendix G Examples",
"text": "Here we provide the proofs for the examples from Section 4.3 ###reference_###. We will use the notation\nfor a fixed and all .\n*\nFirst, when for all , then for any , is minimized by taking :\nIt remains to substitute this equality in (10 ###reference_###).\n\u220e\n*\nThe statement follows easily from the fact that for any we have\n\u220e\n*\nThis follows from the fact that for any and any we have\n\u220e\n*\nSuppose that and fix any . Then, since for all , we have\nfor any . Rearranging and adding to both sides of the inequality, we obtain\nmeaning that\nfor any and any . Therefore,\nfor any , which proves the claim.\n\u220e"
},
{
"section_id": "Appendix 8",
"parent_section_id": null,
"section_name": "Appendix H A New Stochastic Gradient Method: Freya SGD",
"text": "In this section, we introduce a new a non-variance reduced SGD method that we call Freya SGD. Freya SGD is closely aligned with Rennala SGD [Tyurin and Richt\u00e1rik, 2023 ###reference_b28###], but Freya SGD does not require the \u2013bounded variance assumption on stochastic gradients.\n(note): is a set of size of i.i.d. indices sampled from uniformly with replacement\nFor all there exists such that for all\nWe define\nLet Assumptions 1 ###reference_umption1###, 5 ###reference_umption5### and 6 ###reference_umption6### hold. Choose minibatch size and stepsize\nwhere\nThen, the number of iterations needed by Algorithm 8 ###reference_### to reach an -stationary point is\nThe iteration complexity can be proved using Corollary 1 and Proposition 3 (i) of Khaled and Richt\u00e1rik [2022 ###reference_b9###] (with ).\n\u220e\nConsider the assumptions and the parameters from Theorem 16 ###reference_orem16###. Up to a constant factor, the time complexity of Freya SGD (Algorithm 8 ###reference_###) is at most\nand is minimized by choosing\nUp to a constant factor, we get\nAt each iteration, the algorithm needs to collect a minibatch of stochastic gradients of size . Multiplying the iteration complexity of Theorem 16 ###reference_orem16### by the time needed to gather such a minibatch (Theorem 5 ###reference_orem5###), the resulting time complexity is\nWe now find the optimal . Assume first that . Assumption 5 ###reference_umption5### ensures that the function is -smooth. Thus, we have \nand is an -solution. Therefore, we can take any\nNow, suppose that . Then we have\nFor all we get and hence\nLet us now consider the case We can additionally assume that and (otherwise, the set that satisfies the condition is empty). We get\nFinally, for we have\nbecause we assume Therefore, an optimal choice is .\n\u220e"
},
{
"section_id": "Appendix 9",
"parent_section_id": null,
"section_name": "Appendix I Setup of the Experiments from Section\u00a0A.1",
"text": "We consider the optimization problem (1 ###reference_###) with nonconvex quadratic functions. The matrices and vectors defining the objective functions are generated using Algorithm 9 ###reference_### with and . The output is used to construct"
},
{
"section_id": "Appendix 10",
"parent_section_id": null,
"section_name": "Appendix J Lower bound",
"text": "The classical lower bound frameworks [Nemirovskij and Yudin, 1983 ###reference_b20###, Carmon et al., 2020 ###reference_b3###, Arjevani et al., 2022 ###reference_b2###, Nesterov, 2018 ###reference_b21###] are not convenient in the analysis of parallel algorithms since they are designed to estimate lower bounds on iteration complexities. In order to obtain time complexity lower bounds, we use the framework by Tyurin and Richt\u00e1rik [2023 ###reference_b28###]. Let us briefly explain the main idea. A more detailed explanation can be found in [Tyurin and Richt\u00e1rik, 2023 ###reference_b28###][Sections 3-6].\nWe start by introducing an appropriate oracle for our setup:\nwhere i.e., is a random index sampled uniformly from the set . We assume that all draws from are i.i.d..\nNext, we define the time multiple oracles protocol, first introduced in [Tyurin and Richt\u00e1rik, 2023 ###reference_b28###].\nLet us explain the behavior of the protocol. At each iteration, the algorithm returns three outputs, based on the available information/gradients: time the index of a worker and a new point Depending on the current time and the state of the worker, three options are possible (see (29 ###reference_9###)). If then the worker is idle. It then starts calculations at the point changes the state from to stores the point in (at which a new stochastic gradient should be calculated), and returns a zero vector. If and then the worker is still calculating a stochastic gradient. It does not change the state and returns a zero vector because the computation has not finished yet. If and the worker can finally return a stochastic gradient at because sufficient time has passed since the worker was idle (). 
Note that with this oracle, the algorithm will never receive the first stochastic gradient before time (assuming that all oracles have the same processing time; in general, we will assume that the processing times are different).\nIn the setting considered in this work, there are oracles that can do calculations in parallel, and an algorithm orchestrates their work. Let the processing times of the oracles be equal to A reasonable strategy would be to call each oracle with then to call the fastest worker with to get the first stochastic gradients as soon as possible, then to call this worker again with to request calculation of the next stochastic gradient, and so on.\nOne unusual thing about this protocol is that the algorithm controls the time. The oracle is designed to force the algorithm to increase the time; otherwise, the algorithm would not receive new information about the function.\nOur goal will be to bound the complexity measure , defined as\nwhere the sequences and are generated by Protocol 10 ###reference_###. Hence, unlike the classical approach, where the lower bounds are obtained for the minimum number of iterations required to find an \u2013stationary point, we seek to find the minimum time needed to get an \u2013stationary point.\nWe consider a class of functions that is standard for our setup [Fang et al., 2018 ###reference_b6###]:\nWe say that if it is -bounded, i.e., , and\nwhere the functions are differentiable and satisfy\nNext, we define the class of algorithms we will analyze.\nLet us consider Protocol 10 ###reference_###. 
We say that a sequence of mappings is a zero-respecting algorithm, if\nfor all and\nFor all and \nwhere and are defined as and\nfor all where\nWe denote the set of all algorithms with these properties as\nIn the above definition, property 1 ###reference_.i1### defines the domain of the mappings , and property 2 ###reference_.i2### ensures that our algorithm does not \u201ccheat\u201d and does not \u201ctravel into the past\u201d: the time can only go forward (see [Tyurin and Richt\u00e1rik, 2023 ###reference_b28###][Section 4, Definition 4.1]). Property 3 ###reference_.i3### is a standard assumption for zero-respecting algorithms [Arjevani et al., 2022 ###reference_b2###] that is satisfied by virtually all algorithms, including Adam [Kingma and Ba, 2015 ###reference_b10###], SGD, PAGE [Li et al., 2021 ###reference_b16###] and Asynchronous SGD.\nIt remains to define an oracle class for our problem that employs oracles from (29 ###reference_9###). We design oracles that emulate the real behavior of the workers.\nFor any the oracle class returns oracles , , where the mappings are defined in (29 ###reference_9###).\nThe analysis uses a standard function, commonly employed to derive lower bounds in the nonconvex regime. First, let us define\nOur choice of the underlying function follows the construction introduced in Carmon et al. [2020 ###reference_b3###], Arjevani et al. [2022 ###reference_b2###]: for any define\nwhere\nThroughout the proof, we only rely on the following properties of the function:\nThe function satisfies:\nwhere\nThe function is \u2013smooth, where\nFor all where\nFor all\nFor all if then\nWe are ready to present the main results of this section.\ntheoremTHEOREMLOWERBOUND\n\nLet us consider Protocol 10 ###reference_###. Without loss of generality, assume that and take any and such that and Then, for any algorithm there exists a function and computation oracles such that \nwhere and\nThe quantities and are universal constants. 
The sequences and are defined in Protocol 10 ###reference_###.\n(Step 1: Construction of a hard problem)\nWe start our proof by constructing an appropriate function Let us fix any and define such that\nfor all and for all and Essentially, all information about the function is in the first function. Note that\nThen, using Lemma 1 ###reference_ma1###, we have\nTaking\nwe ensure that\nwhere in the inequalities we use Lemma 1 ###reference_ma1### and the choice of Now, inequalities (32 ###reference_2###) and (J.3 ###reference_131###) imply that , and hence, using Lemma 1 ###reference_ma1### again, we get\nLet us take\nThen\nand\nThe last inequality means that while for all all gradients are large. The function is a zero-chain: due to Lemma 1 ###reference_ma1###, we know that for all This implies that we can discover at most one new non-zero coordinate by calculating the gradient of the function Since the algorithm is zero-respecting, by definition it cannot return a point with progress greater than that of the vectors returned by the oracles. In view of this, it is necessary to calculate the gradient of at least times to get\nThe gradient of can be calculated if and only if where (see (29 ###reference_9###)).\nConsider worker and define to be the number of draws from until the index is sampled. Clearly, is a Geometric random variable with parameter Recall that the workers can do the computations in parallel, and by the design of the oracles, worker needs at least seconds to calculate stochastic gradients.\nHence, it is impossible to calculate before the time\nOnce the algorithm calculates for the first time, it needs to do so at least times more to achieve Thus, one should wait at least\nseconds, where . We can conclude that\n(Step 2: The Chernoff Method)\nThe theorem\u2019s proof is now reduced to the analysis of the concentration of \nUsing the Chernoff method, for all , we have\nIndependence gives\nLet us consider the term in the last bracket separately. 
For a fixed , we have\nWe now consider the last term. Due to the independence, we have\nwhere we use the cumulative distribution function of a geometric random variable and temporarily define Using Lemma 3 ###reference_ma3###, we get\nLet us take\nwhere for all and assume that is the largest index such that Then, Lemma 4 ###reference_ma4### gives\nwhere we let . Therefore, and (39 ###reference_9###) gives\nUsing (40 ###reference_0###), we get for all \nSince for all we obtain\nSubstituting the last inequality to (38 ###reference_8###) gives\nWe now take to obtain\nSubstituting this inequality in (35 ###reference_5###) and (J.3 ###reference_141###) gives\nTherefore,\nfor all\nand Using the bound on the probability with and (34 ###reference_4###), we finally conclude\nfor all\nIt remains to use the conditions and from the theorem to finish the proof.\n\u220e\ntheoremTHEOREMLOWERBOUNDSECOND\n\nLet us consider Protocol 10 ###reference_###. Without loss of generality, assume that and take any and such that Then, for any algorithm there exists a function and computation oracles such that \nwhere and\nThe quantities and are universal constants. The sequences and are defined in Protocol 10 ###reference_###.\nWe use the same construction as in the proof of Theorem of Li et al. [2021 ###reference_b16###]. Let us consider\nfor all where and , are parameters that we define later.\nThen\nfor all and\nwhere we take\nThus, we have Now, let\nand choose any (one can always take ).\nThen\nLet us fix some time to be determined later and recall that the workers can calculate the stochastic gradients in parallel. Suppose that up to time , fewer than stochastic gradients have been computed. Then, since the algorithm is a zero-respecting algorithm, it cannot have discovered more than coordinates. Thus, at least coordinates are equal to and\nfor all . 
Therefore, substituting our choice of and using the assumptions from the theorem, we get\nIt remains to find the time. The workers work in parallel, so in seconds they calculate at most\nstochastic gradients. Now, let\nand define for all Assume that is the largest index such that Using Lemma 4 ###reference_ma4###, we have\nSince for all and for all we obtain\nTherefore, it is possible to calculate at most stochastic gradients in time (43 ###reference_3###) and we can finally conclude that inequality (42 ###reference_2###) holds for any time that is less than or equal to (43 ###reference_3###).\n\u220e"
},
{
"section_id": "Appendix 11",
"parent_section_id": null,
"section_name": "Appendix K Useful Identities and Inequalities",
"text": "It holds that , and .\nConsider a sequence . Then\nWe prove the result by induction. It is clearly true for : Now, assume that that it holds for , meaning that\nMultiplying both sides of the inequality by gives\nsince for all \n\u220e\nLet us consider the equilibrium time mapping from Definition 1 ###reference_orem1###. Then\n\n\nfor all for all and\nAssume that and then\nand\nThus, can be arbitrarily smaller than and This implies that our new complexities (9 ###reference_###) and (10 ###reference_###) can be arbitrarily better than and\nConsider a sequence and fix some . For all , define\nLet be the largest index such that For we have\nLet be any index such that For we have\nLet be the smallest index such that . Then\nLet be any index such that Then\nWe first prove the first inequality.\nSuppose that . Then , meaning that\nThis implies the following series of inequalities:\nThe proof of the second inequality is the same, but with non-strict inequalities.\nTo prove the third inequality, first suppose that . Then as needed. On the other hand, implies\nThe proof of the fourth inequality is the same, but with non-strict inequalities.\n\u220e"
},
{
"section_id": "Appendix x2",
"parent_section_id": null,
"section_name": "NeurIPS Paper Checklist",
"text": "Claims\nQuestion: Do the main claims made in the abstract and introduction accurately reflect the paper\u2019s contributions and scope?\nAnswer: [Yes]\nJustification: Sections 4 ###reference_### and 5 ###reference_###, Table 1 ###reference_###\nGuidelines:\nThe answer NA means that the abstract and introduction do not include the claims made in the paper.\nThe abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.\nThe claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.\nIt is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.\nLimitations\nQuestion: Does the paper discuss the limitations of the work performed by the authors?\nAnswer: [Yes]\nJustification: Sections 3 ###reference_###, 4.1 ###reference_###, and 5 ###reference_###\nGuidelines:\nThe answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.\nThe authors are encouraged to create a separate \u201dLimitations\u201d section in their paper.\nThe paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.\nThe authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. 
In general, empirical results often depend on implicit assumptions, which should be articulated.\nThe authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.\nThe authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.\nIf applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.\nWhile the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren\u2019t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. 
Reviewers will be specifically instructed to not penalize honesty concerning limitations.\nTheory Assumptions and Proofs\nQuestion: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?\nAnswer: [Yes]\nJustification: Sections 1.1 ###reference_###, 4 ###reference_###, and the appendix\nGuidelines:\nThe answer NA means that the paper does not include theoretical results.\nAll the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.\nAll assumptions should be clearly stated or referenced in the statement of any theorems.\nThe proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.\nInversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material.\nTheorems and Lemmas that the proof relies upon should be properly referenced.\nExperimental Result Reproducibility\nQuestion: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?\nAnswer: [Yes]\nJustification: Section A ###reference_###\nGuidelines:\nThe answer NA means that the paper does not include experiments.\nIf the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.\nIf the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.\nDepending on the contribution, reproducibility can be accomplished in various ways. 
For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.\nWhile NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example:\nIf the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.\nIf the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.\nIf the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).\nWe recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. 
In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.\nOpen access to data and code\nQuestion: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?\nAnswer: [Yes]\nJustification: In the supplementary materials.\nGuidelines:\nThe answer NA means that the paper does not include experiments requiring code.\nPlease see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy ###reference_onPolicy###) for more details.\nWhile we encourage the release of code and data, we understand that this might not be possible, so \u201cNo\u201d is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).\nThe instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy ###reference_onPolicy###) for more details.\nThe authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.\nThe authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. 
If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.\nAt submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).\nProviding as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.\nExperimental Setting/Details\nQuestion: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?\nAnswer: [Yes]\nJustification: Section A ###reference_###\nGuidelines:\nThe answer NA means that the paper does not include experiments.\nThe experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.\nThe full details can be provided either with the code, in appendix, or as supplemental material.\nExperiment Statistical Significance\nQuestion: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?\nAnswer: [Yes]\nJustification: We provide two plots for each algorithm to ensure the soundness of our experiments. 
We also calculate the variance of accuracies in Table 2 ###reference_###.\nGuidelines:\nThe answer NA means that the paper does not include experiments.\nThe authors should answer \u201cYes\u201d if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.\nThe factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).\nThe method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)\nThe assumptions made should be given (e.g., Normally distributed errors).\nIt should be clear whether the error bar is the standard deviation or the standard error of the mean.\nIt is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar rather than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.\nFor asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. 
negative error rates).\nIf error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.\nExperiments Compute Resources\nQuestion: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?\nAnswer: [Yes]\nJustification: Section A ###reference_###\nGuidelines:\nThe answer NA means that the paper does not include experiments.\nThe paper should indicate the type of compute workers (CPU or GPU), internal cluster, or cloud provider, including relevant memory and storage.\nThe paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.\nThe paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn\u2019t make it into the paper).\nCode Of Ethics\nQuestion: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ###reference_###?\nAnswer: [Yes]\nJustification: We have thoroughly reviewed the code of ethics and we can confidently state that our paper is fully compliant with it.\nGuidelines:\nThe answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.\nIf the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.\nThe authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).\nBroader Impacts\nQuestion: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?\nAnswer: [N/A]\nJustification: Our paper considers a mathematical topic.\nGuidelines:\nThe answer 
NA means that there is no societal impact of the work performed.\nIf the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.\nExamples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.\nThe conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate deepfakes faster.\nThe authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.\nIf there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).\nSafeguards\nQuestion: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?\nAnswer: 
[N/A]\nJustification:\nGuidelines:\nThe answer NA means that the paper poses no such risks.\nReleased models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.\nDatasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.\nWe recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.\nLicenses for existing assets\nQuestion: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?\nAnswer: [Yes]\nJustification: Section A ###reference_###\nGuidelines:\nThe answer NA means that the paper does not use existing assets.\nThe authors should cite the original paper that produced the code package or dataset.\nThe authors should state which version of the asset is used and, if possible, include a URL.\nThe name of the license (e.g., CC-BY 4.0) should be included for each asset.\nFor scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.\nIf assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets ###reference_thcode.com/datasets### has curated licenses for some datasets. 
Their licensing guide can help determine the license of a dataset.\nFor existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.\nIf this information is not available online, the authors are encouraged to reach out to the asset\u2019s creators.\nNew Assets\nQuestion: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?\nAnswer: [Yes]\nJustification: In the supplementary materials.\nGuidelines:\nThe answer NA means that the paper does not release new assets.\nResearchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.\nThe paper should discuss whether and how consent was obtained from people whose asset is used.\nAt submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.\nCrowdsourcing and Research with Human Subjects\nQuestion: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?\nAnswer: [N/A]\nJustification:\nGuidelines:\nThe answer NA means that the paper does not involve crowdsourcing nor research with human subjects.\nIncluding this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.\nAccording to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.\nInstitutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects\nQuestion: Does the paper describe potential risks incurred 
by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?\nAnswer: [N/A]\nJustification:\nGuidelines:\nThe answer NA means that the paper does not involve crowdsourcing nor research with human subjects.\nDepending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.\nWe recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.\nFor initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review."
}
],
"tables": {
"1": {
"table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S1.T1.40.10.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S1.T1.18.9\" style=\"font-size:90%;\">Comparison of the <em class=\"ltx_emph ltx_font_italic\" id=\"S1.T1.18.9.1\">worst-case time complexity</em> guarantees of methods that work with asynchronous computations in the setup from Section\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#S1\" title=\"1 Introduction \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\"><span class=\"ltx_text ltx_ref_tag\">1</span></a> (up to smoothness constants). We assume that is the bound on the times required to calculate one stochastic gradient by worker and Abbr: # of data samples, # of workers, error tolerance.</span></figcaption>\n<p class=\"ltx_p ltx_align_center\" id=\"S1.T1.38\"><span class=\"ltx_text ltx_inline-block\" id=\"S1.T1.38.20\" style=\"font-size:50%;width:433.6pt;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S1.T1.38.20.20.20\" style=\"width:809.1pt;height:464.3pt;vertical-align:-43.1pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S1.T1.38.20.20.20.20\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20\">\n<span class=\"ltx_tabular ltx_align_top\" id=\"S1.T1.38.20.20.20.20.20.20\">\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.21\">\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.38.20.20.20.20.20.20.21.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.38.20.20.20.20.20.20.21.1.1\">Method</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.38.20.20.20.20.20.20.21.2\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S1.T1.38.20.20.20.20.20.20.21.2.1\">Worst-Case Time Complexity</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.38.20.20.20.20.20.20.21.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S1.T1.38.20.20.20.20.20.20.21.3.1\">Comment</span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.20.2.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.20.2.2.2.2.2.2.2.3\"><span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1.1.1.1\">Hero GD</span> \u2009 <span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1.1.1.2\" style=\"font-size:240%;\">(</span><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1.1.1.3\">Soviet GD</span><span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.2.1.1.1.4\" style=\"font-size:240%;\">)</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.3.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.20.2.2.2.2.2.2.2.2\"><span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.3\"></span> <span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.2.2\">\n<span class=\"ltx_tr\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.2.2.2.2\"> \u2003<span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.2.2.2.2.1\" style=\"font-size:240%;\">()</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.20.2.2.2.2.2.2.2.2.4\"></span></span>\n<span class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S1.T1.20.2.2.2.2.2.2.2.4\">Suboptimal</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.22.4.4.4.4.4.4.4\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.22.4.4.4.4.4.4.4.3\"><span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.1.1.1\">Hero PAGE</span> \u2009 <span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.1.1.2\" style=\"font-size:240%;\">(</span><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.1.1.3\">Soviet PAGE</span><span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.1.1.4\" style=\"font-size:240%;\">)</span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Li et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib16\" title=\"\">2021 ###reference_b16###</a>)</cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.3.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.22.4.4.4.4.4.4.4.2\"><span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.3\"></span> <span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.2.2\">\n<span class=\"ltx_tr\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.2.2.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.2.2.2.2\"> \u2003<span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.2.2.2.2.1\" 
style=\"font-size:240%;\">()</span></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.22.4.4.4.4.4.4.4.2.4\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.22.4.4.4.4.4.4.4.4\">Suboptimal</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.38.20.20.20.20.20.20.22.1\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2.1.1.1.1\">SYNTHESIS</span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Liu et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib17\" title=\"\">2022 ###reference_b17###</a>)</cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.1.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.38.20.20.20.20.20.20.22.2\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.2.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.2.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.38.20.20.20.20.20.20.22.2.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.2.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.2.2.1.1.1\">\u2014</span></span>\n</span></span><span class=\"ltx_text\" 
id=\"S1.T1.38.20.20.20.20.20.20.22.2.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.38.20.20.20.20.20.20.22.3\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.1.1\">Limitations:</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.2.1\">bounded gradient assumption,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.3.1\">calculates the full gradients<sup class=\"ltx_sup\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.3.1.1\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.3.1.1.1\" style=\"color:#0000FF;\">(a)</span></sup>,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.4\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.4.1\">suboptimal.<sup class=\"ltx_sup\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.4.1.1\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.2.1.4.1.1.1\" style=\"color:#0000FF;\">(b)</span></sup></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.22.3.3\"></span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.25.7.7.7.7.7.7.7\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.25.7.7.7.7.7.7.7.4\"><span class=\"ltx_text\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2\">\n<span class=\"ltx_tabular ltx_align_middle\" 
id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.1.1.1\">Asynchronous SGD</span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Koloskova et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib11\" title=\"\">2022 ###reference_b11###</a>)</cite></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.2.1.3.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Mishchenko et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib19\" title=\"\">2022 ###reference_b19###</a>)</cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.25.7.7.7.7.7.7.7.4.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.23.5.5.5.5.5.5.5.1\"><span class=\"ltx_text\" id=\"S1.T1.23.5.5.5.5.5.5.5.1.2\"></span> <span class=\"ltx_text\" id=\"S1.T1.23.5.5.5.5.5.5.5.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.23.5.5.5.5.5.5.5.1.1.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.23.5.5.5.5.5.5.5.1.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.23.5.5.5.5.5.5.5.1.1.1.1.1\"></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.23.5.5.5.5.5.5.5.1.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.25.7.7.7.7.7.7.7.3\"><span class=\"ltx_text\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.3\"></span> <span class=\"ltx_text\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.2.2\">\n<span 
class=\"ltx_tr\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.2.2.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.2.2.3.1\">Limitations:</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.24.6.6.6.6.6.6.6.2.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.24.6.6.6.6.6.6.6.2.1.1.1.1\">\u2013bounded variance assumption,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.2.2.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.2.2.2.1\">suboptimal when is small.</span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.25.7.7.7.7.7.7.7.3.4\"></span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.28.10.10.10.10.10.10.10\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.28.10.10.10.10.10.10.10.4\"><span class=\"ltx_text\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.1\"></span> <span class=\"ltx_text\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2.1.1.1.1\">Rennala SGD</span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\">(Tyurin and Richt\u00e1rik, <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib28\" title=\"\">2023 ###reference_b28###</a>)</cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.28.10.10.10.10.10.10.10.4.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.26.8.8.8.8.8.8.8.1\"><span class=\"ltx_text\" id=\"S1.T1.26.8.8.8.8.8.8.8.1.2\"></span> <span 
class=\"ltx_text\" id=\"S1.T1.26.8.8.8.8.8.8.8.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.26.8.8.8.8.8.8.8.1.1.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.26.8.8.8.8.8.8.8.1.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.26.8.8.8.8.8.8.8.1.1.1.1.1\"></span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.26.8.8.8.8.8.8.8.1.3\"></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.28.10.10.10.10.10.10.10.3\"><span class=\"ltx_text\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.3\"></span> <span class=\"ltx_text\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.2.2\">\n<span class=\"ltx_tr\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.2.2.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.2.2.3.1\">Limitations:</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.27.9.9.9.9.9.9.9.2.1.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.27.9.9.9.9.9.9.9.2.1.1.1.1\">\u2013bounded variance assumption,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.2.2.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.2.2.2.1\">suboptimal when is small.</span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.28.10.10.10.10.10.10.10.3.4\"></span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.31.13.13.13.13.13.13.13\">\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.31.13.13.13.13.13.13.13.4\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1\" style=\"background-color:#CCFFFF;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1.1.1.1\"><span 
class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1.1.1.1.1\">Freya PAGE</span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.31.13.13.13.13.13.13.13.4.1.1.2.1\">(Theorems\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#S4.SS2\" title=\"4.2 Optimal parameters \ud835\udc46 and \ud835\udc5d in the large-scale regime \u2023 4 Time Complexities and Convergence Rates \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">4.2 ###reference_###</a> and <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#Thmtheorem2\" title=\"Theorem 2 (Main result in the large-scale regime using the ratio \ud835\udc3f_\u00b1/\ud835\udc3f). \u2023 4.2 Optimal parameters \ud835\udc46 and \ud835\udc5d in the large-scale regime \u2023 4 Time Complexities and Convergence Rates \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">2 ###reference_orem2###</a>)</span></span>\n</span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.30.12.12.12.12.12.12.12.2\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S1.T1.30.12.12.12.12.12.12.12.2.2\" style=\"background-color:#CCFFFF;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.30.12.12.12.12.12.12.12.2.2.2\">\n<span class=\"ltx_tr\" id=\"S1.T1.29.11.11.11.11.11.11.11.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.29.11.11.11.11.11.11.11.1.1.1.1.1\"></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.30.12.12.12.12.12.12.12.2.2.2.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.30.12.12.12.12.12.12.12.2.2.2.2.1\"><sup class=\"ltx_sup\" id=\"S1.T1.30.12.12.12.12.12.12.12.2.2.2.2.1.1\"><span class=\"ltx_text\" 
id=\"S1.T1.30.12.12.12.12.12.12.12.2.2.2.2.1.1.1\" style=\"color:#0000FF;\">(c)</span></sup></span></span>\n</span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.31.13.13.13.13.13.13.13.3\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S1.T1.31.13.13.13.13.13.13.13.3.1\" style=\"background-color:#CCFFFF;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.31.13.13.13.13.13.13.13.3.1.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.31.13.13.13.13.13.13.13.3.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.31.13.13.13.13.13.13.13.3.1.1.2.1\">Optimal in the large-scale regime,</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.31.13.13.13.13.13.13.13.3.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.31.13.13.13.13.13.13.13.3.1.1.1.1\">i.e., (see Section\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#S5\" title=\"5 Lower Bound \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">5 ###reference_###</a>)</span></span>\n</span></span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.33.15.15.15.15.15.15.15\">\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.33.15.15.15.15.15.15.15.3\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S1.T1.33.15.15.15.15.15.15.15.3.1\" style=\"background-color:#CCFFFF;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.33.15.15.15.15.15.15.15.3.1.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.33.15.15.15.15.15.15.15.3.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.33.15.15.15.15.15.15.15.3.1.1.1.1\">Lower bound</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.33.15.15.15.15.15.15.15.3.1.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.33.15.15.15.15.15.15.15.3.1.1.2.1\">(Theorem <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#Thmtheorem4\" title=\"Theorem 4 (Less formal version 
of Theorems J.3 and J.4). \u2023 5 Lower Bound \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">4 ###reference_orem4###</a>)</span></span>\n</span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.33.15.15.15.15.15.15.15.2\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S1.T1.33.15.15.15.15.15.15.15.2.2\" style=\"background-color:#CCFFFF;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.33.15.15.15.15.15.15.15.2.2.2\">\n<span class=\"ltx_tr\" id=\"S1.T1.32.14.14.14.14.14.14.14.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.32.14.14.14.14.14.14.14.1.1.1.1.1\"></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.33.15.15.15.15.15.15.15.2.2.2.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.33.15.15.15.15.15.15.15.2.2.2.2.1\"></span></span>\n</span></span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.33.15.15.15.15.15.15.15.4\" style=\"background-color:#CCFFFF;\"><span class=\"ltx_text\" id=\"S1.T1.33.15.15.15.15.15.15.15.4.1\" style=\"background-color:#CCFFFF;\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.33.15.15.15.15.15.15.15.4.1.1\">\n<span class=\"ltx_tr\" id=\"S1.T1.33.15.15.15.15.15.15.15.4.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S1.T1.33.15.15.15.15.15.15.15.4.1.1.1.1\">\u2014</span></span>\n</span></span></span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.20\">\n<span class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t ltx_colspan ltx_colspan_3\" id=\"S1.T1.38.20.20.20.20.20.20.20.5\"><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.6\"></span> <span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.5\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.5.5\">\n<span class=\"ltx_tr\" id=\"S1.T1.35.17.17.17.17.17.17.17.2.2.2.2\">\n<span 
class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.35.17.17.17.17.17.17.17.2.2.2.2.2\"><span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.35.17.17.17.17.17.17.17.2.2.2.2.2.1\">Freya PAGE</span> has <em class=\"ltx_emph ltx_font_italic\" id=\"S1.T1.35.17.17.17.17.17.17.17.2.2.2.2.2.2\">universally</em> better guarantees than all previous methods: the dependence on is (unlike <span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.35.17.17.17.17.17.17.17.2.2.2.2.2.3\">Rennala SGD</span> and <span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.35.17.17.17.17.17.17.17.2.2.2.2.2.4\">Asynchronous SGD</span>),</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.37.19.19.19.19.19.19.19.4.4.4.4\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.37.19.19.19.19.19.19.19.4.4.4.4.2\">the dependence on is harmonic-like and robust to slow workers (robust to ) (unlike <span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.37.19.19.19.19.19.19.19.4.4.4.4.2.1\">Soviet PAGE</span> and <span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.37.19.19.19.19.19.19.19.4.4.4.4.2.2\">SYNTHESIS</span>),</span></span>\n<span class=\"ltx_tr\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.5.5.5\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.5.5.5.1\">the assumptions are weak, and the time complexity of <span class=\"ltx_text ltx_font_sansserif\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.5.5.5.1.1\">Freya PAGE</span> is optimal when .</span></span>\n</span></span><span class=\"ltx_text\" id=\"S1.T1.38.20.20.20.20.20.20.20.5.7\"></span></span></span>\n</span>\n<span class=\"ltx_itemize\" id=\"S1.I2\">\n<span class=\"ltx_item\" id=\"S1.I2.ix1\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\"><span class=\"ltx_text\" id=\"S1.I2.ix1.1.1.1\" style=\"font-size:200%;color:#0000FF;\">(a)</span></span>\n<span class=\"ltx_para\" id=\"S1.I2.ix1.p1\">\n<span class=\"ltx_p\" id=\"S1.I2.ix1.p1.2\"><span class=\"ltx_text\" 
id=\"S1.I2.ix1.p1.2.1\" style=\"font-size:140%;\">In Line </span><span class=\"ltx_text\" id=\"S1.I2.ix1.p1.2.2\" style=\"font-size:140%;\"> of their Algorithm </span><span class=\"ltx_text\" id=\"S1.I2.ix1.p1.2.3\" style=\"font-size:140%;\">, they calculate the full gradient, assuming that it can be done for free and not explaining how.</span></span>\n</span></span>\n<span class=\"ltx_item\" id=\"S1.I2.ix2\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\"><span class=\"ltx_text\" id=\"S1.I2.ix2.1.1.1\" style=\"font-size:200%;color:#0000FF;\">(b)</span></span>\n<span class=\"ltx_para\" id=\"S1.I2.ix2.p1\">\n<span class=\"ltx_p\" id=\"S1.I2.ix2.p1.3\"><span class=\"ltx_text\" id=\"S1.I2.ix2.p1.3.1\" style=\"font-size:140%;\">Their convergence rates in Theorems </span><span class=\"ltx_text\" id=\"S1.I2.ix2.p1.3.2\" style=\"font-size:140%;\"> and </span><span class=\"ltx_text\" id=\"S1.I2.ix2.p1.3.3\" style=\"font-size:140%;\"> depend on a bound on the delays </span><span class=\"ltx_text\" id=\"S1.I2.ix2.p1.3.4\" style=\"font-size:140%;\"> which in turn depends on the performance of the slowest worker. 
Our method does not depend on the slowest worker if it is too slow (see Section\u00a0</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#S4.SS3\" style=\"font-size:140%;\" title=\"4.3 Discussion of the time complexity \u2023 4 Time Complexities and Convergence Rates \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">4.3 ###reference_###</a><span class=\"ltx_text\" id=\"S1.I2.ix2.p1.3.5\" style=\"font-size:140%;\">), which is required for optimality.</span></span>\n</span></span>\n<span class=\"ltx_item\" id=\"S1.I2.ix3\" style=\"list-style-type:none;\"><span class=\"ltx_tag ltx_tag_item\"><span class=\"ltx_text\" id=\"S1.I2.ix3.1.1.1\" style=\"font-size:200%;color:#0000FF;\">(c)</span></span>\n<span class=\"ltx_para\" id=\"S1.I2.ix3.p1\">\n<span class=\"ltx_p\" id=\"S1.I2.ix3.p1.1\"><span class=\"ltx_text\" id=\"S1.I2.ix3.p1.1.1\" style=\"font-size:140%;\">We prove better time complexity in Theorem\u00a0</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#S4.SS1\" style=\"font-size:140%;\" title=\"4.1 Optimal parameters \ud835\udc46 and \ud835\udc5d \u2023 4 Time Complexities and Convergence Rates \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">4.1 ###reference_###</a><span class=\"ltx_text\" id=\"S1.I2.ix3.p1.1.2\" style=\"font-size:140%;\">, but this result requires the knowledge of </span><span class=\"ltx_text\" id=\"S1.I2.ix3.p1.1.3\" style=\"font-size:140%;\"> in advance, unlike Theorems\u00a0</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#S4.SS2\" style=\"font-size:140%;\" title=\"4.2 Optimal parameters \ud835\udc46 and \ud835\udc5d in the large-scale regime \u2023 4 Time Complexities and Convergence Rates \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with 
Heterogeneous Asynchronous Computations\">4.2 ###reference_###</a><span class=\"ltx_text\" id=\"S1.I2.ix3.p1.1.4\" style=\"font-size:140%;\"> and </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#Thmtheorem2\" style=\"font-size:140%;\" title=\"Theorem 2 (Main result in the large-scale regime using the ratio \ud835\udc3f_\u00b1/\ud835\udc3f). \u2023 4.2 Optimal parameters \ud835\udc46 and \ud835\udc5d in the large-scale regime \u2023 4 Time Complexities and Convergence Rates \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\">2 ###reference_orem2###</a><span class=\"ltx_text\" id=\"S1.I2.ix3.p1.1.5\" style=\"font-size:140%;\">.</span></span>\n</span></span>\n</span></span></span>\n</span></span></span><span class=\"ltx_text\" id=\"S1.T1.38.21\" style=\"font-size:50%;\"></span></p>\n</figure>",
"capture": "Table 1: Comparison of the worst-case time complexity guarantees of methods that work with asynchronous computations in the setup from Section\u00a01 (up to smoothness constants). We assume that is the bound on the times required to calculate one stochastic gradient by worker and Abbr: # of data samples, # of workers, error tolerance."
},
"2": {
"table_html": "<figure class=\"ltx_table\" id=\"A1.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A1.T2.3.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"A1.T2.4.2\" style=\"font-size:90%;\">Mean and variance of algorithm accuracies on the <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T2.4.2.1\">MNIST</span> test set during the final 100K seconds of the experiments from Figure\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#A1.F2.sf2\" title=\"In Figure 2 \u2023 A.2 Experiments with logistic regression \u2023 Appendix A Experiments \u2023 Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations\"><span class=\"ltx_text ltx_ref_tag\">2(b)</span></a>.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_top\" id=\"A1.T2.5\">\n<tr class=\"ltx_tr\" id=\"A1.T2.5.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.5.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.5.1.1.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.5.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.5.1.2.1\">Accuracy</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A1.T2.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.5.1.3.1\">Variance of Accuracy</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.5.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.2.1\">\n<span class=\"ltx_text\" id=\"A1.T2.5.2.1.1\"></span> <span class=\"ltx_text\" id=\"A1.T2.5.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.5.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"A1.T2.5.2.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.2.1.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"A1.T2.5.2.1.2.1.1.1.1\">Asynchronous 
SGD</span></span></span>\n<span class=\"ltx_tr\" id=\"A1.T2.5.2.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.2.1.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\"><span class=\"ltx_text\" id=\"A1.T2.5.2.1.2.1.2.1.1.1\" style=\"font-size:90%;\">[</span>Koloskova et\u00a0al.<span class=\"ltx_text\" id=\"A1.T2.5.2.1.2.1.2.1.2.2.1.1\" style=\"font-size:90%;\">, </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib11\" title=\"\">2022</a><span class=\"ltx_text\" id=\"A1.T2.5.2.1.2.1.2.1.3.3\" style=\"font-size:90%;\">]</span></cite></span></span>\n<span class=\"ltx_tr\" id=\"A1.T2.5.2.1.2.1.3\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.2.1.2.1.3.1\"><cite class=\"ltx_cite ltx_citemacro_citep\"><span class=\"ltx_text\" id=\"A1.T2.5.2.1.2.1.3.1.1.1\" style=\"font-size:90%;\">[</span>Mishchenko et\u00a0al.<span class=\"ltx_text\" id=\"A1.T2.5.2.1.2.1.3.1.2.2.1.1\" style=\"font-size:90%;\">, </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib19\" title=\"\">2022</a><span class=\"ltx_text\" id=\"A1.T2.5.2.1.2.1.3.1.3.3\" style=\"font-size:90%;\">]</span></cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"A1.T2.5.2.1.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.2.2\">92.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.2.3\">5.85e-07</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.3.1\">\n<span class=\"ltx_text\" id=\"A1.T2.5.3.1.1\"></span> <span class=\"ltx_text\" id=\"A1.T2.5.3.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.5.3.1.2.1\">\n<span class=\"ltx_tr\" id=\"A1.T2.5.3.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.3.1.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"A1.T2.5.3.1.2.1.1.1.1\">Soviet PAGE</span></span></span>\n<span class=\"ltx_tr\" 
id=\"A1.T2.5.3.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.3.1.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\"><span class=\"ltx_text\" id=\"A1.T2.5.3.1.2.1.2.1.1.1\" style=\"font-size:90%;\">[</span>Li et\u00a0al.<span class=\"ltx_text\" id=\"A1.T2.5.3.1.2.1.2.1.2.2.1.1\" style=\"font-size:90%;\">, </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib16\" title=\"\">2021</a><span class=\"ltx_text\" id=\"A1.T2.5.3.1.2.1.2.1.3.3\" style=\"font-size:90%;\">]</span></cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"A1.T2.5.3.1.3\"></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.3.2\">92.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.3.3\">1.62e-07</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.4.1\">\n<span class=\"ltx_text\" id=\"A1.T2.5.4.1.1\"></span> <span class=\"ltx_text\" id=\"A1.T2.5.4.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.5.4.1.2.1\">\n<span class=\"ltx_tr\" id=\"A1.T2.5.4.1.2.1.1\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.4.1.2.1.1.1\"><span class=\"ltx_text ltx_font_sansserif\" id=\"A1.T2.5.4.1.2.1.1.1.1\">Rennala SGD</span></span></span>\n<span class=\"ltx_tr\" id=\"A1.T2.5.4.1.2.1.2\">\n<span class=\"ltx_td ltx_nopad_r ltx_align_center\" id=\"A1.T2.5.4.1.2.1.2.1\"><cite class=\"ltx_cite ltx_citemacro_citep\"><span class=\"ltx_text\" id=\"A1.T2.5.4.1.2.1.2.1.1.1\" style=\"font-size:90%;\">[</span>Tyurin and Richt\u00e1rik<span class=\"ltx_text\" id=\"A1.T2.5.4.1.2.1.2.1.2.2.1.1\" style=\"font-size:90%;\">, </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2405.15545v2#bib.bib28\" title=\"\">2023</a><span class=\"ltx_text\" id=\"A1.T2.5.4.1.2.1.2.1.3.3\" style=\"font-size:90%;\">]</span></cite></span></span>\n</span></span><span class=\"ltx_text\" id=\"A1.T2.5.4.1.3\"></span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.4.2\">92.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.5.4.3\">3.12e-06</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T2.5.5.1\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.5.5.1.1\">\n<tr class=\"ltx_tr\" id=\"A1.T2.5.5.1.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.5.5.1.1.1.1.1\">Freya PAGE</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T2.5.5.2\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.5.5.2.1\">\n<tr class=\"ltx_tr\" id=\"A1.T2.5.5.2.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.2.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.5.5.2.1.1.1.1\">92.66</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"A1.T2.5.5.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A1.T2.5.5.3.1\">\n<tr class=\"ltx_tr\" id=\"A1.T2.5.5.3.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.5.5.3.1.1.1.1\">1.01e-07</span></td>\n</tr>\n</table>\n</td>\n</tr>\n</table>\n</figure>",
"capture": "Table 2: Mean and variance of algorithm accuracies on the MNIST test set during the final 100K seconds of the experiments from Figure\u00a02(b)."
}
},
"image_paths": {
"1(a)": {
"figure_path": "2405.15545v2_figure_1(a).png",
"caption": "(a) n=1000\ud835\udc5b1000n=1000italic_n = 1000\nFigure 1: Experiments with nonconvex quadratic optimization tasks. We plot function suboptimality against elapsed time.",
"url": "http://arxiv.org/html/2405.15545v2/x1.png"
},
"1(b)": {
"figure_path": "2405.15545v2_figure_1(b).png",
"caption": "(b) n=10000\ud835\udc5b10000n=10000italic_n = 10000\nFigure 1: Experiments with nonconvex quadratic optimization tasks. We plot function suboptimality against elapsed time.",
"url": "http://arxiv.org/html/2405.15545v2/x2.png"
},
"2(a)": {
"figure_path": "2405.15545v2_figure_2(a).png",
"caption": "(a) n=100\ud835\udc5b100n=100italic_n = 100\nFigure 2: Experiments with the logistic regression problem on the MNIST dataset.",
"url": "http://arxiv.org/html/2405.15545v2/x3.png"
},
"2(b)": {
"figure_path": "2405.15545v2_figure_2(b).png",
"caption": "(b) n=10000\ud835\udc5b10000n=10000italic_n = 10000\nFigure 2: Experiments with the logistic regression problem on the MNIST dataset.",
"url": "http://arxiv.org/html/2405.15545v2/x4.png"
}
},
"validation": true,
"references": [
{
"1": {
"title": "Variance reduction for faster non-convex optimization.",
"author": "Z. Allen-Zhu and E. Hazan.",
"venue": "In International Conference on Machine Learning, pages 699\u2013707. PMLR, 2016.",
"url": null
}
},
{
"2": {
"title": "Lower bounds for non-convex stochastic optimization.",
"author": "Y. Arjevani, Y. Carmon, J. C. Duchi, D. J. Foster, N. Srebro, and B. Woodworth.",
"venue": "Mathematical Programming, pages 1\u201350, 2022.",
"url": null
}
},
{
"3": {
"title": "Lower bounds for finding stationary points I.",
"author": "Y. Carmon, J. C. Duchi, O. Hinder, and A. Sidford.",
"venue": "Mathematical Programming, 184(1):71\u2013120, 2020.",
"url": null
}
},
{
"4": {
"title": "Revisiting distributed synchronous SGD.",
"author": "J. Chen, X. Pan, R. Monga, S. Bengio, and R. Jozefowicz.",
"venue": "arXiv preprint arXiv:1604.00981, 2016.",
"url": null
}
},
{
"5": {
"title": "Slow and stale gradients can win the race: Error-runtime trade-offs in distributed SGD.",
"author": "S. Dutta, G. Joshi, S. Ghosh, P. Dube, and P. Nagpurkar.",
"venue": "In International Conference on Artificial Intelligence and Statistics, pages 803\u2013812. PMLR, 2018.",
"url": null
}
},
{
"6": {
"title": "SPIDER: Near-optimal non-convex optimization via stochastic path-integrated differential estimator.",
"author": "C. Fang, C. J. Li, Z. Lin, and T. Zhang.",
"venue": "Advances in Neural Information Processing Systems, 31, 2018.",
"url": null
}
},
{
"7": {
"title": "Nonconvex variance reduced optimization with arbitrary sampling.",
"author": "S. Horv\u00e1th and P. Richt\u00e1rik.",
"venue": "In International Conference on Machine Learning, pages 2781\u20132789. PMLR, 2019.",
"url": null
}
},
{
"8": {
"title": "Adaptivity of stochastic gradient methods for nonconvex optimization.",
"author": "S. Horv\u00e1th, L. Lei, P. Richt\u00e1rik, and M. I. Jordan.",
"venue": "SIAM Journal on Mathematics of Data Science, 4(2):634\u2013648, 2022.",
"url": null
}
},
{
"9": {
"title": "Better theory for SGD in the nonconvex world.",
"author": "A. Khaled and P. Richt\u00e1rik.",
"venue": "Transactions on Machine Learning Research, 2022.",
"url": null
}
},
{
"10": {
"title": "Adam: A method for stochastic optimization.",
"author": "D. P. Kingma and J. Ba.",
"venue": "International Conference on Learning Representations, 2015.",
"url": null
}
},
{
"11": {
"title": "Sharper convergence guarantees for asynchronous SGD for distributed and federated learning.",
"author": "A. Koloskova, S. U. Stich, and M. Jaggi.",
"venue": "Advances in Neural Information Processing Systems, 2022.",
"url": null
}
},
{
"12": {
"title": "Optimal gradient sliding and its application to optimal distributed optimization under similarity.",
"author": "D. Kovalev, A. Beznosikov, E. Borodich, A. Gasnikov, and G. Scutari.",
"venue": "Advances in Neural Information Processing Systems, 35:33494\u201333507, 2022.",
"url": null
}
},
{
"13": {
"title": "First-order and stochastic optimization methods for machine learning.",
"author": "G. Lan.",
"venue": "Springer, 2020.",
"url": null
}
},
{
"14": {
"title": "Mnist handwritten digit database.",
"author": "Y. LeCun, C. Cortes, and C. Burges.",
"venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2, 2010.",
"url": null
}
},
{
"15": {
"title": "Non-convex finite-sum optimization via SCSG methods.",
"author": "L. Lei, C. Ju, J. Chen, and M. I. Jordan.",
"venue": "Advances in Neural Information Processing Systems, 30, 2017.",
"url": null
}
},
{
"16": {
"title": "PAGE: A simple and optimal probabilistic gradient estimator for nonconvex optimization.",
"author": "Z. Li, H. Bao, X. Zhang, and P. Richt\u00e1rik.",
"venue": "In International Conference on Machine Learning, pages 6286\u20136295. PMLR, 2021.",
"url": null
}
},
{
"17": {
"title": "Synthesis: A semi-asynchronous path-integrated stochastic gradient method for distributed learning in computing clusters.",
"author": "Z. Liu, X. Zhang, and J. Liu.",
"venue": "In Proceedings of the Twenty-Third International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, pages 151\u2013160, 2022.",
"url": null
}
},
{
"18": {
"title": "The power of interpolation: Understanding the effectiveness of sgd in modern over-parametrized learning.",
"author": "S. Ma, R. Bassily, and M. Belkin.",
"venue": "In International Conference on Machine Learning, pages 3325\u20133334. PMLR, 2018.",
"url": null
}
},
{
"19": {
"title": "Asynchronous SGD beats minibatch SGD under arbitrary delays.",
"author": "K. Mishchenko, F. Bach, M. Even, and B. Woodworth.",
"venue": "Advances in Neural Information Processing Systems, 2022.",
"url": null
}
},
{
"20": {
"title": "Problem complexity and method efficiency in optimization.",
"author": "A. S. Nemirovskij and D. B. Yudin.",
"venue": "Wiley-Interscience, 1983.",
"url": null
}
},
{
"21": {
"title": "Lectures on convex optimization, volume 137.",
"author": "Y. Nesterov.",
"venue": "Springer, 2018.",
"url": null
}
},
{
"22": {
"title": "SGD and hogwild! convergence without the bounded gradients assumption.",
"author": "L. Nguyen, P. H. Nguyen, M. Dijk, P. Richt\u00e1rik, K. Scheinberg, and M. Tak\u00e1c.",
"venue": "In International Conference on Machine Learning, pages 3750\u20133758. PMLR, 2018.",
"url": null
}
},
{
"23": {
"title": "SARAH: A novel method for machine learning problems using stochastic recursive gradient.",
"author": "L. M. Nguyen, J. Liu, K. Scheinberg, and M. Tak\u00e1\u010d.",
"venue": "In International Conference on Machine Learning, pages 2613\u20132621. PMLR, 2017.",
"url": null
}
},
{
"24": {
"title": "Hogwild!: A lock-free approach to parallelizing stochastic gradient descent.",
"author": "B. Recht, C. Re, S. Wright, and F. Niu.",
"venue": "Advances in Neural Information Processing Systems, 24, 2011.",
"url": null
}
},
{
"25": {
"title": "Stochastic variance reduction for nonconvex optimization.",
"author": "S. J. Reddi, A. Hefny, S. Sra, B. Poczos, and A. Smola.",
"venue": "In International Conference on Machine Learning, pages 314\u2013323. PMLR, 2016.",
"url": null
}
},
{
"26": {
"title": "Fast convergence of stochastic gradient descent under a strong growth condition.",
"author": "M. Schmidt and N. L. Roux.",
"venue": "arXiv preprint arXiv:1308.6370, 2013.",
"url": null
}
},
{
"27": {
"title": "Permutation compressors for provably faster distributed nonconvex optimization.",
"author": "R. Szlendak, A. Tyurin, and P. Richt\u00e1rik.",
"venue": "In International Conference on Learning Representations, 2021.",
"url": null
}
},
{
"28": {
"title": "Optimal time complexities of parallel stochastic optimization methods under a fixed computation model.",
"author": "A. Tyurin and P. Richt\u00e1rik.",
"venue": "Advances in Neural Information Processing Systems, 2023.",
"url": null
}
},
{
"29": {
"title": "Sharper rates and flexible framework for nonconvex sgd with client and data sampling.",
"author": "A. Tyurin, L. Sun, K. Burlachenko, and P. Richt\u00e1rik.",
"venue": "Transactions on Machine Learning Research, 2023.",
"url": null
}
},
{
"30": {
"title": "Shadowheart SGD: Distributed asynchronous SGD with optimal time complexity under arbitrary computation and communication heterogeneity.",
"author": "A. Tyurin, M. Pozzi, I. Ilin, and P. Richt\u00e1rik.",
"venue": "arXiv preprint arXiv:2402.04785, 2024.",
"url": null
}
},
{
"31": {
"title": "SpiderBoost and momentum: Faster variance reduction algorithms.",
"author": "Z. Wang, K. Ji, Y. Zhou, Y. Liang, and V. Tarokh.",
"venue": "Advances in Neural Information Processing Systems, 32, 2019.",
"url": null
}
},
{
"32": {
"title": "Stochastic nested variance reduction for nonconvex optimization.",
"author": "D. Zhou, P. Xu, and Q. Gu.",
"venue": "Journal of Machine Learning Research, 21(103):1\u201363, 2020.",
"url": null
}
}
],
"url": "http://arxiv.org/html/2405.15545v2"
}