yilunzhao committed on
Commit a5ee657 · verified · 1 Parent(s): 6bef901

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. 20240119/1910.02950v3.json +0 -0
  2. 20240119/2002.06451v3.json +407 -0
  3. 20240119/2005.04907v2.json +429 -0
  4. 20240119/2012.03344v3.json +0 -0
  5. 20240119/2103.10702v4.json +0 -0
  6. 20240119/2106.01061v2.json +290 -0
  7. 20240119/2110.14014v6.json +0 -0
  8. 20240119/2111.13926v5.json +0 -0
  9. 20240119/2201.05158v3.json +210 -0
  10. 20240119/2203.09773v2.json +0 -0
  11. 20240119/2205.05359v3.json +165 -0
  12. 20240119/2206.01409v4.json +0 -0
  13. 20240119/2206.11828v5.json +379 -0
  14. 20240119/2208.06551v4.json +0 -0
  15. 20240119/2208.09424v3.json +0 -0
  16. 20240119/2209.00315v3.json +0 -0
  17. 20240119/2210.02428v3.json +0 -0
  18. 20240119/2210.08302v2.json +518 -0
  19. 20240119/2210.09745v2.json +77 -0
  20. 20240119/2211.12121v3.json +499 -0
  21. 20240119/2212.01521v2.json +0 -0
  22. 20240119/2212.08044v3.json +0 -0
  23. 20240119/2301.07300v3.json +0 -0
  24. 20240119/2301.10766v2.json +0 -0
  25. 20240119/2302.06120v3.json +0 -0
  26. 20240119/2302.09648v5.json +281 -0
  27. 20240119/2302.12190v2.json +0 -0
  28. 20240119/2302.13854v2.json +0 -0
  29. 20240119/2303.02901v2.json +113 -0
  30. 20240119/2303.05015v2.json +364 -0
  31. 20240119/2304.11171v4.json +0 -0
  32. 20240119/2305.01120v3.json +0 -0
  33. 20240119/2305.03077v2.json +44 -0
  34. 20240119/2305.11834v2.json +0 -0
  35. 20240119/2305.12997v3.json +0 -0
  36. 20240119/2305.13310v2.json +0 -0
  37. 20240119/2305.14402v3.json +280 -0
  38. 20240119/2306.00119v2.json +0 -0
  39. 20240119/2306.16199v2.json +318 -0
  40. 20240119/2307.08078v2.json +459 -0
  41. 20240119/2307.10266v3.json +0 -0
  42. 20240119/2307.14995v2.json +0 -0
  43. 20240119/2307.15610v2.json +0 -0
  44. 20240119/2308.02202v3.json +0 -0
  45. 20240119/2308.03016v3.json +169 -0
  46. 20240119/2308.03279v2.json +0 -0
  47. 20240119/2309.07988v3.json +333 -0
  48. 20240119/2309.09466v2.json +458 -0
  49. 20240119/2309.14393v2.json +0 -0
  50. 20240119/2309.16284v2.json +346 -0
20240119/1910.02950v3.json ADDED
The diff for this file is too large to render.
 
20240119/2002.06451v3.json ADDED
@@ -0,0 +1,407 @@
+ {
+ "title": "Symmetric Arithmetic Circuits (research funded by EPSRC grant EP/S03238X/1; a preliminary version of this work was reported in [14])",
+ "abstract": "We introduce symmetric arithmetic circuits, i.e. arithmetic circuits with a\nnatural symmetry restriction. In the context of circuits computing polynomials\ndefined on a matrix of variables, such as the determinant or the permanent, the\nrestriction amounts to requiring that the shape of the circuit is invariant\nunder simultaneous row and column permutations of the matrix. We establish unconditional\nexponential lower bounds on the size of any symmetric circuit for computing the\npermanent. In contrast, we show that there\nare polynomial-size symmetric circuits for computing the determinant over fields\nof characteristic zero.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Valiant\u2019s conjecture [30], that VP \u2260 VNP, is often referred to\nas the algebraic counterpart to the conjecture that P \u2260 NP. It has proved\nas elusive as the latter. The conjecture is equivalent to the statement that\nthere is no polynomial-size family of arithmetic circuits for computing the\npermanent of a matrix, over any field of characteristic other than 2. Here,\narithmetic circuits are circuits with input gates labelled by variables from\nsome set X or constants from a fixed field, and internal gates labelled\nwith the operations of addition and multiplication. The output of such a circuit is some\npolynomial in its variables, and we think of the circuit as a compact representation\nof this polynomial. In particular, if the set of variables forms the entries\nof an n \u00d7 n matrix, i.e. X = {x_ij | i, j \u2208 [n]}, then the permanent polynomial is\n\u2211_\u03c3 \u220f_i x_i\u03c3(i), where \u03c3 ranges over all permutations of [n],\nwhich is the permanent of the matrix.\nWhile a lower bound for the size of general arithmetic circuits computing the\npermanent remains out of reach, lower bounds have been established for some\nrestricted classes of circuits. For example, it is known that there is no\nsubexponential family of monotone circuits for the permanent. This was\nfirst shown for the field of real numbers [22] and a proof for\ngeneral fields, with a suitably adapted notion of monotonicity, is given\nin [23]. An exponential lower bound for the permanent is also known\nfor depth-3 arithmetic circuits [19] over all finite fields. In\nboth these cases, the exponential lower bound obtained for the permanent also\napplies to the determinant, i.e. the family of polynomials computing the determinant of an n \u00d7 n matrix. However, the determinant is in VP and so there do exist\npolynomial-size families of general circuits for the determinant.\nIn this paper, we consider a new restriction on arithmetic circuits based on a\nnatural notion of symmetry, and we show that it distinguishes between the\ndeterminant and the permanent. That is to say, we are able to show exponential\nlower bounds on the size of any family of symmetric arithmetic circuits for\ncomputing the permanent, while establishing the existence of polynomial-size\nsymmetric circuits for computing the determinant.\nWe next define (informally) the notion of symmetry we use. A formal definition\nfollows in Section 3. The permanent and the determinant are\nnot symmetric polynomials in the usual meaning of the word, in that they are not\ninvariant under arbitrary permutations of their variables. However, they do have\nnatural symmetries, e.g. permutations of the variables induced by row and column\npermutations. Specifically, the permanent is invariant under arbitrary permutations\nof the rows and columns of the matrix, while the determinant is invariant\nunder a more restricted group of permutations that includes simultaneous permutations of the rows and columns. We consider\nsimilar notions of symmetry on circuits. We say that an arithmetic circuit C\n(seen as a labelled directed acyclic graph) that takes as input an n \u00d7 n\nmatrix of variables (i.e. has input gates labelled by x_ij, for i, j \u2208 [n]) is matrix symmetric if the natural action of any pair of permutations (\u03c3, \u03c0) on the inputs (i.e. taking x_ij to\nx_\u03c3(i)\u03c0(j)) extends to an automorphism of C. Similarly, we say C\nis square symmetric if the natural action of any single permutation \u03c3 on\nits inputs (i.e. taking x_ij to x_\u03c3(i)\u03c3(j)) extends to an\nautomorphism of C.\nOur upper bound for the determinant is established for square symmetric circuits over fields of characteristic zero, and we conjecture it holds for all characteristics. For the permanent we prove exponential lower bounds for square symmetric circuits over fields of characteristic zero and for matrix symmetric circuits over all fields of characteristic other than two. On fields of characteristic two, of course, the permanent and the determinant coincide.\nA similar notion of symmetry has been studied previously in the context of\nBoolean circuits for deciding graph properties, or properties of relational\nstructures (see [17, 25, 2]). Specifically, such\nsymmetric circuits arise naturally in the translation into circuit form of\nspecifications of properties in a logic or similar high-level formalism.\nSimilarly, we can think of a symmetric arithmetic circuit as a straight-line\nprogram which treats the rows and columns of a matrix as being indexed by\nunordered sets. Many natural algorithms have this property. For example, Ryser\u2019s\nformula for computing the permanent naturally yields a symmetric circuit.\nPolynomial-size families of symmetric Boolean circuits with threshold gates form\na particularly robust class, with links to fixed-point\nlogics [2]. In particular, this allows us to deploy methods for\nproving inexpressibility in such logics to prove lower bounds on the size of\nsymmetric circuits. A close link has also been established between the power of\nsuch circuits and linear programming extended formulations with a geometric\nnotion of symmetry [5]. Our lower bound for the permanent is\nestablished by first giving a symmetry-preserving translation of arithmetic\ncircuits to Boolean circuits with threshold gates, and then establishing a lower\nbound there for computing the permanent of a 0-1 matrix.\nThe lower bounds for symmetric Boolean circuits are based on a measure we call\nthe counting width of graph parameters (the term is introduced\nin [13]). This is also sometimes known as the Weisfeiler-Leman\ndimension. In short, we have, for each k, an equivalence relation \u2261_k,\nknown as the k-dimensional Weisfeiler-Leman equivalence, that is a coarse\napproximation of isomorphism, getting finer with increasing k. The counting\nwidth of a graph parameter is the smallest k, as a function of the graph\nsize n, such that the parameter is constant on \u2261_k-classes of graphs of size\nn. From known results relating Boolean circuits and counting\nwidth [2, 5], we know that the existence of\nsubexponential-size square symmetric circuits computing a graph parameter implies a sublinear upper\nbound on its counting width. Hence, using the relationship between the\npermanent of the adjacency matrix of a graph G and the number of\nperfect matchings in G, we obtain our lower bound for the permanent\nfor square symmetric circuits over fields of characteristic zero by showing a\nlinear lower bound on the counting width of the number of perfect\nmatchings in G. Indeed, showing the same modulo p for every\nprime p allows us to obtain an exponential lower bound for matrix symmetric circuits over any field of characteristic other than two.\nThe linear lower bound on the counting width of the number of perfect matchings\nis a result of interest in its own right, quite apart from the lower bounds it\nyields for circuits for the permanent. Indeed, there is an interest in\ndetermining the counting width of concrete graph parameters (see, for\ninstance, [4]), and the result here is somewhat surprising. The\ndecision problem of determining whether a graph has any perfect matching is\nknown to have constant counting width. Indeed, the width is constant for bipartite\ngraphs [8]. For general graphs, it is known to be strictly greater\nthan in the bipartite case but still bounded above by a constant [3]."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Background",
+ "text": "In this section we discuss relevant background and introduce notation.\nWe write for the positive integers and for the non-negative\nintegers. For , denotes the set . For a\nset we write to denote the powerset of ."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Groups",
+ "text": "For a set , is the symmetric group on . For we\nwrite to abbreviate . The sign of a permutation\n is defined so that if is even \nand otherwise .\nLet be a group acting on a set . We denote this as a left action, i.e. for , . The action extends in a natural way to\npowers of . So, for , . It also extends to the powerset of and functions on as follows. The\naction of on is defined for and by\n. For any set, the action of on \nis defined for and by \nfor all . We refer to all of these as the natural action of \non the relevant set.\nLet and for each let be a group acting\non . The action of the direct product on is\ndefined for and by . If instead then the action of on is defined for and such that if then . Again, we refer to either of these as the natural action of on\n.\nLet be a group acting on a set . Let . Let denote the\n(pointwise) stabiliser of ."
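The natural actions described above, on powers of a set, on its powerset, and on functions, can be sketched concretely. This is an illustrative aside, not part of the diffed file; the permutation is represented as a Python dict and the helper names are ours, and the convention chosen for the action on functions is one standard choice since the extracted text does not fix it:

```python
# A permutation sigma of {1, 2, 3}, represented as a dict i -> sigma(i).
sigma = {1: 2, 2: 3, 3: 1}

def act_tuple(sigma, t):
    # Natural action on tuples (powers of the set): apply sigma componentwise.
    return tuple(sigma[x] for x in t)

def act_set(sigma, s):
    # Natural action on the powerset: apply sigma elementwise.
    return frozenset(sigma[x] for x in s)

def act_function(sigma, f):
    # One standard left action on functions with domain the set:
    # (sigma . f)(x) = f(sigma^{-1}(x)).
    inv = {v: k for k, v in sigma.items()}
    return {x: f[inv[x]] for x in f}
```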
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Fields and Linear Algebra",
+ "text": "Let and be finite non-empty sets. An matrix\nwith entries in is a function . For , let . We recover the more familiar notion of an matrix with rows and columns indexed by ordered sets by taking and\n.\nThe permanent of a matrix is invariant under taking row and column permutations,\nwhile the determinant and trace are invariant under taking simultaneous\nrow and column permutations. With this observation in mind, we define these\nthree functions for unordered matrices. Let be a commutative ring and be a matrix where . Let be the set of bijections from to . The permanent of over\n is . Suppose . The determinant of over is\n. The trace of over is . In all three cases we omit reference to the ring when it is\nobvious from context or otherwise irrelevant.\nWe always use to denote a field and to denote the\ncharacteristic of . For any prime power we write for the\nfinite field of order . We are often interested in polynomials defined over a\nset of variables with a natural matrix structure, i.e. . We identify with this matrix. We also identify any\nfunction of the form with the matrix with entries in\n defined by replacing each with .\nFor let . Let and . In other words, \nis the formal polynomial defined by taking the permanent\nof an matrix with th entry , and\nsimilarly for the determinant.\nWe write to abbreviate and \nto abbreviate ."
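The sum-over-bijections definitions of the permanent and determinant for matrices over unordered index sets can be sketched directly. This is a naive factorial-time illustration, not part of the diffed file; the helper names are ours, and the matrix is a dict from index pairs to entries:

```python
from itertools import permutations

def permanent(M, I, J):
    # Sum over all bijections pi: I -> J of the product of entries M[(i, pi(i))].
    I, J = list(I), list(J)
    assert len(I) == len(J)
    total = 0
    for image in permutations(J):
        prod = 1
        for i, j in zip(I, image):
            prod *= M[(i, j)]
        total += prod
    return total

def sign(image):
    # Sign of a permutation of 0..n-1 given as a tuple of images,
    # computed from its cycle structure.
    n = len(image)
    seen, sgn = [False] * n, 1
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = image[j]
                length += 1
            if length % 2 == 0:
                sgn = -sgn
    return sgn

def determinant(M, I):
    # The determinant needs simultaneous row/column indexing by the same set I:
    # sum over permutations of I, weighted by their sign.
    I = list(I)
    n = len(I)
    total = 0
    for image in permutations(range(n)):
        prod = sign(image)
        for a in range(n):
            prod *= M[(I[a], I[image[a]])]
        total += prod
    return total
```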
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Graphs, Matrices and Matchings",
+ "text": "Given a graph G, the adjacency matrix of G is the 0-1 matrix A_G with A_G(u, v) = 1 if, and only if, uv is an edge of G. If G is bipartite, with bipartition (U, V), then the biadjacency matrix of G is the 0-1 matrix B_G, with rows indexed by U and columns by V, with B_G(u, v) = 1 if, and only if, uv is an edge of G.\nIt is well known that for a bipartite graph G, the permanent of B_G over any field of characteristic zero counts the number of perfect matchings in G [20] and, for prime p, the permanent of B_G over a field of characteristic p counts the number of perfect matchings in G modulo p. For bipartite G,\nA_G is a block anti-diagonal matrix with two blocks corresponding to B_G and its transpose, and the permanent of A_G is the square of the permanent of B_G."
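The stated connection between the permanent of the biadjacency matrix and perfect matchings can be checked by brute force on small bipartite graphs. An illustrative aside, not part of the diffed file; the function names are ours:

```python
from itertools import permutations

def count_perfect_matchings_bipartite(left, right, edges):
    # Brute force: a perfect matching is a bijection left -> right that
    # only uses edges of the graph.
    left, right = list(left), list(right)
    if len(left) != len(right):
        return 0
    return sum(
        all((u, v) in edges for u, v in zip(left, image))
        for image in permutations(right)
    )

def permanent_biadjacency(left, right, edges):
    # Permanent of the 0-1 biadjacency matrix, by the defining sum
    # over bijections; each bijection contributes 1 exactly when all
    # its pairs are edges, i.e. when it is a perfect matching.
    left, right = list(left), list(right)
    total = 0
    for image in permutations(right):
        prod = 1
        for u, v in zip(left, image):
            prod *= 1 if (u, v) in edges else 0
        total += prod
    return total
```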
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Counting Width",
+ "text": "For any k, the k-dimensional Weisfeiler-Leman equivalence\n(see [9]), denoted \u2261_k, is an equivalence relation on graphs that\nprovides an over-approximation of isomorphism in the sense that for isomorphic\ngraphs G and H, we have G \u2261_k H for all k. Increasing values of k\ngive finer relations, so G \u2261_{k+1} H implies G \u2261_k H for all k.\nThe equivalence relation \u2261_k is decidable in time n^{O(k)}, where n is\nthe size of the graphs. If k \u2265 n, then G \u2261_k H implies that G and\nH are isomorphic. The Weisfeiler-Leman equivalences have been widely studied\nand they have many equivalent characterizations in combinatorics, logic, algebra\nand linear optimization. One particularly useful characterization in\nterms of logic (see [9]) is that G \u2261_k H if,\nand only if, G and H cannot be distinguished by any\nformula of first-order logic with counting quantifiers using at most k+1 distinct variables. This has been used to establish\ninexpressibility results in various counting logics and motivates the notion of\ncounting width.\nA graph parameter is a function \u03bd from graphs to a set D which is\nisomorphism invariant. That is to say, \u03bd(G) = \u03bd(H) whenever G and H are isomorphic graphs. Most commonly, D is the set of natural numbers and examples of such graph parameters are the chromatic number, the number of\nconnected components or the number of perfect matchings. We can also\nlet D be a field and let \u03bd(G) denote the permanent\n(over D) of the adjacency matrix of G. When D = {0, 1}\nwe identify \u03bd with the class of graphs for which it is the\nindicator function. In this case, we also call it a graph property.\nFor a graph parameter\n\u03bd and any fixed n, there is a smallest value of k such that\n\u03bd is \u2261_k-invariant on graphs with at most n vertices. This motivates the definition.\nFor any graph parameter \u03bd, the counting width of \u03bd is the\nfunction \u03ba such that \u03ba(n)\nis the smallest k such that for all graphs G, H of size\nat most n, if G \u2261_k H, then \u03bd(G) = \u03bd(H).\nThe notion of counting width for classes of\ngraphs was introduced in [13], which we here extend to graph\nparameters. Note that \u03ba(n) \u2264 n for any graph parameter, since\nany non-isomorphic graphs on n vertices can always be distinguished\nin \u2261_n.\nCai, F\u00fcrer and Immerman [9] first showed that there is no fixed k\nfor which \u2261_k coincides with isomorphism. Indeed, in our terminology,\nthey construct a graph property with linear counting width. Since then,\nmany graph properties have been shown to have linear counting width, including\nHamiltonicity and 3-colourability\n(see [5]). In other cases, such as the class of graphs that\ncontain a perfect matching, it has been proved that they have counting width\nbounded by a constant [3]. Our interest in counting width stems\nfrom the relation between this measure and lower bounds for symmetric circuits.\nRoughly, if a class of graphs is recognized by a family of\npolynomial-size symmetric threshold circuits, it has bounded counting width (a\nmore precise statement is given in\nTheorem 16).\nOur lower bound construction in Section 7 is based on the\ngraphs constructed by Cai et al. [9]. While we review some of the\ndetails of the construction in Section 7, a reader unfamiliar\nwith the construction may wish to consult a more detailed introduction. The\noriginal construction can be found in [9] and a version closer to what\nwe use is given in [12]."
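The 1-dimensional member of the Weisfeiler-Leman hierarchy, colour refinement, is easy to sketch and already illustrates how the equivalence over-approximates isomorphism: a 6-cycle and two disjoint triangles are 1-WL equivalent but not isomorphic. An illustrative aside, not part of the diffed file; the function names are ours:

```python
from collections import Counter

def color_refinement(adj):
    # 1-dimensional Weisfeiler-Leman (colour refinement): repeatedly refine
    # vertex colours by the multiset of neighbouring colours until stable.
    colors = {v: 0 for v in adj}
    while True:
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Canonically rename the signatures as small integers.
        ranks = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: ranks[sig[v]] for v in adj}
        if new == colors:
            return colors
        colors = new

def wl1_indistinguishable(adj1, adj2):
    # Two graphs are 1-WL equivalent iff their stable colour histograms agree.
    c1, c2 = color_refinement(adj1), color_refinement(adj2)
    return Counter(c1.values()) == Counter(c2.values())
```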
+ },
+ {
+ "section_id": "2.5",
+ "parent_section_id": "2",
+ "section_name": "Circuits",
+ "text": "We provide a general definition that incorporates both Boolean and arithmetic\ncircuits.\nA circuit over a basis B with variables X and\nconstants is a directed acyclic graph with a labelling where each\nvertex of in-degree 0 is labelled by a variable or a constant and each\nvertex of in-degree greater than 0 is labelled by an element of B such that the arity of the basis element matches the in-degree of the gate.\nNote that, in the examples we consider, the elements of the basis\noften do not have fixed arity. That is, we are considering unbounded\nfan-in circuits where gates such as AND, OR, addition and multiplication can take\nany number of inputs. The one exception is the NOT gate.\nLet C be such a circuit.\nWe call the vertices of C gates, and the edges of C wires.\nWe call the gates with in-degree 0 input gates and gates with\nout-degree 0 output gates. We call those input gates labelled by\nconstants constant gates. We call those gates that are not input\ngates internal gates. We say that a gate g is a child\nof a gate h if there is a wire from g to h, and we write child(h) for the set of children of\nh and C_g for the sub-circuit of C rooted at g. Unless\notherwise stated we always assume a circuit has exactly one output gate. We also assume that distinct input gates in a circuit have distinct labels.\nIf the constants come from a field and the basis consists of addition and multiplication, we have an\narithmetic circuit over that field. If the constants are 0 and 1, and the basis is a\ncollection of Boolean functions, we have a Boolean circuit over that basis.\nWe define two Boolean bases here. The standard basis\ncontains the functions AND, OR, and NOT. The threshold basis\nis the union of the standard basis and the threshold functions, where for each\nk the threshold function t_k is defined for a binary string so\nthat it evaluates to 1 if, and only if, the number of 1s in the string\nis at least k. We call a circuit defined over this basis a threshold\ncircuit. Another useful function is majority, which evaluates to 1 if, and only if, at least half of its inputs are 1. We do not explicitly include it in\nthe basis as it is easily defined in terms of threshold functions.\nIn general, we require that a basis contain only functions that are invariant\nunder all permutations of their inputs (we define this notion formally in\nDefinition 4). This is the case for the arithmetic operations of\naddition and multiplication and for all of the Boolean functions in the standard and threshold bases. Let\nC be a circuit defined over such a basis with variables X and constants.\nWe evaluate C for an assignment to the variables by evaluating each gate labelled\nby a variable to its assigned value and each gate labelled by a constant to that constant, and\nthen recursively evaluating each gate according to its corresponding basis\nelement. The value of the output gate is the value of the circuit, and we say that C computes the function taking each assignment to this value.\nIt is conventional to consider an arithmetic circuit over a field with\nvariables X to be computing a polynomial in X, rather than a function\non the field. This polynomial is defined via a similar recursive evaluation,\nexcept that now each gate labelled by a variable evaluates to the corresponding\nformal variable, and we treat addition and multiplication as ring operations in\nthe polynomial ring. Each gate then evaluates to some polynomial. The\npolynomial computed by C is the value of the output gate.\nFor more details on arithmetic circuits see [28] and for Boolean\ncircuits see [31].\nBy a standard translation (see [29]), arithmetic circuits with unbounded\nfan-in can be mapped to equivalent arithmetic circuits with constant fan-in with only a polynomial\nblowup in size and a logarithmic blowup in depth. This means that so long as we are interested in bounds on circuit size up to polynomial factors we may assume without loss\nof generality that all gates have fan-in two. This assumption simplifies the\nanalysis of these circuits and in many cases authors simply define arithmetic\ncircuits to have internal gates with fan-in two (e.g. [28]). In\nthis paper we are interested in symmetric arithmetic circuits, and\nthe standard\ntranslation does not preserve symmetry. As such, we cannot assume a bound on\nfan-in without loss of generality, and for this reason we define arithmetic circuits so as to\nallow for unbounded fan-in."
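The recursive evaluation of a circuit with unbounded fan-in gates can be sketched over a DAG encoded as a dict. An illustrative aside, not part of the diffed file; the encoding and names are ours:

```python
import math

def evaluate(circuit, output, assignment):
    # circuit maps a gate id to one of:
    #   ('var', x)                 -- input gate labelled by variable x
    #   ('const', c)               -- constant gate
    #   ('+', children) / ('*', children)  -- unbounded fan-in internal gates
    # Each gate is evaluated once (memoised recursion over the DAG).
    cache = {}
    def val(g):
        if g in cache:
            return cache[g]
        label = circuit[g]
        if label[0] == 'var':
            r = assignment[label[1]]
        elif label[0] == 'const':
            r = label[1]
        else:
            op, children = label
            vals = [val(c) for c in children]
            r = sum(vals) if op == '+' else math.prod(vals)
        cache[g] = r
        return r
    return val(output)

# A circuit expressing the permanent of a 2 x 2 matrix: x11*x22 + x12*x21.
perm2 = {
    'x11': ('var', 'x11'), 'x12': ('var', 'x12'),
    'x21': ('var', 'x21'), 'x22': ('var', 'x22'),
    'm1': ('*', ['x11', 'x22']), 'm2': ('*', ['x12', 'x21']),
    'out': ('+', ['m1', 'm2']),
}
```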
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Symmetric Circuits",
+ "text": "In this section we discuss different symmetry conditions for functions and\npolynomials. We also introduce the notion of a symmetric circuit."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Symmetric Functions",
+ "text": "For any group G, we say that a function f, along with an action\nof G on its inputs, is a G-symmetric function if for every \u03c3 \u2208 G,\nf \u2218 \u03c3 = f.\nWe are interested in some specific group actions, which we now define and\nillustrate with examples.\nIf G is the full symmetric group on the set of inputs, we call a G-symmetric function fully\nsymmetric.\nExamples of fully symmetric functions are those that appear as labels of gates\nin a circuit, including addition, multiplication, AND, OR, and the threshold functions.\nIf the inputs form an n \u00d7 n matrix of variables and f is\nsymmetric with the natural action of pairs of row and column permutations, then we say f\nis matrix symmetric.\nMatrix symmetric functions are those where the input is naturally seen as a\nmatrix with the result invariant under arbitrary row and column permutations.\nThe canonical example for us of a matrix symmetric function is the permanent.\nThe determinant is not matrix symmetric over fields of characteristic other than\n2, but does satisfy a more restricted notion of symmetry that we define next.\nIf f is symmetric with the\nnatural action of a single permutation applied simultaneously to rows and columns (taking x_ij to x_\u03c3(i)\u03c3(j)), then we say f is square\nsymmetric.\nThe determinant is one example of a square symmetric function. However, as the\ndeterminant of a matrix is also invariant under the operation of transposing the\nmatrix, we also consider this variation. To be precise, let t be the permutation of the variables that takes x_ij to x_ji for all i, j.\nLet D be the diagonal of the group of pairs of row and column permutations (i.e. the image of\na single permutation acting simultaneously on rows and columns). We write T for the\ngroup generated by D and t. We say a\nT-symmetric function is transpose symmetric.\nFinally, another useful notion of symmetry in functions is where the inputs are\nnaturally partitioned into sets.\nIf the inputs are partitioned into sets, and f is symmetric with respect to the natural action of the direct product of the symmetric groups on the parts, we\nsay f is partition symmetric.\nIn Section 5, we consider a generalization of\ncircuits to the case where the labels in the basis are not necessarily fully\nsymmetric functions, but they are still partition symmetric. The structure of\nsuch a circuit cannot be described simply as a DAG, but requires additional\nlabels on wires, as we shall see."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Symmetric Circuits",
+ "text": "Symmetric Boolean circuits have been considered in the literature, particularly\nin connection with definability in logic. In that context, we are considering\ncircuits which take relational structures (such as graphs) as inputs and we\nrequire their computations to be invariant under re-orderings of the elements of\nthe structure. Thus, the inputs to a Boolean circuit C might be labelled by pairs of elements (u, v) where u and v range over the universe, and we require the output of C to be invariant under a permutation of the universe applied to the inputs. In short, the function computed by C is square symmetric. A generalization to arbitrary symmetry groups was also defined by Rossman [26] who showed a lower bound for the parity function for formulas that are G-symmetric for subgroups G of the symmetric group.\nHere, we consider circuits that are symmetric with respect to arbitrary symmetry groups, and also consider them in the context of arithmetic circuits. In order to define\nsymmetric circuits, we first need to define the automorphisms of a circuit.\nLet C be a circuit over a basis with variables X and\nconstants. For a permutation \u03c3 of X, we say that a bijection \u03c0 on the gates of C is an automorphism extending \u03c3 if for every gate g in C\nwe have that\nif g is a constant gate then \u03c0(g) = g,\nif g is a non-constant input gate labelled by x then \u03c0(g) is labelled by \u03c3(x),\nif (g, h) is a wire, then so is (\u03c0(g), \u03c0(h)),\nand if g is labelled by a basis element, then so is \u03c0(g).\nWe say that a circuit C with variables X is rigid if for every\npermutation \u03c3 of X there is at most one automorphism of C\nextending \u03c3.\nWe are now ready to define the key notion of a symmetric circuit.\nLet C be a circuit with variables X and let G be a group acting on X. We say C is\nG-symmetric if the action of every \u03c3 \u2208 G on X extends to\nan automorphism of C. We say that C is strictly G-symmetric if the only automorphisms of C are those extending a permutation in G.\nIt is easy to see that if a circuit is G-symmetric then it computes a\nG-symmetric polynomial (and hence function). We sometimes omit mention of G\nwhen it is obvious from context. For a gate g in a symmetric circuit C, the\norbit of g is the set of all gates h such\nthat there exists an automorphism of C extending some permutation in G and taking g to h. The orbit\nsize of C is the maximum size of an orbit in C.\nWe use the same terminology for symmetric circuits as for symmetric functions.\nThat is, if a circuit with an n \u00d7 n matrix of variables is symmetric under all row and column permutations we say that it is matrix symmetric. We similarly define\nsquare symmetric circuits, transpose symmetric circuits and partition symmetric circuits.\nThough symmetric arithmetic circuits have not previously been studied, symmetric\nBoolean circuits have [17, 25, 2, 26]. It is known that\npolynomial-size square symmetric threshold circuits are more powerful than\npolynomial-size square symmetric circuits over the standard basis [2].\nIn particular, the majority function is not computable by any family of\npolynomial-size symmetric circuits over the standard basis. On the other hand,\nit is also known [16] that adding any fully symmetric functions to\nthe basis does not take us beyond the power of the threshold basis. Thus, the threshold basis\ngives the robust notion, and that is what we use here. It is also this that has\nthe tight connection with counting width mentioned above."
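The definition of an automorphism extending a permutation can be made concrete on a tiny square symmetric circuit for the permanent of a 2 x 2 matrix. An illustrative aside, not part of the diffed file; the encoding and names are ours:

```python
# A circuit for the permanent of a 2 x 2 matrix, x11*x22 + x12*x21, encoded as
# gate -> ('var', (i, j)) for inputs and gate -> (op, children) for internal gates.
gates = {
    'x11': ('var', (1, 1)), 'x12': ('var', (1, 2)),
    'x21': ('var', (2, 1)), 'x22': ('var', (2, 2)),
    'm1': ('*', ('x11', 'x22')), 'm2': ('*', ('x12', 'x21')),
    'out': ('+', ('m1', 'm2')),
}

def extends_to_automorphism(gates, sigma, pi):
    # Checks that the gate bijection pi is an automorphism extending sigma:
    # input gates are relabelled x_ij -> x_{sigma(i)sigma(j)}, internal gates
    # keep their operation labels, and children map to children.
    for g, (label, data) in gates.items():
        h = pi[g]
        h_label, h_data = gates[h]
        if label == 'var':
            i, j = data
            if (h_label, h_data) != ('var', (sigma[i], sigma[j])):
                return False
        else:
            if label != h_label:
                return False
            if sorted(pi[c] for c in data) != sorted(h_data):
                return False
    return True

# The swap sigma = (1 2) extends to an automorphism fixing all internal gates,
# witnessing square symmetry of this circuit for that permutation.
swap = {1: 2, 2: 1}
pi = {'x11': 'x22', 'x22': 'x11', 'x12': 'x21', 'x21': 'x12',
      'm1': 'm1', 'm2': 'm2', 'out': 'out'}
```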
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Polynomials",
+ "text": "In the study of arithmetic complexity, we usually think of a circuit over a\nfield with variables in as expressing a polynomial in , rather\nthan computing a function from to . The distinction is signficant,\nparticularly when is a finite field, as it is possible for distinct\npolynomials to represent the same function.\nThe definitions of symmetric functions given in\nSection 3.1 ###reference_### extend easily to polynomials. So, for a\ngroup acting on , a polynomial is said to be -symmetric\nif for all . It is clear that a -symmetric polynomial\ndetermines a -symmetric function. We define fully symmetric,\nmatrix symmetric, square symmetric and transpose symmetric\npolynomials analogously. Every matrix symmetric polynomial is also square\nsymmetric. Also, every transpose symmetric polynomial is square symmetric. The\npermanent is both matrix symmetric and transpose symmetric, while the\ndeterminant is transpose symmetric, but not matrix symmetric.\nWhat are usually called the symmetric polynomials are, in our\nterminology, fully symmetric. In particular, the homogeneous polynomial is fully symmetric. There is a known lower bound of on the size of any circuit expressing this polynomial [6 ###reference_6###].\nIt is worth remarking that the matching upper bound is achieved by a fully symmetric\ncircuit. Thus, at least in this case, there is no gain to be made by breaking\nsymmetries in the circuit. Similarly, we have tight quadratic upper and lower\nbounds for depth-3 circuits for the elementary symmetric polynomials over infinite fields [27 ###reference_27###]. The upper\nbound is obtained by the interpolation method and it can be seen that this is achieved by\nfully symmetric circuits. To be precise, the polynomial is computed\nas the coefficient of in , which is\nobtained by interpolation from computing at\n distinct values of . 
Note that, for any fixed constant ,\n is given by a fully symmetric circuit of size\n, and these can be combined to get the interpolant. The\nresulting circuit is still fully symmetric since a permutation of the\nvariables fixes the polynomial .\nIndeed, we can say something more general about fully symmetric\npolynomials. If any such polynomial has a circuit of\nsize polynomial in then it has a -circuit of size polynomial in . This follows from a result of Bl\u00e4ser and\nJindal [7 ###reference_7###] who establish that for any fully symmetric\npolynomial which has a\npolynomial-size circuit there exists a witness\n computable via an arithmetic\ncircuit of size polynomial in such that ,\nwhere the \u2019s are the elementary symmetric polynomials. To see\nwhy this implies the result, observe that if is a fully symmetric\npolynomial and is the corresponding witness computable via a\npolynomial size circuit , and are the (fully symmetric and\npolynomial size) circuits computing the polynomials , then we can\nbuild a circuit for by replacing each input in with the\noutput gate of . The resultant circuit is symmetric since any\npermutation on the input gates fixes the output gate of each .\nThe best known upper bound for general arithmetic circuits for expressing the\npermanent is given by Ryser\u2019s formula:\nIt is easily seen that this expression is matrix symmetric, and\nit yields a matrix symmetric circuit of size . Our main result,\nTheorem 18 ###reference_8###, gives us a near matching lower bound on the size of matrix symmetric circuits (or even square symmetric circuits) for expressing .\nA -symmetric circuit expressing a polynomial is also a\n-symmetric circuit computing the function determined by . In establishing our\nupper bound for the determinant, we show the existence of small transpose symmetric\ncircuits for the polynomial, and hence also for the function. 
For the lower\nbound on the permanent, we show that there are no small square symmetric circuits for\ncomputing the function, hence also none for the polynomial. For a discussion of\nfunctional lower bounds, as opposed to polynomial lower bounds,\nsee [18 ###reference_18###]."
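Ryser's formula, named in the passage above, did not survive extraction. As a hedged sketch (the standard formula with illustrative names, not code from the paper), it can be written in Python:

```python
from itertools import combinations, permutations

def permanent_ryser(a):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column sets S of
    (-1)^{|S|} * prod_i (sum_{j in S} A[i][j]).  Uses O(2^n) terms, compared
    with n! terms in the naive expansion of the permanent."""
    n = len(a)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for i in range(n):
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total
```

Each term depends only on a set of columns and a product ranging over all rows, so the expression is invariant under permuting rows and columns independently, which is why it yields a matrix symmetric circuit.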
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "An Upper-Bound for the Determinant",
+ "text": "In this section we show that for any field with characteristic zero there\nis a polynomial-size family of transpose symmetric arithmetic circuits over \ncomputing . We define this family using Le Verrier\u2019s method for\ncalculating the characteristic polynomial of a matrix. We review this method\nbriefly, and direct the reader to Section 3.4.1 in [21 ###reference_21###] for more\ndetail.\nThe characteristic polynomial of an matrix is\nwhere are the eigenvalues of , counted with\nmultiplicity. It is known that and . Le\nVerrier\u2019s method gives, for each , the linear recurrence given by\nwhere and for each , .\nThe determinant can thus be computed as follows. First, for each we\ncompute entries in the matrix . Then for each we compute . Finally, we recursively compute each and output . There\nis a natural arithmetic circuit with variables implementing this algorithm.\nTo see that is transpose symmetric we begin with some permutation and show that extends to an automorphism of the\ncircuit. We construct this automorphism layer by layer. We map each gate\ncomputing some entry of to the gate computing the \nentry of . We fix each gate computing some . Since each gate computing\nsome uses only the gates computing and a constant gate\ncomputing , we can also fix each of these gates. We now present this argument\nformally.\nFor be a field of characteristic , there exists a family of transpose\nsymmetric arithmetic circuits over computing\n for which the function is computable in time\n.\nLet and let be an \nmatrix of variables, for an index set with . We now\ndescribe an implementation of Le Verrier\u2019s method for matrices as\narithmetic circuit over the set of variables . We construct this\ncircuit as follows.\nFor each we include a family of gates intended to compute\nthe entries in the th power of the matrix . For each we\ninclude a gate intended to compute . Let\n and for all ,\n.\nFor each we include a gate intended to compute\nthe trace of . 
Let \nand for , .\nFor each we include a gate intended to compute the\ncoefficient in the characteristic polynomial. Let and for all let\nLet be the output gate of . It follows from the discussion\npreceding the statement of the theorem that computes .\nIt remains to show that the circuit is transpose symmetric. Let . Let be defined such that for each input\ngate labelled we have , for\neach gate of the form we have , and for every other gate we have . It\ncan be verified that is a circuit automorphism extending .\nSimilarly, if is the transpose permutation,\ni.e. , then we can extend it to an automorphism \nof by letting . It follows that is\na transpose symmetric arithmetic circuit.\nThe circuit contains constant gates labelled by . There are other input gates. Computing each gate uses gates ( products and sum). Then, since there are entries in each matrix and matrices to compute, the total number of gates needed to compute all of the gates is . There are additional gates required to compute all gates of the form . There are at most gates required to compute all\ngates of the form . It follows that the circuit is of size\n. The above description of the circuit can be\nadapted to define an algorithm that computes the function \nin time .\n\u220e\nLe Verrier\u2019s method explicitly involves multiplications by field elements\n for , and so cannot be directly applied to fields of\npositive characteristic. We conjecture that it is also possible to give square\nsymmetric arithmetic circuits of polynomial size to compute the determinant over\narbitrary fields. Indeed, there are many known algorithms that yield\npolynomial-size families of arithmetic circuits over fields of positive\ncharacteristic computing . It seems likely that some of these could\nbe implemented symmetrically."
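The recurrence described above can be sketched as follows; this is a minimal illustration of Le Verrier's method (exact rationals stand in for a field of characteristic zero; function and variable names are ours, not the paper's):

```python
from fractions import Fraction

def det_leverrier(a):
    """Determinant via Le Verrier's method: compute traces p_k = tr(A^k) and
    recover the characteristic-polynomial coefficients c_k through the
    Newton-identity recurrence c_k = -(p_k + sum_{i<k} c_i * p_{k-i}) / k.
    The determinant is then (-1)^n * c_n."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    # Powers A^1, ..., A^n and their traces.
    powers = [a]
    for _ in range(n - 1):
        prev = powers[-1]
        powers.append([[sum(prev[i][k] * a[k][j] for k in range(n))
                        for j in range(n)] for i in range(n)])
    p = [sum(m[i][i] for i in range(n)) for m in powers]  # p[k-1] = tr(A^k)
    c = []
    for k in range(1, n + 1):
        s = p[k - 1] + sum(c[i - 1] * p[k - i - 1] for i in range(1, k))
        c.append(-s / k)  # division by k: needs characteristic zero
    return (-1) ** n * c[-1]
```

The division by k in the recurrence is exactly what restricts this method to characteristic zero, in line with the remark at the end of the section.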
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "From Arithmetic To Boolean Circuits",
+ "text": "In this section we establish the following symmetry and orbit-size preserving translation from\narithmetic circuits to threshold circuits. Importantly, this translation does not preserve circuit size, which may grow exponentially.\nLet be a group acting on a set of variables . Let be a\n-symmetric arithmetic circuit over a field with variables . Let be finite. Then there is a -symmetric threshold circuit \nwith variables and , such that for all we have \nif, and only if, .\nWe use Theorem 11 ###reference_1### in Section 7 ###reference_### to\ntransfer a lower bound on threshold circuits to arithmetic circuits, a crucial\nstep in establishing our lower bound for the permanent. This lower bound relies on the preservation of orbit-size in Theorem 11 ###reference_1###, and the connection between orbit-size and counting width.\nWe prove Theorem 11 ###reference_1### by first establishing a similar\ntranslation from arithmetic circuits over a field to Boolean circuits over\na basis of partition symmetric functions. We then complete the proof\nby replacing each gate labelled by a partition symmetric function with an\nappropriate symmetric Boolean threshold circuit.\nTo enable this second step, we first show that each partition symmetric function\ncan be computed by a rigid strictly symmetric threshold circuit. The proof of\nthis follows from the fact that if a function is\npartition symmetric, then its output for depends only on the\nnumber of elements in each part of that maps to . We can thus\nevaluate by counting the number of s in each part, a procedure which we\nnow show can be implemented via a symmetric threshold circuit.\nLet be a partition symmetric function. There exists a rigid strictly\npartition symmetric threshold circuit computing .\nLet be a disjoint union of finite sets \nindexed by , and be a partition symmetric\nfunction. The fact that is partition symmetric means that whether for some is determined by the number of (for\neach ) for which . 
Write for this number. Then, there is a\nset such that if, and only if, . Since each is finite, so is . Then if,\nand only if, the following Boolean expression is true: We can turn this expression into a\ncircuit with an OR gate at the output, whose children are AND gates,\none for each , let us call it . The children of \nare a set of gates, one for each , let us call it , which is\nlabelled by and has as children all the inputs .\nThis circuit is symmetric and rigid, but not necessarily strictly\nsymmetric, as it may admit automorphisms that do not respect the partition of\nthe inputs as . To remedy this, we create pairwise\nnon-isomorphic gadgets , one for each . Each is a\none-input, one-output circuit computing the identity function. For example,\n could be a tower of single-input AND gates, and we choose a different\nheight for each . We now modify to obtain by inserting between\neach input and each gate a copy of the gadget\n.\nClearly computes . We now argue is rigid and strictly partition\nsymmetric. To see that it is partition symmetric, consider any in its natural action on . This extends to an automorphism\nof that takes the gadget to while fixing all\ngates and . To see that there are no other automorphisms,\nsuppose is an automorphism of . It must fix the output OR gate.\nAlso cannot map a gate to for because\nthe gadgets and are non-isomorphic. Suppose that maps\n to . Then, it must map to . Since\nthe labels of these gates are and respectively, we\nconclude that for all and therefore .\n\u220e\nWe now define for each field the basis . The functions in this\nbasis are intended to be Boolean analogues of addition and multiplication. Let\n be finite, be a disjoint union of\nnon-empty finite sets, and . Formally, we define for any the functions and as follows: and . Both and\n are partition symmetric. 
Let be the set of all\nfunctions and .\nWe aim to prove Theorem 11 ###reference_1### by first defining for a\ngiven -symmetric arithmetic circuit a corresponding -symmetric Boolean\ncircuit over a partition symmetric basis. To ensure unambiguous evaluation, the\ncircuit must include for each gate labelled by a partition symmetric function a\ncorresponding partition on its children. Let be a circuit with variables \nand let be a gate in labelled by a partition symmetric function , where is a disjoint union\nof finite non-empty sets. We associate with a bijection . We evaluate for an input as follows. For we\nlet be defined such that for\nall . Let .\nWe associate with each a finite set \nsuch that for any assignment of - values to the inputs, , we have . This can be defined by\ninduction on the structure of : If is an input gate, ; and if is an -gate for with children we let .\nLet be the output gate of . If let be the\ncircuit consisting of a single gate labelled by and if let consist of a single gate labelled by . Suppose that\nneither of these two cases hold.\nWe now construct a -circuit from by replacing\neach internal gate in with a family of gates for such that if, and only if, . Each is labelled by a function of the form or ,\ndepending on if is an addition or multiplication gate. We also add a\nsingle output gate in that has as children exactly those gates \nwhere . We define from recursively as follows.\nLet .\nIf is a non-constant input gate in let be an input\ngate in labelled by the same variable as and let be a\nNOT-gate with child .\nIf is a constant gate in labelled by some field element \nlet be a constant gate in labelled by .\nSuppose is an internal gate. Let .\nFor let . Let . For each let be a gate in \nsuch that if is an addition gate or multiplication gate then is\nlabelled by or , respectively. The labelling\nfunction is defined for such\nthat if then .\nWe add one final OR-gate to form with .\nWe now show that is a -symmetric circuit. 
Let and \nbe an automorphism of extending . Let be\ndefined such that for each gate , \nand for the output gate , . It can be verified by induction\nthat is an automorphism of extending .\nWe now show that . It suffices to prove that for and that if, and only if, . The forward direction follows from the above argument\nestablishing that is -symmetric. Let and \nand suppose . For each gate pick some such that if or then and for all , if then . Let be an\nautomorphism of such that . Let \nbe defined for such that . We now\nshow that is an automorphism of , and so . Since preserves the labelling on the gates in ,\na simple induction on the depth of the gate in the circuit shows\nthat for all , and so . Let\n and suppose . Then , and so and . It\nfollows that is injective, and so bijective. Let . Then\n. The first and last equivalences follow from the\nconstruction of the circuit. The remaining conditions for to be an\nautomorphism can be easily verified.\nLet . We now show by induction that for all \nand , if, and only if, . Let . If is an input gate then the claim holds trivially. Suppose \nis an internal gate and let . Suppose is an addition gate. Then\n is labelled by the function where , for , , and\n. Then\nA similar argument suffices if is a multiplication gate. It follows that\n if, and only if, there exists such that if, and only if, .\nWe define from by replacing each internal gate labelled\nby some with the rigid strictly -symmetric threshold\ncircuit computing defined in Lemma 12 ###reference_2###.\n computes the same function as . We now argue that . Suppose that some gate in corresponding to a\ngate in is mapped by an automorphism of to a gate\n in corresponding to in . Since each\n has a unique OR gate, it must be the case that the OR gate in\n then maps to the OR gate in and so we have an\nisomorphism between and . 
The fact that is\nrigid and strictly partition\nsymmetric ensures that the isomorphism respects the partition on the\ninput and so the circuits compute the same function, i.e. .\nWe can conclude that the only\nautomorphisms of are those that are obtained from automorphisms\nof . Thus, .\n\u220e"
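The key fact used in the proof of Lemma 12 — that a partition symmetric function depends only on the number of 1-inputs in each part — can be modelled directly (an illustrative sketch; the names `parts` and `accept` are ours):

```python
def partition_symmetric(parts, accept):
    """Build a Boolean function on inputs indexed by the union of `parts`
    (disjoint index lists) that outputs 1 exactly when the tuple of 1-counts,
    one count per part, lies in the finite set `accept` (the set S in the
    proof sketch)."""
    def f(x):
        counts = tuple(sum(x[i] for i in part) for part in parts)
        return int(counts in accept)
    return f
```

Permuting inputs within any single part leaves every count, and hence the output, unchanged — which is the invariance the threshold-circuit construction exploits.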
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Supports and Counting-Width",
+ "text": "Lower bounds that have been established for symmetric Boolean circuits are based on showing lower bounds on supports in such circuits. In this section, we review the connection between the orbit size of circuits, the size of supports and the counting width of graph parameters computed by such circuits. We improve on the known connection between support size and orbit size to show that it can be used to obtain exponential lower bounds. We begin by reviewing the definition of supports.\nLet be a rigid -symmetric circuit with variables . We say a set is a support of a gate if\n.\nLet be the minimum size of a support of a gate . Let be\nthe maximum size of for a gate in . We refer to as the support size of .\nUpper bounds on the orbit size of a square symmetric circuit yield upper bounds on its support size. Indeed, it was shown in [2 ###reference_2###, Theorem 4] that circuit families of size at most have supports of size at most . This result was extended to orbit size for arbitrary positive in [5 ###reference_5###, Theorem 1]. The result there is stated in terms of the\nsize of the circuit rather than its orbit size. However, the\nproof easily yields the bound for orbit size. These results\nimmediately yield that polynomial-size families of symmetric circuits\nhave support size. It also implies that a linear\nlower bound on support size yields a lower bound of\n on orbit size. It was this relationship\nthat was used to obtain lower bounds on the size of symmetirc circuits\nfor the permanent in the early version of this paper [14 ###reference_14###].\nHere we improve the lower bound by showing that a linear lower bound\non support size implies an exponential lower bound on orbit size, in\nTheorem 15 ###reference_5### below. First we recall the\nfollowing theorem.\nLet be a rigid square symmetric Boolean circuit with order . 
For\nevery if the maximum size of an orbit in is\nbounded by then each gate in has a support of size less\nthan .\nTheorem 14 ###reference_4### should be understood as a\nrestatement of [16 ###reference_16###, Theorem 4.10] using the language of\nthis paper. In [16 ###reference_16###] we dealt with a more general notion\nof circuits where individual gates could be labelled by functions that\nare not fully symmetric. What are called circuits with\ninjective labels and unique extensions in that paper,\nrestricted to the circuits we consider here, are exactly the rigid circuits.\nWe now extract from\nTheorem 14 ###reference_4### an asymptotic relationship between\norbits and supports.\nLet be a family of rigid square symmetric Boolean circuits over the threshold basis. If\n then .\nLet be the least value such that . By the assumption that , we have that is . Indeed, otherwise there is a constant with , such that for infinitely many . And since for all it follows that . Since is the least value such that , it follows that for infinitely many , contradicting the assumption that .\nFrom it follows that for all large enough , and so, by Theorem 14 ###reference_4###, and therefore as required.\n\u220e\nWe now use the connection between support size and counting width\nestablished in [2 ###reference_2###]. Indeed, Theorem 6\nof [2 ###reference_2###] asserts that a query on relational structures\n(e.g. a graph property) is decidable by a family of square symmetric\ncircuits with polynomial orbit size if, and only if, it is definable\nin , an infinitary logic with counting\nquantifiers. It is known that definability in this logic is the same\nas having bounded counting width. Moreover, the proof\nof [2 ###reference_2###, Theorem 6] establishes this by showing that a\ncircuit of support size translates into a formula with variables. Thus, if a class of\ngraphs is decidable by a family of symmetric circuits with supports of size at most then has\ncounting width . 
This, along with\nTheorem 15 ###reference_5###, immediately yields the following.\nLet be a class of graphs decidable by a family of square\nsymmetric Boolean circuits with threshold gates and with orbit size , then\n has counting width .\nThe statement of Theorem 16 ###reference_6### does not make mention of the rigidity condition. This suffices, as from [2 ###reference_2###, Lemma 7] we have that any symmetric Boolean circuit over the threshold basis may be converted into an equivalent rigid symmetric circuit with only a linear increase in size. It is easily seen from the proof of that lemma that the conversion does not increase orbit size.\nFor a field and a graph parameter with values in , we say that is computed by a family of -arithmetic circuits if the inputs to are labelled by the variables for and, given the adjacency matrix of a graph on its inputs, computes .\nIf a graph parameter is computed by a square symmetric family of arithmetic circuits with orbit size , then the counting width of is .\nLet be the counting width of . Then, by definition, we can find for each a pair of graphs and with at most vertices such that\n but . Let . Then, by Theorem 11 ###reference_1### there is a family of\nsquare symmetric circuits with threshold gates of orbit size \nthat decides for a graph \nwhether . It follows from Theorem 16 ###reference_6### that\nthe counting width of this decision problem is . Since the counting\nwidth of this decision problem is, by choice of , , it follows that\n.\n\u220e"
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "A Lower-Bound for the Permanent",
+ "text": "In this section we establish exponential lower bounds on the size of symmetric\narithmetic circuits for the permanent. We state the result for square\nsymmetric arithmetic circuits over fields of characteristic zero in\nSection 7.1 ###reference_### and show how it can be derived from a\nlower bound on the counting width of the number of perfect matchings.\nThe bulk of the section is the construction in\nSection 7.2 ###reference_### establishing this counting width lower\nbound. In Section 7.3 ###reference_###, we explain how the\nargument extends to fields of positive characteristic other than two,\nbut at the expense of making the stronger requirement that the\ncircuits are matrix symmetric. Finally, in\nSection 7.4 ###reference_### we make a comparison of our lower bounds\nwith lower bounds on equivariant determinantal representations of the permanent."
+ },
+ {
+ "section_id": "7.1",
+ "parent_section_id": "7",
+ "section_name": "Characteristic Zero",
+ "text": "There is no family of square symmetric arithmetic circuits over any\nfield of\ncharacteristic of orbit size computing .\nOur proof of this result establishes something stronger. We actually show that\nthere is no family of symmetric arithmetic circuits of orbit size \nthat computes the function for matrices .\nClearly, a circuit that computes the polynomial also computes this\nfunction. Theorem 18 ###reference_8### is proved by showing lower bounds on the\ncounting widths of functions which determine the number of perfect matchings in\na bipartite graph.\nFor a graph let be the number of perfect matchings in . Our construction establishes a linear lower bound on the counting width of . Indeed, it also shows a linear lower bound on the counting width of \nfor all odd values .\nThus, we aim to prove the following.\nThere is, for each , a pair of balanced bipartite graphs and\n with vertices, such that , and for some .\nBefore giving the proof of Theorem 19 ###reference_9### we show how\nTheorem 18 ###reference_8### now follows.\nBy Theorem 19 ###reference_9###, we have, for each , a pair of graphs and \nwith vertices such that and and hence . Thus, the counting width of is .\nSuppose that there is a family of square symmetric arithmetic circuits over a\nfield of characteristic with orbit size computing\n. Then, since the permanent of the adjacency matrix of a bipartite graph is exactly , it follows from\nCorollary 17 ###reference_7### that the counting width of the\n is , giving a contradiction.\n\u220e\nIt is worth noting why we consider the parameter rather than itself in the proof above. The proof of Theorem 16 ###reference_6###, relying on [2 ###reference_2###, Theorem 6] relates the counting witdth of a class to the size of supports in symmetric circuits deciding . Specifically, this is proved for circuits whose input is the adjacency matrix of a graph and which are symmetric with respect to permutations of the vertices of the graphs. 
This is why we need to take the permanent of the adjacency matrix, rather than the biadjacency matrix of the graph . We consider this point in more detail in Section 7.3 ###reference_### when we consider lower bounds in fields of positive characteristic."
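The link exploited above between permanents and perfect matchings can be checked by brute force on small instances. The sketch below counts perfect matchings of a balanced bipartite graph from its n x n 0/1 biadjacency matrix (illustrative, exponential-time code, not the paper's method):

```python
from itertools import permutations

def count_perfect_matchings(biadj):
    """Each permutation p selects the candidate matching {(i, p(i))}; it is a
    perfect matching exactly when every selected edge is present, so the count
    equals the permanent of the 0/1 biadjacency matrix."""
    n = len(biadj)
    return sum(all(biadj[i][p[i]] for i in range(n))
               for p in permutations(range(n)))
```

For example, the complete bipartite graph on 3 + 3 vertices has 3! = 6 perfect matchings.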
+ },
+ {
+ "section_id": "7.2",
+ "parent_section_id": "7",
+ "section_name": "Construction",
+ "text": "The construction to prove Theorem 19 ###reference_9### is an adaptation of a\nstandard construction by Cai, F\u00fcrer and Immerman [9 ###reference_9###] which gives\nnon-isomorphic graphs and with for arbitrary (see\nalso [12 ###reference_12###]). We tweak it somewhat to ensure that both graphs have perfect\nmatchings (indeed, they are both balanced bipartite graphs). The main innovation\nis in the analysis of the number of perfect matchings the graphs contain.\nIn what follows, is always a 3-regular 2-connected graph. From this,\nwe first define a graph . The vertex set of contains, for each edge\n, two vertices that we denote and . For each vertex \nwith incident edges and , contains five vertices. One of these\nwe call the balance vertex and denote . The other four are called\ninner vertices and there is one , for each subset of even size. For each , the neighbours of are exactly\nthe four vertices of the form . Moreover, for each ,\n contains the edge if and the edge \notherwise. There are no other edges in .\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### The construction of from essentially replaces each vertex with\nincident edges and with the gadget depicted in\nFigure 1 ###reference_###, where the dashed lines indicate edges whose endpoints\nare in other gadgets. The vertices for each are\nshared with neighbouring gadgets.\nFor any fixed vertex with incident edges , the graph \nis obtained by modifying the construction of so that, for the one vertex\n, the gadget contains inner vertices for subsets of odd size. Again, for each , contains the\nedge if and the edge otherwise. Equivalently, we could describe this by saying that in this gadget, we interchange the roles of and .\nIf we remove the balance vertices , the graphs and are\nessentially the Cai-F\u00fcrer-Immerman (CFI) graphs associated with . 
The\nbalance vertex is adjacent to all the inner vertices associated with \nand so does not alter the automorphism structure of (or ) at\nall. Nor do these vertices alter any other essential properties of the CFI\nconstruction. In particular, since is connected, we have the following\nlemma. Though it is standard, we include a proof sketch.\nFor any , and are isomorphic.\nNote that the gadget corresponding to a vertex as in\nFigure 1 ###reference_### admits automorphisms that swap and for any\ntwo edges incident on . Now, let be a simple\npath from to in . We obtain an isomorphism from to\n by interchanging and for all edges on this path, and\nextending this to the induced automorphisms of the gadgets corresponding to\n.\n\u220e\nWith this in mind, we refer simply to the graph to mean a graph\n for some fixed , and we refer to as the special vertex of .\nBy known properties of the CFI construction, we also have the following\n(see [12 ###reference_12###, Theorem 3]).\nIf the treewidth of is greater than , then .\nThe purpose of the balance vertices is to change the structure of the perfect\nmatchings. Indeed, if we consider the subgraph of that\nexcludes the balance vertices, it is easily seen that this contains no perfect\nmatchings. It is a bipartite graph where one part contains the inner\nvertices and the other part contains the edge vertices and so no\nperfect matching is possible. But, is a bipartite graph where in one part\nwe have the inner vertices and in the other the edge vertices\nalong with the balance vertices. In short, this is a -regular bipartite\ngraph and so contains perfect matchings. We next analyse the structure of the\nset of such perfect matchings. In particular, we show that and \ncontain different numbers of perfect matchings.\nIn the sequel, we write to denote either one of the graphs or\n, to denote its vertices and to denote its edges. We\ncontinue to use and for the vertices and edges of . 
Also, for each , we write to denote the set of four inner vertices in \nassociated with .\nLet be a perfect matching in . For each and incident on , we define the projection of on to\nbe the value in which is the number of edges between \nand that are included in . These satisfy the following equations:\nThe first of these holds because must include exactly one edge incident on\neach of and . The second holds because must include an edge\nbetween and one vertex of . Thus, the three remaining vertices in\n must be matched with vertices among .\nOne solution to the set of equations is obtained by taking the constant\nprojection for all such pairs . Say that a matching is\nuniform if everywhere and non-uniform otherwise.\nThe number of non-uniform matchings in is the same as in .\nIt suffices to prove that for any non-constant projection , the number of\nmatchings with is the same for both and . For\nthen, taking the sum over all possible projections gives the result. So, let\n be a non-constant projection. Then, for some edge , we\nhave and . Then, let and be the\nsubgraphs of and respectively obtained by removing the edges\nbetween and . It is clear that any matching in \nwith is also a perfect matching in , and similarly for\n. However, and are isomorphic. This follows by an\nargument analogous to the proof of Lemma 20 ###reference_0###. Since is\n2-connected, there is a path from to the special vertex that does\nnot involve the edge . We can then define an isomorphism from to\n by mapping to , for each edge on the path , mapping\n to and extending this using the induced automorphisms of the\ngadgets corresponding to . We conclude that the numbers of\nsuch matchings are the same for both.\n\u220e\nNow, we aim to show that the number of uniform matchings of is different\nto that of . 
For this, it is useful to first analyse the orientations of\nthe underlying graph .\nAn orientation of is a directed graph obtained from by assigning\nto each edge a direction, either or . There are\nexactly distinct orientations of . We say that a vertex \nis odd with respect to an orientation of if it has an odd\nnumber of incoming directed edges and even otherwise. For an orientation\n of , we write for the set of its odd vertices. We\nsay that the orientation is odd if is odd,\nand we say it is even otherwise.\nIf is even, then all orientations of are even. If is odd,\nthen all orientations of are odd.\nNote that since is -regular, , so is always even.\nMoreover, is even if, and only if, is. For an orientation\n, let denote the number of edges incoming to the vertex\n. Then, . But, .\n\u220e\nThus, we say that a graph is odd if is odd, and hence all\norientations of are odd, and is even if is even and hence\nall orientations of are even.\nIf is even then for every set with even,\nthere is an orientation of with . Similarly\nif is odd, then for every set with odd,\nthere is an orientation of with .\nIt suffices to show, for any set and any pair of vertices , if there is an orientation of with ,\nthen there is also an orientation with . Now, consider any simple path from to in and\nlet be the orientation obtained from by reversing the\ndirection of every edge on this path.\n\u220e\nIndeed, we can say more.\nFor every set with , there are exactly\n distinct orientations with .\nLet be the incidence matrix of the graph . This defines a\nlinear transformation from the vector space to .\nThe additive group of has a natural action on the orientations\nof : for a vector , and an orientation ,\ndefine to be the orientation obtained from by changing\nthe orientation of each edge with . Indeed, fixing one\nparticular orientation , the action generates all orientations and\ngives a bijective correspondence between the vectors in and the\norientations of . 
Similarly, the additive group of has a\nnatural action on the powerset of : for a vector \nand a set , let be the set . Again, for any fixed set , this action generates all\nsubsets of and gives a bijection between and the powerset\nof .\nThen, it can be seen that . Indeed,\nif is a vertex with incident edges , then . In other words, just in case the\ndirection of an odd number of edges incident on is flipped by . Thus,\nthe set of vertices are exactly the ones that\nchange from being odd to even or vice versa under the action of , i.e.\n for any\norientation .\nFixing a particular orientation , the action of \ngenerates all orientations , and maps this to the collection of\nall sets . Then, by\nLemmas 23 ###reference_3### and 24 ###reference_4###, the image\nof consists of exactly the set of vectors with an even number of s.\nHence, the image of has dimension and so its kernel has size\n. Since , this is . By\nlinearity, the pre-image of any vector in the image of has exactly\nthis size. Thus, for each even size set , there are exactly\n vectors with .\n\u220e\nAny uniform perfect matching of induces an orientation of , which we\ndenote as follows: any edge is oriented from \nto in if contains an edge between and a vertex in\n and an edge between and a vertex in .\nFurthermore, every orientation arises from some perfect matching. To see this,\nconsider again the gadget in Figure 1 ###reference_###. This has eight subgraphs\ninduced by taking the vertices , together with exactly one\nvertex from each of the sets , and . We\nclaim that each of these eight subgraphs contains a perfect matching. Indeed, it\nsuffices to verify this for the two cases and as the other\nsix are obtained from these by automorphisms of the gadget. 
In what follows, we\nalso write and for the subgraphs of the gadget in\nFigure 1 ###reference_### induced by these sets.\nhas exactly four perfect matchings:\nhas exactly two perfect matchings:\nHence, for any orientation , we get a matching with\n by choosing one matching from each gadget. To be precise,\nfor each vertex , define the relevant subgraph of at to\nbe the subgraph induced by along with the vertices for\neach edge incoming at in and for each edge outgoing\nat in . In , the relevant subgraph at is isomorphic to\n if is even in and it is isomorphic to if is odd in\n. The same is true for all vertices in , apart from the special\nvertex . For this one, the relevant subgraph is isomorphic to the induced subgraph on if is\nodd and to if is even. In either case, we get a perfect matching \nwith by independently choosing exactly one matching in\neach relevant subgraph. There are such choices when the relevant subgraph is\nlike and choices when it is like .\nIt follows that for any orientation of , the number of uniform\nperfect matchings of with is\n. The number of uniform perfect\nmatchings in depends on whether the special vertex is odd in\n or not. If it is, the number is\n otherwise it is\n. Thus, if we denote the number\nof uniform perfect matchings in by , then we have\nwhere the sum is over all orientations of . Then, by Lemma 25 ###reference_5###,\nBy the same token,\nWe aim to show that and are different. Let \ndenote the number \nand denote the number .\nFor all , .\nWe have\n\u220e\nBy a standard expander graph construction (e.g. [1 ###reference_1###]), for any , we\ncan find a -regular graph with treewidth at least and vertices. Then and both have \nvertices and by Lemma 21 ###reference_1### we have . Moreover,\n and have the same number of non-uniform perfect matchings by\nLemma 22 ###reference_2###. The number of uniform matchings is in\none case and in the other (which is which depends on whether is\neven or odd). 
Either way, , which is a\npower of as required."
106
+ },
107
+ {
108
+ "section_id": "7.3",
109
+ "parent_section_id": "7",
110
+ "section_name": "Positive Characteristics",
111
+ "text": "Theorem 18 ###reference_8### gives a lower bound for square-symmetric circuits computing the permanent in characteristic zero, which contrasts neatly with the upper bound for the determinant established in Theorem 10 ###reference_0###. We now briefly sketch the lower bounds that our method yields for computing the permanent in positive characteristic. The short statement is that we can get exponential lower bounds for all odd characteristics, but only with respect to a more stringent symmetry requirement\u2014namely matrix symmetry.\nTheorem 19 ###reference_9### establishes a lower bound on the counting width of \u2014the number of perfect matchings in a graph. The theorem also establishes a lower bound on , the number of perfect matchings modulo for any odd value of . This is because for the graphs and obtained from the theorem, we have and so for any odd .\nFor any odd , the counting width of the number of perfect matchings modulo of a bipartitioned graph with vertices is .\nHowever, we do not have a lower bound on the counting width of . It is quite possible that, for the graphs and of Theorem 19 ###reference_9###, we have . This is the reason why Theorem 18 ###reference_8### is only formulated for characteristic zero.\nThe reason that we have to use in the proof of\nTheorem 18 ###reference_8### has to do with our use of the\nconnection between the counting width of a class of\nrelational structures and the orbit size of a circuit family deciding\nmembership in as established in [2 ###reference_2###],\nwhich we use as a black box to get Theorem 16 ###reference_6###. This\nconnection between counting width and orbit size of circuits is\nestablished in [2 ###reference_2###] specifically for circuits taking\nrelational structures as input and which are symmetric under the\naction of permutations of the elements. 
In the context of graphs, this means it applies to circuits taking the adjacency matrix of a graph as input and symmetric under all permutations of . For such circuits, it establishes that if and are two graphs on vertex set with , then their adjacency matrices cannot be distinguished by a circuit of small, i.e. orbit size. From this, we cannot directly obtain lower bounds for circuits that take the biadjacency matrix of a graph as input. To do this, we have to look inside the black box of Theorem 16 ###reference_6###, relating counting width to circuits.\nConsider a bipartite graph with bipartition ,\nwhere and both have elements. If we identify both sets\n and with the set (equivalently, if we fix a bijection\nbetween and ), then the biadjacency matrix of\n can be seen as the adjacency matrix of a directed graph \non vertex set with an arc whenever there is an edge\nbetween and . It then follows directly from the\nresults of [2 ###reference_2###] that if we have a pair of bipartite graphs and with , then the biadjacency matrices of and cannot be distinguished by small symmetric circuits. Unfortunately, for the graphs and of Theorem 19 ###reference_9###, we are not able to prove that .\nThe proof that a pair of structures are equivalent with respect to is often given as a Duplicator winning strategy in the -pebble bijection game (see [12 ###reference_12###]). The relation between such winning strategies and lower bounds for symmetric circuits is made explicit in [11 ###reference_11###]. This has been greatly expanded to a method for proving lower bounds for -symmetric circuits for arbitrary groups in [15 ###reference_15###]. What this means in our context is that to prove that the biadjacency matrices of the bipartite graphs and are not distinguished by small square-symmetric circuits, we need to show a Duplicator winning strategy that respects a fixed bijection between the two parts of the bipartition in and . We are not able to do this. 
What we do know is that the Duplicator winning strategy that shows does respect the bipartition itself. In other words, we can expand the graphs and with colours for the sets and (the two parts of the bipartition) and these coloured graphs are still equivalent with respect to . This is sufficient to establish that their biadjacency matrices are not distinguished by matrix-symmetric circuits of size . Since for a bipartite graph , the permanent of its biadjacency matrix , over a field of characteristic is exactly , this allows us to establish our lower bound.\nThere is no family of matrix-symmetric arithmetic circuits over any field of\nodd characteristic of orbit size computing\n."
112
+ },
113
+ {
114
+ "section_id": "7.4",
115
+ "parent_section_id": "7",
116
+ "section_name": "Equivariant Determinantal Representations",
117
+ "text": "Lower bounds for computing the permanent in symmetric models of computation have previously been established, notably in the work of Landsberg and Ressayre [24 ###reference_24###]. They establish an exponential lower bound on the equivariant determinantal complexity of the permanent, specifically over the complex field . In this section we make a brief comparison of our results with theirs.\nThe determinantal complexity (DC) of a polynomial is defined to be the least such that there is an matrix with entries that are affine linear forms in such that . Such a matrix is called a determinantal representation of . It is known [30 ###reference_30###] that every polynomial in has DC that is at most quasi-polynomial. It follows that an exponential lower bound on the DC of the permanent would show that it is not in , separating from . Indeed, such a bound would\nshow that circuits computing must have size at least for some positive . On the other hand, an exponential lower bound on the circuit complexity of the permanent would also yield a similar lower bound for its determinantal complexity. To see this, note that using an family of circuits for computing and an determinantal representation of the permanent, we get an family of circuits computing . This is obtained by taking the circuit computing the determinant and attaching to its inputs the circuits (of at most size) computing the affine linear forms that form the entries of . Hence, a lower bound on the circuit complexity of the permanent gives us a lower bound on its determinantal complexity.\nLandsberg and Ressayre establish exponential lower bounds on any equivariant determinantal representation of the permanent, that is, one that preserves all the symmetries of the permanent function. This includes not just the\npermutations on entries that we consider, but the entire projective symmetry\ngroup. Our aim is to see how this relates to our lower bounds on symmetric circuit complexity. 
Unfortunately, the relationship is not straightforward in either direction because of the different notions of symmetry used and the symmetry-breaking nature of the translation from circuits to determinantal representations. To make this explicit, we first introduce some definitions. These are simplified from (and so less general than) those given by Landsberg and Ressayre but suffice to show that our results are incomparable with theirs.\nFormally, consider a homogeneous polynomial . Let\n denote the group of invertible linear maps on the vector space\n. In what follows, we identify with the set\nof linear forms in the variables , so we can write for\n and a linear form in . We extend the notation to\naffine linear forms by the convention that when for and a linear form.\nFor a map , we write to mean the polynomial obtained from by replacing each variable by the linear form . We now define the symmetry group of to be the group of linear maps such that . In particular, when we can think of the elements of as matrices and the symmetry group of can be identified with the group . Here the action of takes to and the action of the non-trivial element in takes to .\nLet be an determinantal representation of a polynomial . For , write to be the matrix obtained from by replacing each entry by (where we see each affine linear form as a polynomial in ). We say that is an equivariant determinantal representation of if for each there is a such that . In other words, all symmetries of extend to symmetries of . Landsberg and Ressayre prove that any equivariant determinantal representation of must have size .\nWe could ask if this lower bound yields a lower bound for symmetric circuits just as an exponential lower bound on the determinantal complexity of the permanent yields a lower bound for its unrestricted circuit complexity. This would require a translation, along the lines of Valiant [30 ###reference_30###], from symmetric circuits to equivariant determinantal representations. 
There is little reason to believe that we could have such a translation. For one thing, the symmetry requirement for square-symmetric circuits is only that they are invariant under the natural action of on , and this is a rather small subgroup of . Secondly, Valiant\u2019s translation of circuits to determinantal representations is not symmetry preserving. Thus, the representations obtained from this translation applied to square-symmetric circuits are not even guaranteed to be equivariant with respect to the action of on , let alone that of .\nIn the other direction, we could ask if our lower bounds for symmetric circuits for the permanent yield any lower bounds for equivariant determinantal complexity, especially in combination with the polynomial upper bound for transpose-symmetric circuits for the determinant proved in Theorem 10 ###reference_0###. Indeed, given a circuit of size computing and an determinantal representation of , a polynomial on variables, we obtain a circuit computing of size , where the second term represents the size of the subcircuits required to compute the affine expressions making up the entries of . If an equivariant determinantal expression translates to a symmetric circuit, then a symmetric circuit lower bound can be translated to a lower bound on equivariant determinantal complexity. Since the symmetry conditions for circuits are less restrictive, this seems plausible, but there is a mismatch.\nConsider the case when is , and is the square-symmetric circuit for obtained from Theorem 10 ###reference_0###. For the circuit to be square symmetric, we require that the action of on the variables extends to automorphisms of the circuit. Since this action gives a subgroup of acting on , we know that for each there is a such that . If this map was itself the action of a permutation in on the rows and columns of , the square symmetry of would guarantee that was also square-symmetric. However, the equivariance of does not enforce this. 
So, to state the lower bound on determinantal complexity that we can get from our results, we define an alternative notion of equivariance.\nSay that is permutation equivariant if for each , there is a permutation matrix such that\n. Note that this notion is incomparable with\nequivariance of . We have relaxed the requirement by only asking\nthat permutations in extend to\nsymmetries of , but we have made it more stringent by asking that\nthe symmetry they extend to is itself a permutation matrix in\n. Here, we identify the permutation matrix \nwith the element in as this yields\nthe desired permutation action.\nWe can now state the following corollary of our results.\nAny permutation equivariant determinantal representation of has size ."
118
+ },
119
+ {
120
+ "section_id": "8",
121
+ "parent_section_id": null,
122
+ "section_name": "Concluding Discussion",
123
+ "text": "We have introduced a novel restriction of arithmetic circuits based on a natural\nnotion of symmetry. On this basis, we have shown a fundamental difference\nbetween circuits for the determinant and the permanent. The former is computable using a polynomial-size family of\nsquare symmetric circuits, while the latter requires at least exponential-size\nfamilies of square symmetric circuits for fields of characteristic . The lower bound for the permanent can be extended to fields of odd positive characteristic for matrix-symmetric circuits.\nThere are several ways in which our results could be tightened. The first would\nbe to show the existence of polynomial-size circuits for computing the\ndeterminant over arbitrary fields. Our construction for fields of characteristic\nzero is based on Le Verrier\u2019s method, which does not easily transfer to other\nfields as it relies on division by arbitrarily large integers. There are general\nmethods for simulating such division on small fields, but it is not clear if any\nof them can be carried out symmetrically. Indeed, there are many other efficient\nways of computing a determinant and it seems quite plausible that some method that\nworks on fields of positive characteristic could be implemented symmetrically.\nIt should be noted, however, that Gaussian elimination is not such a method.\nKnown results about the expressive power of fixed-point logic with counting\n(see, e.g. [10 ###reference_10###]) tell us that there is no polynomial-size family of\nsymmetric circuits that can carry out Gaussian elimination. On the other hand,\nwe do know that the determinant, even over finite fields, can be computed by\nexactly such a family of Boolean circuits, as shown by Holm [21 ###reference_21###]. It is\nwhen we restrict to arithmetic circuits, and also require symmetry, that\nthe question is open.\nThere is a corresponding question for the permanent lower bound. 
That is, can the\nlower bound on square symmetric circuits for the permanent be extended to all\nfields of odd positive characteristic? This might be done by adapting our\nconstruction to analyse the counting width of the number of cycle covers of\ngeneral graphs. Another approach would be to adapt our construction and choose\n so that the sum of the numbers of perfect matchings in and \nis a power of two. This would suffice to establish that and also differ by a power of two.\nWe could consider more general symmetries. For example, the determinant has\nother symmetries besides simultaneous row and column permutations. The\nconstruction we use already yields a circuit which is symmetric not only with\nrespect to these but also transposition of rows and columns. However, we could\nconsider a richer group that allowed for arbitrary even permutations of the rows\nand columns. In recent work [15 ###reference_15###] we have been able to show, with this rich\ngroup of symmetries, an exponential lower bound for the determinant. It would be interesting to identify the exact boundary on the spectrum of symmetries between the tractability and the intractability of the determinant.\nFinally, it is reasonable to think that even just considering\nsquare-symmetric circuits, there are polynomials in which do\nnot admit polynomial-size symmetric arithmetic circuits, by analogy with the\ncase of Boolean circuits. Can we give an explicit example of such a polynomial?"
124
+ }
125
+ ],
126
+ "appendix": [],
127
+ "tables": {},
128
+ "image_paths": {
129
+ "1(a)": {
130
+ "figure_path": "2002.06451v3_figure_1(a).png",
131
+ "caption": "Figure 1: A gadget in X\u2062(\u0393)\ud835\udc4b\u0393X(\\Gamma)italic_X ( roman_\u0393 ) corresponding to vertex v\ud835\udc63vitalic_v with incident edges\nf,g,h\ud835\udc53\ud835\udc54\u210ef,g,hitalic_f , italic_g , italic_h",
132
+ "url": "http://arxiv.org/html/2002.06451v3/x1.png"
133
+ },
134
+ "1(b)": {
135
+ "figure_path": "2002.06451v3_figure_1(b).png",
136
+ "caption": "Figure 1: A gadget in X\u2062(\u0393)\ud835\udc4b\u0393X(\\Gamma)italic_X ( roman_\u0393 ) corresponding to vertex v\ud835\udc63vitalic_v with incident edges\nf,g,h\ud835\udc53\ud835\udc54\u210ef,g,hitalic_f , italic_g , italic_h",
137
+ "url": "http://arxiv.org/html/2002.06451v3/x2.png"
138
+ },
139
+ "1(c)": {
140
+ "figure_path": "2002.06451v3_figure_1(c).png",
141
+ "caption": "Figure 1: A gadget in X\u2062(\u0393)\ud835\udc4b\u0393X(\\Gamma)italic_X ( roman_\u0393 ) corresponding to vertex v\ud835\udc63vitalic_v with incident edges\nf,g,h\ud835\udc53\ud835\udc54\u210ef,g,hitalic_f , italic_g , italic_h",
142
+ "url": "http://arxiv.org/html/2002.06451v3/x3.png"
143
+ },
144
+ "1(d)": {
145
+ "figure_path": "2002.06451v3_figure_1(d).png",
146
+ "caption": "Figure 1: A gadget in X\u2062(\u0393)\ud835\udc4b\u0393X(\\Gamma)italic_X ( roman_\u0393 ) corresponding to vertex v\ud835\udc63vitalic_v with incident edges\nf,g,h\ud835\udc53\ud835\udc54\u210ef,g,hitalic_f , italic_g , italic_h",
147
+ "url": "http://arxiv.org/html/2002.06451v3/x4.png"
148
+ },
149
+ "1(e)": {
150
+ "figure_path": "2002.06451v3_figure_1(e).png",
151
+ "caption": "Figure 1: A gadget in X\u2062(\u0393)\ud835\udc4b\u0393X(\\Gamma)italic_X ( roman_\u0393 ) corresponding to vertex v\ud835\udc63vitalic_v with incident edges\nf,g,h\ud835\udc53\ud835\udc54\u210ef,g,hitalic_f , italic_g , italic_h",
152
+ "url": "http://arxiv.org/html/2002.06451v3/x5.png"
153
+ }
154
+ },
155
+ "validation": true,
156
+ "references": [
157
+ {
158
+ "1": {
159
+ "title": "Recursive construction for 3-regular expanders.",
160
+ "author": "M. Ajtai.",
161
+ "venue": "Combinatorica, 14:379\u2013416, 1994.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "2": {
167
+ "title": "On symmetric circuits and fixed-point logics.",
168
+ "author": "M. Anderson and A. Dawar.",
169
+ "venue": "Theory Comput. Syst., 60(3):521\u2013551, 2017.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "3": {
175
+ "title": "Solving linear programs without breaking abstractions.",
176
+ "author": "M. Anderson, A. Dawar, and B. Holm.",
177
+ "venue": "J. ACM, 62, 2015.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "4": {
183
+ "title": "On Weisfeiler-Leman invariance: Subgraph counts and related graph\nproperties.",
184
+ "author": "V. Arvind, F. Fuhlbr\u00fcck, J. K\u00f6bler, and O. Verbitsky.",
185
+ "venue": "In Fundamentals of Computation Theory - 22nd International\nSymposium, FCT 2019, pages 111\u2013125, 2019.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "5": {
191
+ "title": "On the power of symmetric linear programs.",
192
+ "author": "A. Atserias, A. Dawar, and J. Ochremiak.",
193
+ "venue": "In 34th Annual ACM/IEEE Symposium on Logic in Computer\nScience, LICS, pages 1\u201313, 2019.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "6": {
199
+ "title": "The complexity of partial derivatives.",
200
+ "author": "W. Baur and V. Strassen.",
201
+ "venue": "Theor. Comput. Sci., 22, 1983.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "7": {
207
+ "title": "On the Complexity of Symmetric Polynomials.",
208
+ "author": "M. Bl\u00e4ser and G. Jindal.",
209
+ "venue": "In Avrim Blum, editor, 10th Innovations in Theoretical Computer\nScience Conference (ITCS 2019), volume 124 of Leibniz International\nProceedings in Informatics (LIPIcs), pages 47:1\u201347:14, Dagstuhl, Germany,\n2018. Schloss Dagstuhl\u2013Leibniz-Zentrum f\u00fcr Informatik.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "8": {
215
+ "title": "On polynomial time computation over unordered structures.",
216
+ "author": "A. Blass, Y. Gurevich, and S. Shelah.",
217
+ "venue": "Journal of Symbolic Logic, 67(3):1093\u20131125, 2002.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "9": {
223
+ "title": "An optimal lower bound on the number of variables for graph\nidentification.",
224
+ "author": "J-Y. Cai, M. F\u00fcrer, and N. Immerman.",
225
+ "venue": "Combinatorica, 12(4):389\u2013410, 1992.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "10": {
231
+ "title": "The nature and power of fixed-point logic with counting.",
232
+ "author": "A. Dawar.",
233
+ "venue": "ACM SIGLOG News, 2(1):8\u201321, 2015.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "11": {
239
+ "title": "On symmetric and choiceless computation.",
240
+ "author": "A. Dawar.",
241
+ "venue": "In Mohammad Taghi Hajiaghayi and Mohammad Reza Mousavi, editors, Topics in Theoretical Computer Science, pages 23\u201329. Springer International\nPublishing, 2016.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "12": {
247
+ "title": "The power of counting logics on restricted classes of finite\nstructures.",
248
+ "author": "A. Dawar and D. Richerby.",
249
+ "venue": "In CSL 2007:Computer Science Logic, volume 4646 of LNCS,\npages 84\u201398. Springer, 2007.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "13": {
255
+ "title": "Definability of semidefinite programming and Lasserre lower bounds\nfor CSPs.",
256
+ "author": "A. Dawar and P. Wang.",
257
+ "venue": "In 32nd Annual ACM/IEEE Symposium on Logic in Computer\nScience, LICS, 2017.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "14": {
263
+ "title": "Symmetric arithmetic circuits.",
264
+ "author": "A. Dawar and G. Wilsenach.",
265
+ "venue": "In 47th International Colloquium on Automata, Languages, and\nProgramming, ICALP 2020, Leibniz International Proceedings in Informatics\n(LIPIcs), pages 36:1\u201336:18. Schloss Dagstuhl\u2013Leibniz-Zentrum f\u00fcr\nInformatik, 2020.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "15": {
271
+ "title": "Lower bounds for symmetric circuits for the determinant.",
272
+ "author": "A. Dawar and G. Wilsenach.",
273
+ "venue": "In 13th Innovations in Theoretical Computer Science Conference,\nITCS, volume 215 of LIPIcs, pages 52:1\u201352:22. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2022.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "16": {
279
+ "title": "Symmetric circuits for rank logic.",
280
+ "author": "A. Dawar and G. Wilsenach.",
281
+ "venue": "ACM Trans. Comput. Log., 23:6:1\u20136:35, 2022.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "17": {
287
+ "title": "Definability by constant-depth polynomial-size circuits.",
288
+ "author": "L. Denenberg, Y. Gurevich, and S. Shelah.",
289
+ "venue": "Information and Control, 70:216\u2013240, 1986.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "18": {
295
+ "title": "Functional lower bounds for arithmetic circuits and connections to\nboolean circuit complexity.",
296
+ "author": "M. A. Forbes, M. Kumar, and R. Saptharishi.",
297
+ "venue": "In 31st Conference on Computational Complexity, CCC 2016,\npages 33:1\u201333:19, 2016.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "19": {
303
+ "title": "An exponential lower bound for depth 3 arithmetic circuits.",
304
+ "author": "D. Grigoriev and M. Karpinski.",
305
+ "venue": "In Proceedings of the Thirtieth Annual ACM Symposium on the\nTheory of Computing, pages 577\u2013582, 1998.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "20": {
311
+ "title": "Determinants, permanents and bipartite graphs.",
312
+ "author": "F. Harary.",
313
+ "venue": "Mathematics Magazine, 42:146\u2013148, 1969.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "21": {
319
+ "title": "Descriptive Complexity of Linear Algebra.",
320
+ "author": "B. Holm.",
321
+ "venue": "PhD thesis, University of Cambridge, 2010.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "22": {
327
+ "title": "Some exact complexity results for straight-line computations over\nsemirings.",
328
+ "author": "M. Jerrum and M. Snir.",
329
+ "venue": "J. ACM, 29:874\u2013897, 1982.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "23": {
335
+ "title": "A selection of lower bounds for arithmetic circuits.",
336
+ "author": "N. Kayal and R. Saptharishi.",
337
+ "venue": "In M. Agrawal and V. Arvind, editors, Perspectives in\nComputational Complexity. Birkh\u00e4user Basel, 2014.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "24": {
343
+ "title": "Permanent v. determinant: An exponential lower bound assuming\nsymmetry.",
344
+ "author": "J.M. Landsberg and N. Ressayre.",
345
+ "venue": "In Proc. ACM Conference on Innovations in Theoretical Computer\nScience, pages 29\u201335. ACM, 2016.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "25": {
351
+ "title": "The logic of explicitly presentation-invariant circuits.",
352
+ "author": "M. Otto.",
353
+ "venue": "In Computer Science Logic, 10th International Workshop, CSL\n\u201996, Annual Conference of the EACSL, pages 369\u2013384, 1996.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "26": {
359
+ "title": "Subspace-invariant AC^0 formulas.",
360
+ "author": "B. Rossman.",
361
+ "venue": "Log. Methods Comput. Sci., 15, 2019.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "27": {
367
+ "title": "Depth-3 arithmetic circuits over fields of characteristic zero.",
368
+ "author": "A. Shpilka and A. Wigderson.",
369
+ "venue": "Computational Complexity, 10:1\u201327, 2001.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "28": {
375
+ "title": "Arithmetic circuits: A survey of recent results and open questions.",
376
+ "author": "A. Shpilka and A. Yehudayoff.",
377
+ "venue": "Foundations and Trends in Theoretical Computer Science,\n5(3-4):207\u2013388, 2010.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "29": {
383
+ "title": "Fast parallel computation of polynomials using few processors.",
384
+ "author": "L. Valiant and S. Skyum.",
385
+ "venue": "In Jozef Gruska and Michal Chytil, editors, Mathematical\nFoundations of Computer Science 1981, Lecture Notes in Computer\nScience, pages 132\u2013139, Berlin, Heidelberg, 1981. Springer.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "30": {
391
+ "title": "Completeness classes in algebra.",
392
+ "author": "L. G. Valiant.",
393
+ "venue": "In Proceedings of the 11th Annual ACM Symposium on Theory of\nComputing STOC, pages 249\u2013261, 1979.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "31": {
399
+ "title": "Introduction to Circuit Complexity - A Uniform Approach.",
400
+ "author": "H. Vollmer.",
401
+ "venue": "Texts in Theoretical Computer Science. An EATCS Series. Springer,\n1999.",
402
+ "url": null
403
+ }
404
+ }
405
+ ],
406
+ "url": "http://arxiv.org/html/2002.06451v3"
407
+ }
20240119/2005.04907v2.json ADDED
@@ -0,0 +1,429 @@
1
+ {
2
+ "title": "High-Multiplicity Fair Allocation Using Parametric Integer Linear Programming",
3
+ "abstract": "Using insights from parametric integer linear programming, we\nimprove the work of Bredereck et al. [Proc. ACM EC 2019] on high-multiplicity\nfair allocation. Answering an open question from their work,\nwe prove that the problem of finding envy-free Pareto-efficient\nallocations of indivisible items is fixed-parameter tractable\nwith respect to the combined parameter \u201cnumber of agents\u201d plus \u201cnumber\nof item types.\u201d\nOur central improvement, compared to their result, is to break the condition\nthat the corresponding utility and multiplicity values have\nto be encoded in unary, which is required there. Concretely, we show that, while preserving\nfixed-parameter tractability, these values can be\nencoded in binary. Thus, we substantially expand the range of feasible utility\nand multiplicity values.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Fairly allocating (indivisible) items (Bouveret et al., 2016a ###reference_12###) is a key issue in a world of\nlimited resources, which is, for instance, reflected by multiple application\ncontexts such as distributing food by food banks (Walsh, 2015 ###reference_41###), university\ncourse assignment problems (Budish, 2011 ###reference_18###), or sharing computing\nresources (Ghodsi et al., 2011 ###reference_26###). In recent decades, studying fair allocation issues\nthrough the computational lens or, more generally, applying the computer science\ntoolbox (Walsh, 2021 ###reference_42###) has proved useful in advancing our knowledge of how to deal\nwith finding desirable allocations. Examples include popular tools such as the\nAdjusted Winner Procedure (Brams and Taylor, 1996 ###reference_14###) or the web platform\nspliddit.org (Goldman and Procaccia, 2014 ###reference_27###), to name a few.\nIn this work, we focus on the so-called \u201chigh-multiplicity fair allocation\u201d\nscenario in which various item types come in multiple copies.\nTo understand important facets of our research contribution, let us, however,\nbecome more precise on the studied problem and the most relevant existing\nresults.\nWe consider a set of item types, each coming with the number of\nactual items of this type, and a set of agents who report their non-negative utilities over\neach item type. An allocation of items is an assignment of disjoint sets of the items,\ncalled bundles, to the agents. In our work we first focus on one of the\nmost prominent fairness concepts, which\nis envy-freeness. It considers an allocation as fair\nif there is no agent that would prefer a bundle of any other agent over\nher own one. However, it is trivial to achieve envy-freeness by giving every\nagent an empty bundle. To circumvent this issue, several\n\u201cefficiency\u201d measures of\nallocations have been proposed. 
A very important one, Pareto-efficiency,\nrequires that for an efficient allocation there exists no other allocation that\nis preferred by at least one agent and, at the same time, does not make any\nagent worse off.\nCombining the aforementioned concepts, we end up with so-called\nenvy-free Pareto-efficient allocations, on which we mostly focus in this paper.\nFinding envy-free Pareto-efficient allocations is a computationally\nvery hard problem. For instance, the corresponding decision problem\nis -complete for general utilities (Bouveret and Lang, 2008 ###reference_11###). The hardness holds\neven for (positive) additive utilities (de Keijzer et al., 2009 ###reference_32###)\u2014here, the utility that\nan agent gets from a bundle is a sum of utilities that this agent reports for\nevery item in the bundle. This model, due to its simplicity, is\nfrequently assumed in the scientific social choice\nliterature (Bouveret et al., 2016b ###reference_13###; Brams and Taylor, 1996 ###reference_14###; Rothe, 2015 ###reference_39###) and also forms an important part of\nexperimental studies (Bredereck et al., 2021 ###reference_16###; Dickerson et al., 2014 ###reference_21###). Notably, practically relevant\ntools (like the Adjusted Winner Procedure and the web platform111The\nspliddit.org webpage is currently (April 2023) unavailable. However, a\ngithub repository with the software is available at\nhttps://github.com/jogo279/spliddit ###reference_###. spliddit.org (Goldman and Procaccia, 2014 ###reference_27###)) make use\nof additive utilities too.\nMotivated by the high practical relevance of the problem of finding envy-free\nPareto-efficient allocations assuming additive utilities, Bliem et\nal. Bliem et al. (2016 ###reference_10###) studied its fine-grained computational complexity, providing\nseveral parameterized-tractability results. 
However, they left\nopen\nthe question of whether the subject problem is fixed-parameter tractable with\nrespect to the (combined) parameter \u201cnumber of agents plus number of item types.\u201d222Technically, the open\nquestion was formulated for the parameter ,\nwhere denotes the number of different values in the utility\nfunctions. This parameter can easily be seen to be equivalent to our parameter \nin terms of fixed-parameter tractability.\nNote that Bliem et al. Bliem et al. (2016 ###reference_10###)\nused the variable name for the number of items and showed fixed-parameter\ntractability for this parameter.\nThe question was then answered partially positively (with the restriction of unary-encoded item multiplicities and utilities)\nin the work of Bredereck et al. Bredereck et al. (2019 ###reference_15###)."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related Work",
15
+ "text": "Our work brings together the two worlds of fair allocations and parametric Integer Linear Programs. Hence, we split the discussion of the related work into two thematically organized parts. We note that, due to a flurry of literature dealing with fair allocations, we only focus on the works most relevant to ours.\nBouveret and Lang (2008) were the first to systematically study the computational complexity of computing Pareto-efficient and envy-free allocations of indivisible items. Their findings include \u03a3^p_2-completeness for so-called monotonic dichotomous preferences as well as NP-hardness and polynomial-time solvability for several special cases. Most relevant to our setting with additive utility-based preferences, they showed that even if there are just two agents, or if every agent assigns utility value 0 or 1 to each item, the problem of finding a Pareto-efficient and envy-free allocation remains NP-hard. Moreover, de Keijzer et al. (2009) showed that \u03a3^p_2-completeness even holds for positive additive preferences.\nBliem et al. (2016) analyzed the parameterized complexity, showing that the problem becomes tractable for the parameter \u201cnumber of items\u201d and various special settings but remains intractable for the parameter \u201cnumber of agents.\u201d\nMultiple approaches have been developed to relax fairness concepts in order to circumvent computational intractability as well as the possible non-existence of Pareto-efficient and envy-free allocations. For instance, Lipton et al. 
(2004) considered the concept of envy-freeness up to one good (EF1). Herein, every agent compares her bundle with the bundles of all other agents, and she is envious only if some other bundle, even after removing its (from her perspective) most valuable item, is still better than her own bundle.\nFurther studied concepts include envy-freeness up to any good (EFX) (Caragiannis et al., 2016; Plaut and Roughgarden, 2018), minimum envy (Lipton et al., 2004), group envy-freeness and group Pareto-efficiency (Aleksandrov and Walsh, 2018), and graph envy-freeness (Abebe et al., 2017; Bei et al., 2017; Bredereck et al., 2022; Aziz et al., 2018a). Amanatidis et al. (2018) provide a comparison of approximate and relaxed fairness notions.\nCaragiannis et al. (2016) showed how to compute an allocation that maximizes Nash welfare and thus yields Pareto-efficiency and EF1. Barman et al. (2018) improved this result and developed an algorithm that computes a Pareto-efficient and EF1 allocation in pseudo-polynomial running time (polynomial in the number of agents, the number of items, and the maximum utility). While a round-robin allocation of items can be used to obtain a complete EF1 allocation in polynomial time when all items have positive utilities, Aziz et al. 
(2018b, 2019) have argued that this procedure fails when items may have negative utilities. Leaving open the complexity of computing a Pareto-efficient and EF1 allocation when negative utilities are allowed, they showed that a complete EF1 allocation can still be found in polynomial time even when items with negative utilities are present.\nThe setting of high-multiplicity items (where items come in multiple copies) deserves a separate treatment. Copies of items played an important role in the seminal work of Budish (2011). However, there each agent\u2019s bundle was assumed to contain at most a single copy of a given resource (this follows from the fact that the author was focusing on an assignment problem, like assigning students to courses). Later, Gafni et al. (2021) proposed a framework for studying the existence of EFX allocations in this model. The setting where an agent can obtain more resources of the same type was, to the best of our knowledge, first considered by Bredereck et al. (2019) (on whose work we improve). They establish a theoretical ILP-based framework for computing various types of efficient and fair allocations. The framework was later implemented and tested on real data by Bredereck et al. (2021). Implicitly, the high-multiplicity setting is also present in the work of Eiben et al. (2023). They study the parameterized complexity of finding graph envy-free allocations considering a parameterization (among others) by the number of item types. The high-multiplicity regime has also been reinvented by Gorantla et al. 
(2023) in the context of studying the conditions under which EF1 allocations exist.\nEisenbrand and Shmonin (2008, Theorem 4.2) gave an algorithm that, if the number of variables is fixed, solves a given instance of Parametric ILP (PILP) in polynomial time (we formally define PILP in the Preliminaries). K\u00f6ppe et al. (2010) showed that one can express the negation of bilevel integer programs (a family of certain linear programs) as PILP and used the result of Eisenbrand and Shmonin to obtain polynomial-time solvability of bilevel integer programs in some restricted cases.\nTo the best of our knowledge, Crampton et al. (2019, Corollary 2.2) were the first to give an \u201cinterpretation\u201d of the result of Eisenbrand and Shmonin (2008) in terms of parameterized complexity analysis. More specifically, they showed membership of PILP in the complexity class FPT, provided that the coefficients of the matrix are encoded in unary. Using this result, Crampton et al. (2019) initiated the parameterized study of so-called resiliency problems (such as the Resiliency Closest String problem). Knop et al. (2018) used the interpretation of Crampton et al. (2019) to solve a decade-long-standing open question of FPT-membership of a variant of the Bribery problem in the field of election manipulation. Recently, Bredereck et al. (2019) also used the interpretation of Crampton et al. 
(2019) in the context of fair allocation. More specifically, they showed (Bredereck et al., 2019, Corollary 5) that finding a fair and efficient allocation is fixed-parameter tractable for few agents and few item types. The result holds for numerous different concepts of fairness and efficiency. Yet, their result holds only when the maximum utility value an agent assigns to an item type and the item multiplicities are encoded in unary. As we shall shortly see, we improve upon this result by allowing item multiplicities to be encoded in binary."
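The round-robin procedure mentioned above is simple enough to sketch. The following is a minimal, hypothetical Python rendering for the high-multiplicity setting (item types with copy counts); it illustrates the idea only and is not the exact procedure from the cited works:

```python
def round_robin(utils, mults):
    """Agents take turns; on her turn, an agent takes one copy of her
    most-valued item type that still has copies left.  With positive
    utilities, the resulting complete allocation is EF1."""
    n, k = len(utils), len(mults)
    remaining = list(mults)
    alloc = [[0] * k for _ in range(n)]
    turn = 0
    while any(r > 0 for r in remaining):
        agent = turn % n
        # pick the remaining item type this agent values most
        best = max((t for t in range(k) if remaining[t] > 0),
                   key=lambda t: utils[agent][t])
        alloc[agent][best] += 1
        remaining[best] -= 1
        turn += 1
    return [tuple(bundle) for bundle in alloc]
```

For two agents with opposed valuations over two single-copy item types, each agent simply takes her favorite.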
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Organization",
21
+ "text": "In the following Section 2, we first give the necessary notation and formal preliminaries regarding allocations, parameterized complexity, and parametric integer linear programs. Then, in Section 3, we lay the foundations for proving our main result by presenting a convenient interpretation of Theorem 4.2 from the work of Eisenbrand and Shmonin (2008) (our interpretation is more detailed than the one provided by Crampton et al. (2019)). We proceed with formally stating our result and proving it in Section 4. Later, in Section 5 we discuss how to extend our main result to cover multiple further prominent fairness and efficiency concepts. We conclude in Section 6."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Preliminaries",
27
+ "text": "For a positive integer n, by [n] we denote the set {1, 2, ..., n}. We use boldface letters, like x, to represent vectors. A vector x consisting of n coordinates is said to be in n dimensions or n-dimensional, and we denote its i-th coordinate, i in [n], by x_i. For two vectors x and y in dimensions n and n', respectively, vector (x, y) is the (n + n')-dimensional vector (x_1, ..., x_n, y_1, ..., y_{n'}). We symbolically denote a real matrix A with m rows and n columns by A in R^{m x n}. We treat n-dimensional vectors as matrices with n rows and 1 column.\nA polyhedron is an intersection of half-spaces; that is, for some dimensions n and m, a polyhedron is the set of vectors {x in R^n : Ax <= b} for some A in R^{m x n} and b in R^m. Similarly, assuming the same notation and defining A' and b' analogously, a partially open polyhedron is an intersection of half-spaces and open half-spaces, that is, a set of vectors {x in R^n : Ax <= b, A'x < b'}."
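To make the two notions concrete, here is a tiny illustrative membership test for a polyhedron {x : Ax <= b} and a partially open polyhedron {x : Ax <= b, A'x < b'}; this is our own sketch, not part of the paper:

```python
def in_polyhedron(A, b, x):
    """Membership in {x : Ax <= b} (an intersection of closed half-spaces)."""
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def in_partially_open_polyhedron(A, b, A2, b2, x):
    """Membership in {x : Ax <= b, A2 x < b2} (closed and open half-spaces)."""
    return in_polyhedron(A, b, x) and all(
        sum(a * xi for a, xi in zip(row, x)) < bi
        for row, bi in zip(A2, b2))
```

For example, with A and b describing the unit square, the point (1, 0.5) lies in the square but not in its intersection with the open half-space x_1 < 1.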
28
+ },
29
+ {
30
+ "section_id": "2.1",
31
+ "parent_section_id": "2",
32
+ "section_name": "Allocations, Envy-Freeness, and Pareto-Efficiency",
33
+ "text": "Consider a set of agents and a set of item types, where each item type comes with a multiplicity (the number of available copies). An allocation is an integral vector with one entry per agent\u2013item-type pair, describing for each agent how many items of each item type are allocated to her. Each agent has a utility function assigning an integer utility value to every item type (in fact, utility values may be rational numbers, in which case an equivalent problem instance with integral values can be obtained without loss of generality by multiplying all values by the least common multiple of the denominators). We assume the preferences of the agents to be additive, which means that the utility value of a bundle of items is the sum of the items\u2019 utility values. Thus, the satisfaction of an agent from an allocation is the sum, over all item types, of the number of copies she receives multiplied by her utility for that item type.\nIn the following two definitions we provide formal phrasings of envy-freeness and Pareto-efficiency, which play a central role in our study.\nAn allocation of the items to the agents is envy-free if there are no two agents i and j such that agent i values the bundle of agent j strictly more than her own bundle.\nAn allocation is Pareto-dominated if there exists another allocation (over the same sets of agents and items together with their multiplicities) under which every agent is at least as satisfied and at least one agent is strictly more satisfied. 
An allocation is Pareto-efficient if it is not Pareto-dominated.\nIn our work, we focus on the decision problem EEF\u2013Allocation, in which we ask whether, for given sets of agents and resources, an allocation that is simultaneously envy-free and Pareto-efficient exists. The name of the problem, standing for \u201cefficient envy-free\u201d allocation, might be misleading in light of the fact that in the literature \u201cefficiency\u201d has multiple embodiments (besides Pareto-efficiency, perhaps the most frequent ones are completeness and social welfare maximization). However, for clarity, we decided to keep the name as defined by Bouveret and Lang (2008) and consequently used in the follow-up works (Bliem et al., 2016; Bredereck et al., 2019)."
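As a concrete illustration of the two definitions, the following Python sketch (our own, with an assumed convention that alloc[i][t] is the number of copies of item type t given to agent i) checks envy-freeness of an allocation and Pareto-dominance between two allocations under additive utilities:

```python
def satisfaction(u, bundle):
    """Additive satisfaction: sum over item types of utility times copies."""
    return sum(ut * ct for ut, ct in zip(u, bundle))

def is_envy_free(alloc, utils):
    """No agent values another agent's bundle more than her own."""
    n = len(alloc)
    return all(satisfaction(utils[i], alloc[i]) >= satisfaction(utils[i], alloc[j])
               for i in range(n) for j in range(n))

def pareto_dominates(new, old, utils):
    """Every agent is at least as satisfied under `new`, one strictly more."""
    pairs = list(zip(utils, new, old))
    return (all(satisfaction(u, b) >= satisfaction(u, a) for u, b, a in pairs)
            and any(satisfaction(u, b) > satisfaction(u, a) for u, b, a in pairs))
```

With two agents and two single-copy item types valued (2, 1) and (1, 2), respectively, giving each agent her favorite item is envy-free and dominates the swapped allocation.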
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "Parameterized Complexity",
39
+ "text": "A parameterized (decision) problem\u2019s input consists of a decision problem instance I and a parameter value k; the task is then to decide whether (I, k) is a \u201cyes\u201d-instance. We say that a parameterized problem is fixed-parameter tractable with respect to k (belongs to the class FPT with respect to k) if there is an algorithm deciding it in f(k) * poly(s) time, where s is the size of the input and f is an arbitrary computable function of the parameter k. Intuitively, the exponential blow-up is then related only to the value of the parameter k, which allows for efficient computation when k is small.\nThe following proposition, describing a relation between the values of various functions, will come in handy later.\nFor every two computable functions f and g, there exists a computable function h such that for every two positive integers k and s we have f(k) * (log s)^{g(k)} <= h(k) * s."
40
+ },
41
+ {
42
+ "section_id": "2.3",
43
+ "parent_section_id": "2",
44
+ "section_name": "Parametric Integer Programming",
45
+ "text": "For a rational polyhedron Q, the integer projection of Q is the collection of all vectors b for which there exists an integral vector z such that (b, z) belongs to Q. Thus, formally, Parametric Integer Programming (PILP) is the following problem. Given a matrix A and a rational polyhedron Q, decide if for all vectors b in the integer projection of Q, the system of inequalities Ax <= b has an integral solution x. In other words, one has to decide the validity of the sentence\n(PILP)  for every b in the integer projection of Q there exists an integral x with Ax <= b.\nIntuitively, PILP consists of a collection of integer linear programs defined by A and right-hand side vectors b, where the latter come from the integer projection of Q. The question then is whether each of these integer linear programs has some feasible solution. The PILP problem is complete for the class \u03a0^p_2 of the polynomial hierarchy (Stockmeyer, 1976; Wrathall, 1976)."
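For intuition, the quantifier structure of (PILP) can be checked by brute force on tiny instances. The sketch below is our own simplification: the integer projection is replaced by an explicit predicate over integral right-hand sides, and both searches are restricted to finite boxes. It decides the sentence and returns a counterexample b when it fails:

```python
from itertools import product

def integral_points(box):
    """All integer points of the axis-aligned box [(lo, hi), ...] (inclusive)."""
    return product(*(range(lo, hi + 1) for lo, hi in box))

def holds(A, x, b):
    """Does the integral point x satisfy A x <= b?"""
    return all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def pilp_brute_force(A, in_projection, b_box, x_box):
    """Check: for all integral b (in b_box) with in_projection(b),
    some integral x (in x_box) satisfies A x <= b.
    Returns (True, None) if the sentence holds, else (False, bad_b)."""
    for b in integral_points(b_box):
        if in_projection(b) and not any(holds(A, x, b)
                                        for x in integral_points(x_box)):
            return False, b
    return True, None
```

With A encoding the interval -b_2 <= x <= b_1, the sentence holds whenever the projection forces b_1 + b_2 >= 0 (nonempty interval with integral endpoints), and fails with a certificate otherwise.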
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Preparation for Main Result",
51
+ "text": "We devote this section to describing important consequences of the work of Eisenbrand and Shmonin (2008, Theorems 4.1 and 4.2). Most importantly, their results allow for efficiently solving PILP subject to additional constraints. As it will turn out, we are able to formulate EEF\u2013Allocation in a way that respects these constraints. Yet, before we show the formulation in Section 4, we discuss the aforementioned consequences in detail and present them formally in Proposition 2.\nDespite the \u03a0^p_2-completeness of the PILP problem, Eisenbrand and Shmonin (2008, Theorems 4.1 and 4.2) gave a polynomial-time algorithm for PILP for a fixed number of variables and fixed dimension (their work extended the, to the best of our knowledge, pioneering works of Kannan (1990, 1992) on efficient algorithms for PILP). An analysis of their algorithm leads to the following Proposition 2; we discuss its details afterwards.\nThere is an algorithm deciding the sentence (PILP) in f(n) * \u03b1^{g(n)} * poly(s) time, where n is the number of variables, \u03b1 is the size (encoding length) of any column in A, s is the encoding length of the sentence and (the description of) the polyhedron Q, and f and g are computable functions. Moreover, if the sentence (PILP) is not valid, then a certificate b is provided (i.e., a right-hand side b from the integer projection of Q such that Ax <= b has no integral solution for such a b).\nProposition 2 essentially follows from an in-depth analysis of a known result (Eisenbrand and Shmonin, 2008, Theorem 4.2). A similar investigation has also been provided by Crampton et al. (2019). However, we decided to slightly adjust it to our needs and hence we present it in more detail. 
Since Proposition 2 plays an important role in our result, we believe that discussing its argument explicitly is important for the completeness of our paper.\nIn the algorithm backing Proposition 2, we first utilize the Fourier\u2013Motzkin elimination procedure to make sure that for every right-hand side b from the integer projection of Q the system Ax <= b has a fractional solution. If this is not the case, then a corresponding vector is reported, which certifies a right-hand side vector for which the PILP sentence has no solution. Running this procedure for all right-hand sides yielding the corresponding integer linear programs requires solving a bounded number of mixed integer linear programs in bounded dimension. This can be done efficiently using Lenstra\u2019s celebrated result (Lenstra, Jr., 1983) about solving integer linear programs in bounded dimension.\nSecond, we partition the polyhedron Q into partially open polyhedra. Due to a result by Eisenbrand and Shmonin (2008, Theorem 4.1), the number of these partially open polyhedra is bounded in terms of two helper constants: one derived from the constant in the flatness theorem (for which the current best value is due to Banaszczyk et al. (1999)) and one depending on the input matrix. Importantly, Eisenbrand and Shmonin (2008, Theorem 4.1) show that each cell of the partition is the integer projection of some partially open polyhedron.\nLastly, the result of Eisenbrand and Shmonin (2008, Theorem 4.1) gives, for each cell, a collection of specific transformations. 
The transformations are very specific in the sense that for each right-hand side b from a given cell there is an integral point in the polyhedron {x : Ax <= b} if and only if one of the transformations, applied to b, yields such a point. The negation of this condition can be verified using a mixed integer linear program for each transformation; such a program has a bounded number of integral variables. It holds that if the input sentence (PILP) is not valid, then one of the above mixed ILPs is feasible, thus, again, providing the claimed certificate.\nCarefully inspecting the two parts of the above-sketched algorithm reveals that it runs in the requested time."
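The first step of the sketched algorithm, testing fractional feasibility of Ax <= b, can itself be illustrated by naive Fourier\u2013Motzkin elimination. The following is a minimal exact-arithmetic sketch of that classical procedure (our own illustration, exponential in the worst case, and not the optimized routine used by Eisenbrand and Shmonin):

```python
from fractions import Fraction

def eliminate(rows, j):
    """One Fourier-Motzkin step: eliminate variable j from rows (a, b),
    each encoding the constraint a . x <= b."""
    pos = [r for r in rows if r[0][j] > 0]
    neg = [r for r in rows if r[0][j] < 0]
    kept = [r for r in rows if r[0][j] == 0]
    for ap, bp in pos:
        for an, bn in neg:
            # positive multipliers chosen so the j-th coefficients cancel
            lam, mu = -an[j], ap[j]
            kept.append(([lam * p + mu * q for p, q in zip(ap, an)],
                         lam * bp + mu * bn))
    return kept

def fractionally_feasible(A, b):
    """True iff {x : Ax <= b} is nonempty over the rationals."""
    rows = [([Fraction(c) for c in row], Fraction(bi)) for row, bi in zip(A, b)]
    for j in range(len(A[0])):
        rows = eliminate(rows, j)
    # all variables are eliminated; each remaining row reads "0 <= b_i"
    return all(bi >= 0 for _, bi in rows)
```

For example, the system x <= 1, y <= 1, x + y >= 3 is fractionally infeasible, while replacing 3 by 2 makes it feasible.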
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Finding EEF\u2013Allocations via PILP",
57
+ "text": "The interpretation of Theorem 4.2 of Eisenbrand and Shmonin (2008) presented in Section 3 contains an important bit. Specifically, we observed that it is possible to derive a certificate of infeasibility of a given PILP sentence. This inspired the following reasoning, which we employ to derive our result about finding envy-free and Pareto-efficient allocations. Instead of focusing directly on EEF\u2013Allocation, we decided to work with the complementary problem. This way, by obtaining the certificate of infeasibility for the complementary problem, we in fact get a (membership) certificate for the original problem. In more detail, we consider the problem of deciding whether \u201cevery envy-free allocation is Pareto-dominated.\u201d If such a sentence is invalid, then a certificate proving it is an envy-free allocation that cannot be Pareto-dominated. It is worth pointing out that, due to the certificate, we do not only answer the question posed by EEF\u2013Allocation but also find an envy-free and Pareto-efficient allocation, which makes our approach constructive.\nThe method described above leads us to the main contribution of our work, which strengthens Corollary 5 of Bredereck et al. 
(2019) about fixed-parameter tractability of EEF\u2013Allocation with respect to the combined parameter \u201cnumber of agents plus number of item types.\u201d Therein, the authors devise the negation of EEF\u2013Allocation in a spirit similar to ours (however, their approach is fundamentally different, as it is based on analyzing a collection of improving steps among which none can be applied to improve a given allocation), employing the big-M method to do so. We avoid this method, which (as used in the mentioned paper) forces a unary encoding of the input item multiplicities and utility values, arriving at our Theorem 1, which offers the same computational complexity guarantees but does not require the unary encoding of the discussed input elements.\nLet I be an instance of the EEF\u2013Allocation problem with maximum input utility value u, n agents, and m item types. Then, there is an algorithm that decides I in f(n + m) * poly(s + u) time, for some computable function f and s being the size of I.\nBefore we proceed with proving Theorem 1 in the following Section 4.1, we remark that our technique also applies to other variants of EEF\u2013Allocation where we replace envy-freeness or Pareto-efficiency with related concepts. We devote a separate section (Section 5) to a detailed discussion of these additional applications."
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Proving the result",
63
+ "text": "Employing Proposition 2, we now show how to efficiently solve the EEF\u2013Allocation problem for the (combined) parameter \u201cnumber of agents plus number of item types,\u201d obtaining a proof of Theorem 1. From now on, we fix a set of agents and a set of item types together with their multiplicities.\nAs already discussed, we show the FPT-membership of EEF\u2013Allocation for this parameter by constructing a PILP sentence deciding whether every envy-free allocation of a given collection of items is dominated by some other allocation. The high-level idea is as follows. We first construct the PILP sentence (which essentially corresponds to the matrix A in Formula (PILP)) assuming that we have a polyhedron Q that describes all envy-free allocations. Then we show how to construct the polyhedron Q such that it meets our assumptions. (In fact, the polyhedron also contains additional technical parts needed to represent that there is an allocation that dominates some allocation from the polyhedron.) Eventually, we use the results from Proposition 2.\nStarting our proof by assuming that we have the polyhedron Q and showing its construction later is due to the fact that the former develops our intuition of how the polyhedron should look. Before we go ahead with the proof, we recall that an allocation consists of one entry per agent\u2013item-type pair, with the meaning \u201cwe give this many items of this item type to this agent.\u201d\nLet us assume a polyhedron Q whose elements encode, among other coordinates, an envy-free allocation (the remaining coordinates are still to be defined in the next step of the proof, where we construct a proper Q). Our aim is to design a matrix A such that a vector satisfies the resulting system if and only if it is an allocation that dominates the encoded one. 
We first focus on constraints enforcing that the dominating vector is a proper allocation (not necessarily allocating all items to the agents; completeness will be implied later by the requirement of Pareto-efficiency). Condition (1) ensures that the allocation does not allocate \u201cmore items than available,\u201d while Condition (2) guarantees that each agent is allocated a non-negative number of items. It is now not hard to see that a vector satisfies Conditions (1) and (2) if and only if it is a valid allocation.\nThus, it remains to model that the new allocation Pareto-dominates the encoded one. One can do so with the following system of inequalities. Note that on the right-hand side we use the (entries of the) encoded allocation; we do so for brevity of our proof. In the final PILP sentence the right-hand side must be defined via the polyhedron Q, and we will indeed use the insights from the following inequalities when defining Q in the next step of our proof.\nThe system of inequalities above guarantees that the new allocation dominates the encoded one if and only if it satisfies Conditions (3) and (4). Note that Condition (3) ensures that the total utility of each agent in the new allocation is at least as large as in the encoded one. Furthermore, given the above, Condition (4) ensures that there is at least one agent whose utility is strictly greater in the new allocation.\nWe now aim at designing an appropriate polyhedron Q, the existence of which we (only) assumed in the first step. Given the above discussion and Conditions (1)\u2013(4), the claimed polyhedron Q lives in a dimension whose summands come directly from the numbers of inequalities in, respectively, Conditions (1)\u2013(4), together with the coordinates encoding the envy-free allocation itself. 
Let us now split the right-hand-side vector according to Conditions (1)\u2013(4) above; that is, its first part is the vector of right-hand sides coming from Condition (1), and so forth. Based on the first two subject conditions, the first part equals the vector of item multiplicities and the second part is the all-zero vector. Clearly, if we now use these parts as the right-hand sides of, respectively, Conditions (1) and (2), the meaning of Conditions (1) and (2) stays intact. More precisely, both conditions still encode the fact that we deal with a valid allocation.\nWe proceed with constructing the right-hand sides of Conditions (3) and (4). To achieve this, we first ensure that the polyhedron encodes an envy-free allocation and then derive the remaining right-hand sides from this analysis. The following conditions ensure that the encoded allocation is envy-free. Conditions (7) and (8) ensure that it is indeed an allocation; these expressions, and hence the argument, are analogous to those of Conditions (1) and (2). Further, Condition (9) ensures envy-freeness, since for every pair of agents the left-hand side is the total satisfaction of one agent (under the encoded allocation) and the right-hand side is the total value of the other agent\u2019s bundle viewed via the first agent\u2019s utility function (that is, the satisfaction of the first agent if she got the bundle of the other agent).\nAt the moment, intuitively, Conditions (5)\u2013(9) describe the \u201cpart\u201d of the polyhedron Q that defines the envy-free allocation. What remains is to define the remaining coordinates in a way that allows using them as the right-hand sides of Conditions (3) and (4). 
We can do so by binding these coordinates to the encoded allocation as follows, thus obtaining the final two expressions describing the polyhedron Q.\nObserve that the left-hand side of Condition (10) is exactly the right-hand side of Condition (3). Similarly, the right-hand side of Condition (11) contains exactly (up to an additive constant) the right-hand side of Condition (4). Consequently, we can replace the right-hand sides of Conditions (3) and (4) with the right-hand sides of Conditions (10) and (11) while keeping the meaning of the latter unchanged. Observing that in this last step we defined the whole right-hand-side vector in a way that allows using it in Conditions (1)\u2013(4), we arrive at the next lemma, which summarizes (and follows from) the above discussion.\nLet Q be the polyhedron defined by Conditions (5)\u2013(11). Then, a vector belongs to Q if and only if it encodes an envy-free allocation of the items together with the vector of right-hand sides of Conditions (1)\u2013(4).\nWe remark that the fact that Conditions (9) and (11) are presented in a way that the right-hand side is not a constant is not important in light of the definition of Q from Lemma 1. Clearly, to obtain a constant on the right-hand sides it is enough to subtract the right-hand side from both sides, starting from the expressions presented in Conditions (9) and (11).\nHaving described how to construct the parametric ILP representing EEF\u2013Allocation, we finish the proof of Theorem 1 by applying Proposition 2. 
More specifically, for a given instance I of the EEF\u2013Allocation problem, we construct the matrix A and the polyhedron Q as described earlier and directly build a parametric PILP instance out of them. Then we run the algorithm from Proposition 2 on this instance. If the algorithm returns \u201cyes,\u201d then for every envy-free allocation there exists one that dominates it, so the answer to the original instance I is \u201cno.\u201d In the opposite case, we know that I admits some Pareto-efficient envy-free allocation, so we output \u201cyes\u201d as an answer to I. Moreover, due to the fact that Proposition 2 guarantees returning a certificate, the \u201cno\u201d-certificate computed by the algorithm is in fact an envy-free Pareto-efficient allocation.\nIt remains to analyze the running time of the invocation of the algorithm from Proposition 2 on the constructed instance. In the presented model, described by Conditions (1)\u2013(11) and forming the instance, the number of integral variables is the number of agents in I times the number of item types. Hence, the value of the \u201cnumber of variables\u201d parameter from Proposition 2 is bounded by a function of the number of agents plus the number of item types. It remains to estimate the parameter \u03b1 thereof. Recall that \u03b1 is the maximum encoding length of a column in A, which is, in our case, the matrix of left-hand sides in Conditions (1)\u2013(4). The columns of the matrix are vectors whose length equals the number of constraints (inequalities) required to implement these conditions; hence, there are that many delimiter symbols in the encoding of a single column. Recall that each such column corresponds to a pair of a single agent and a single item type, and let us fix some such pair. So, in the corresponding column of A, there are two ones, one coming from Condition (1) and one from Condition (2). 
In addition to this, there are two numbers, both equal to the utility value that the fixed agent assigns to the fixed item type (coming from Conditions (3) and (4)). Since we assumed a binary encoding of the utilities, each of these numbers takes O(log u) bits, so we overall obtain an encoding length of a single column that, after dropping the asymptotically irrelevant terms, is bounded by a function of the number of agents plus the number of item types, times log u. Due to Proposition 1, we thus get that the \u03b1-dependent factor in the running time of Proposition 2 is bounded by a computable function of this combined parameter times a polynomial in the input size and the maximum utility u. Applying this bound for \u03b1, together with the bound on the number of variables shown earlier, proves that the algorithm from Proposition 2 runs in the running time required to show fixed-parameter tractability of EEF\u2013Allocation with respect to the parameter \u201cnumber of agents plus number of item types.\u201d"
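The proof's negation-based logic (\u201cevery envy-free allocation is Pareto-dominated\u201d) can be mimicked by exhaustive search on toy instances. The following self-contained Python sketch (ours, exponential, for intuition only, with allocations allowed to leave items unallocated) returns an envy-free Pareto-efficient allocation or None:

```python
from itertools import product

def bundles_per_type(copies, n_agents):
    """All ways to hand out at most `copies` copies of one type to n agents."""
    return [t for t in product(range(copies + 1), repeat=n_agents)
            if sum(t) <= copies]

def allocations(mults, n_agents):
    """All valid (not necessarily complete) allocations, agent-major."""
    for combo in product(*(bundles_per_type(m, n_agents) for m in mults)):
        yield tuple(tuple(combo[t][i] for t in range(len(mults)))
                    for i in range(n_agents))

def sat(u, bundle):
    return sum(ut * ct for ut, ct in zip(u, bundle))

def envy_free(a, utils):
    n = len(a)
    return all(sat(utils[i], a[i]) >= sat(utils[i], a[j])
               for i in range(n) for j in range(n))

def dominated(a, utils, mults):
    return any(
        all(sat(u, bb) >= sat(u, aa) for u, bb, aa in zip(utils, b, a)) and
        any(sat(u, bb) > sat(u, aa) for u, bb, aa in zip(utils, b, a))
        for b in allocations(mults, len(utils)))

def find_eef(utils, mults):
    """First envy-free allocation that no allocation Pareto-dominates."""
    for a in allocations(mults, len(utils)):
        if envy_free(a, utils) and not dominated(a, utils, mults):
            return a
    return None
```

For two agents with opposed valuations over two single-copy item types an EEF allocation exists, while for a single item desired equally by both agents every envy-free allocation (only the empty one) is dominated, so None is returned.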
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Generalizing Our Approach",
69
+ "text": "Envy-freeness is an appealing yet demanding concept. Consider a very simple example of two agents both desiring a single item. Already in this situation an allocation that allocates the item cannot be envy-free. Hence, there is no nontrivial envy-free allocation of the items (recall that the empty allocation is always envy-free).\nThe experimental results of Bredereck et al. (2021) give empirical evidence that the non-existence of envy-free and Pareto-efficient allocations poses a real threat to the applicability of these concepts in real-world instances. The authors show that there were no envy-free and Pareto-efficient allocations for 63% of the instances in their dataset from spliddit.org. The observed phenomenon clearly motivates the need for general approaches. In practice, in the case of a scenario with no envy-free and Pareto-efficient allocation, a reasonable algorithm should not only report the non-existence but also offer a possibly-best alternative allocation, which satisfies weaker desiderata. The current state of the art, in the form of both an extensive literature on envy-freeness relaxations (see our Related Work section for the references) and the general frameworks presented by Bredereck et al. (2021, 2019), strongly suggests that providing generalizable results is of high value.\nOur method meets this criterion and can be used with numerous other problem variants that aim at finding efficient fair allocations. Indeed, it turns out that our technique can be applied to the more general variant of EEF\u2013Allocation (Bredereck et al., 2019) in which Pareto-efficiency is replaced by some efficiency notion and envy-freeness is replaced by some fairness notion. Formally, the problem, as defined by Bredereck et al. 
(2019 ###reference_15###), is as follows.\nIn fact, our approach can be used to show fixed-parameter tractability\nof the above problem with respect to the parameterization by the number of agents\nplus the number of item types for various efficiency and fairness notions.\nBesides relaxed notions of Pareto-efficiency (e.g., where one only cares about\nbeing dominated by allocations that are to some extent similar to the to-be-dominated\none) or relaxed envy-freeness such as EF1 (Barman et al., 2018 ###reference_8###; Caragiannis et al., 2016 ###reference_19###; Lipton et al., 2004 ###reference_36###) or\nEFX (Caragiannis et al., 2016 ###reference_19###; Plaut and Roughgarden, 2018 ###reference_37###), our approach can also deal with generalizations of\nthe concepts of Pareto-optimality such as group\nPareto-efficiency (Aleksandrov and Walsh, 2018 ###reference_2###) or generalizations of envy-freeness such as graph\nenvy-freeness (Bredereck et al., 2022 ###reference_17###). Additionally, our method is adaptable to further,\nsomewhat related fairness concepts such as MaxiMinShare (Budish, 2011 ###reference_18###; Procaccia and Wang, 2014 ###reference_38###) or the\nbasic efficiency concept of completeness, which only requires that all resources\nare allocated.\nSummarizing, with our technique we can show that -Efficient -Allocation is fixed-parameter\ntractable for parameter even if item multiplicities and utilities are\nbinary encoded when\nis a combination of (graph/group) Pareto-efficiency or completeness, and\nis a combination of (graph/group) EF, (graph) EF1,\n(graph) EFX, MaxiMin, or MaxiMinShare.\nTo avoid repetitiveness, we refer to the work of\nBredereck et al. (2019 ###reference_15###) on how to model these notions within the\nILP framework."
70
+ },
71
+ {
72
+ "section_id": "6",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "We described a somewhat new usage of Parametric ILPs in fixed dimension in the\ndesign of parameterized algorithms, enabling to improve a\nprevious fixed-parameter tractability result.\nTo the best of our knowledge, we are the first to model (and solve) the\nnegation of a given instance to obtain a solution to the original in the\ncontext of parameterized complexity. Thus, we believe to have contributed to\nthe, recently gaining increased attention (see, for example, a survey by\nGaven\u010diak et al. Gaven\u010diak et al. (2022 ###reference_25###)), understanding of how the theory of integer\n(linear) programming impacts the theory of parameterized complexity. We hope\nour approach leads to further new results in parameterized algorithms, including\napplications beyond social choice.\nOur work also brings up new challenges and highlights the importance of some\nyet unexplored research directions, mostly in the area of empirical study of\nefficient and fair allocations of indivisible items.\nFirst of all, given a practically applicable implementation (Bredereck et al., 2021 ###reference_16###) of\nthe approach of Bredereck et al. Bredereck et al. (2019 ###reference_15###), it appears valuable to\npursue an empirical study of our approach as well. It is not uncommon that\nalgorithms with appealing (worst-case) computational complexity guarantees do\nnot perform that well when applied to real-life instances. Hence, designing an\nimplementation of our method and comparing it against the existing methods of\ncomputing efficient and fair allocations is a necessary step in judging the\nusability of our study in practice.\nPerforming computational experiments is a natural next step to gain additional\ninsights into the problem nature (like, a sharp phase transition in the\nexistence of efficient envy-free allocations reported by Dickerson et\nal. Dickerson et al. (2014 ###reference_21###)). 
Offering a new tool in the algorithmic toolbox for\nseeking fair allocations, we also highlight the need for further efforts\ntowards obtaining realistic data or, at least, designing diversified synthetic\nmodels for generating allocation instances. To date, to the best of our\nknowledge, except for the relatively small dataset of real-world data from the\nwebsite spliddit.org (Goldman and Procaccia, 2014 ###reference_27###) and two very simple synthetic models\nby Dickerson et al. (2014 ###reference_21###), such data is lacking. Our method might not\nonly turn out to be useful in spotting new phenomena in fair allocation\ninstances, but might also well complement other existing methods to form a\nrobust framework for finding fair and efficient allocations."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {},
80
+ "image_paths": {},
81
+ "validation": true,
82
+ "references": [
83
+ {
84
+ "1": {
85
+ "title": "Fair division via social comparison.",
86
+ "author": "Rediet Abebe, Jon Kleinberg, and David C. Parkes.",
87
+ "venue": "In Proceedings of the 16th Conference on Autonomous Agents and\nMultiAgent Systems (AAMAS \u201917), pages 281\u2013289, 2017.",
88
+ "url": null
89
+ }
90
+ },
91
+ {
92
+ "2": {
93
+ "title": "Group envy freeness and group pareto efficiency in fair division with\nindivisible items.",
94
+ "author": "Martin Aleksandrov and Toby Walsh.",
95
+ "venue": "In Proceedings of the 41st German Conference on Artificial\nIntelligence (KI \u201918), pages 57\u201372. Springer, 2018.",
96
+ "url": null
97
+ }
98
+ },
99
+ {
100
+ "3": {
101
+ "title": "Comparing approximate relaxations of envy-freeness.",
102
+ "author": "Georgios Amanatidis, Georgios Birmpas, and Vangelis Markakis.",
103
+ "venue": "In Proceedings of the 27th International Joint Conference on\nArtificial Intelligence (IJCAI \u201918), pages 42\u201348. AAAI Press, 2018.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "4": {
109
+ "title": "Knowledge, fairness, and social constraints.",
110
+ "author": "Haris Aziz, Sylvain Bouveret, Ioannis Caragiannis, Ira Giagkousi, and\nJ\u00e9r\u00f4me Lang.",
111
+ "venue": "In Proceedings of the 32nd AAAI Conference on Artificial\nIntelligence (AAAI \u201918), pages 4638\u20134645, 2018a.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "5": {
117
+ "title": "Fair allocation of combinations of indivisible goods and chores.",
118
+ "author": "Haris Aziz, Ioannis Caragiannis, and Ayumi Igarashi.",
119
+ "venue": "CoRR, abs/1807.10684, 2018b.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "6": {
125
+ "title": "Fair allocation of indivisible goods and chores.",
126
+ "author": "Haris Aziz, Ioannis Caragiannis, Ayumi Igarashi, and Toby Walsh.",
127
+ "venue": "In Proceedings of the 28th International Joint Conference on\nArtificial Intelligence (IJCAI \u201919), pages 53\u201359, 2019.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "7": {
133
+ "title": "The flatness theorem for nonsymmetric convex bodies via the local\ntheory of Banach spaces.",
134
+ "author": "Wojciech Banaszczyk, Alexander E. Litvak, Alain Pajor, and Stanislaw J. Szarek.",
135
+ "venue": "Mathematics of Operations Research, 24(3):728\u2013750, 1999.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "8": {
141
+ "title": "Finding fair and efficient allocations.",
142
+ "author": "Siddharth Barman, Sanath Kumar Krishnamurthy, and Rohit Vaish.",
143
+ "venue": "In Proceedings of the 19th ACM Conference on Economics and\nComputation (EC \u201918), pages 557\u2013574. ACM, 2018.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "9": {
149
+ "title": "Networked fairness in cake cutting.",
150
+ "author": "Xiaohui Bei, Youming Qiao, and Shengyu Zhang.",
151
+ "venue": "In Proceedings of the 26th International Joint Conference on\nArtificial Intelligence (IJCAI \u201917), pages 3632\u20133638. AAAI Press, 2017.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "10": {
157
+ "title": "Complexity of efficient and envy-free resource allocation: Few\nagents, resources, or utility levels.",
158
+ "author": "Berhard Bliem, Robert Bredereck, and Rolf Niedermeier.",
159
+ "venue": "In Proceedings of the 25th International Joint Conference on\nArtificial Intelligence (IJCAI \u201916), pages 102\u2013108. AAAI Press, 2016.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "11": {
165
+ "title": "Efficiency and envy-freeness in fair division of indivisible goods:\nLogical representation and complexity.",
166
+ "author": "Sylvain Bouveret and J\u00e9r\u00f4me Lang.",
167
+ "venue": "Journal of Artificial Intelligence Research, 32(1):525\u2013564, 2008.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "12": {
173
+ "title": "Fair allocation of indivisible goods.",
174
+ "author": "Sylvain Bouveret, Yann Chevaleyre, and Nicolas Maudet.",
175
+ "venue": "In F. Brandt, V. Conitzer, U. Endriss, J. Lang, and A. D. Procaccia,\neditors, Handbook of Computational Social Choice, chapter 12.\nCambridge University Press, 2016a.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "13": {
181
+ "title": "Handbook of Computational Social Choice.",
182
+ "author": "Sylvain Bouveret, Yann Chevaleyre, and Nicolas Maudet.",
183
+ "venue": "Cambridge University Press, 2016b.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "14": {
189
+ "title": "Fair Division: From Cake-Cutting to Dispute Resolution.",
190
+ "author": "Steven J. Brams and Alan D. Taylor.",
191
+ "venue": "Cambrige University Press, 1996.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "15": {
197
+ "title": "High-multiplicity fair allocation: Lenstra empowered by -fold\ninteger programming.",
198
+ "author": "Robert Bredereck, Andrzej Kaczmarczyk, Dusan Knop, and Rolf Niedermeier.",
199
+ "venue": "In Proceedings of the 2019 ACM Conference on Economics and\nComputation (EC \u201919), pages 505\u2013523. ACM, 2019.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "16": {
205
+ "title": "High-multiplicity fair allocation made more practical.",
206
+ "author": "Robert Bredereck, Aleksander Figiel, Andrzej Kaczmarczyk, Du\u0161an Knop, and\nRolf Niedermeier.",
207
+ "venue": "In Proceedings of the 20th International Conference on\nAutonomous Agents and MultiAgent Systems (AAMAS \u201921), pages 260\u2013268,\n2021.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "17": {
213
+ "title": "Envy-free allocations respecting social networks.",
214
+ "author": "Robert Bredereck, Andrzej Kaczmarczyk, and Rolf Niedermeier.",
215
+ "venue": "Artificial Intelligence, 305:103664, 2022.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "18": {
221
+ "title": "The combinatorial assignment problem: Approximate competitive\nequilibrium from equal incomes.",
222
+ "author": "Eric Budish.",
223
+ "venue": "Journal of Political Economy, 119(6):1061\u20131103, 2011.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "19": {
229
+ "title": "The unreasonable fairness of maximum nash welfare.",
230
+ "author": "Ioannis Caragiannis, David Kurokawa, Herv\u00e9 Moulin, Ariel D. Procaccia,\nNisarg Shah, and Junxing Wang.",
231
+ "venue": "In Proceedings of the 17th ACM Conference on Economics and\nComputation (EC \u201916), pages 305\u2013322. ACM, 2016.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "20": {
237
+ "title": "Parameterized resiliency problems.",
238
+ "author": "Jason Crampton, Gregory Z. Gutin, Martin Kouteck\u00fd, and R\u00e9mi\nWatrigant.",
239
+ "venue": "Theoretical Computer Science, 795:478\u2013491, 2019.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "21": {
245
+ "title": "The computational rise and fall of fairness.",
246
+ "author": "John P. Dickerson, Jonathan R. Goldman, Jeremy Karp, Ariel D. Procaccia, and\nTuomas Sandholm.",
247
+ "venue": "In Proceedings of the 28th AAAI Conference on Artificial\nIntelligence (AAAI \u201918), pages 1405\u20131411, 2014.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "22": {
253
+ "title": "Parameterized complexity of envy-free resource allocation in social\nnetworks.",
254
+ "author": "Eduard Eiben, Robert Ganian, Thekla Hamm, and Sebastian Ordyniak.",
255
+ "venue": "Artificial Intelligence, 315:103826, 2023.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "23": {
261
+ "title": "Parametric integer programming in fixed dimension.",
262
+ "author": "Friedrich Eisenbrand and Gennady Shmonin.",
263
+ "venue": "Mathematics of Operations Research, 33(4):839\u2013850, 2008.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "24": {
269
+ "title": "Unified fair allocation of goods and chores via copies.",
270
+ "author": "Yotam Gafni, Xin Huang, Ron Lavi, and Inbal Talgam-Cohen.",
271
+ "venue": "CoRR, abs/2109.08671, 2021.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "25": {
277
+ "title": "Integer programming in parameterized complexity: Five miniatures.",
278
+ "author": "Tom\u00e1\u0161 Gaven\u010diak, Martin Kouteck\u00fd, and Du\u0161an Knop.",
279
+ "venue": "Discrete Optimization, 44:100596, 2022.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "26": {
285
+ "title": "Dominant resource fairness: Fair allocation of multiple resource\ntypes.",
286
+ "author": "Ali Ghodsi, Matei Zaharia, Benjamin Hindman, Andy Konwinski, Scott Shenker, and\nIon Stoica.",
287
+ "venue": "In 8th USENIX Symposium on Networked Systems Design and\nImplementation (NSDI \u201911), 2011.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "27": {
293
+ "title": "Spliddit: Unleashing fair division algorithms.",
294
+ "author": "Jonathan R Goldman and Ariel D Procaccia.",
295
+ "venue": "SIGecom Exchanges, 13(2):41\u201346, 2014.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "28": {
301
+ "title": "Fair allocation of a multiset of indivisible items.",
302
+ "author": "Pranay Gorantla, Kunal Marwaha, and Santhoshini Velusamy.",
303
+ "venue": "In Proceedings of the 2023 Annual ACM-SIAM Symposium on\nDiscrete Algorithms (SODA \u201923), pages 304\u2013331, 2023.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "29": {
309
+ "title": "Parameterized complexity of directed Steiner tree on sparse graphs.",
310
+ "author": "Mark Jones, Daniel Lokshtanov, M. S. Ramanujan, Saket Saurabh, and Ondrej\nSuch\u00fd.",
311
+ "venue": "SIAM J. Discrete Math., 31(2):1294\u20131327, 2017.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "30": {
317
+ "title": "Test sets for integer programs, sentences.",
318
+ "author": "Ravi Kannan.",
319
+ "venue": "In William J. Cook and Paul D. Seymour, editors, Proceedings of\na DIMACS Workshop on Polyhedral Combinatorics, volume 1 of DIMACS\nSeries in Discrete Mathematics and Theoretical Computer Science, pages\n39\u201348. DIMACS/AMS, 1990.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "31": {
325
+ "title": "Lattice translates of a polytope and the Frobenius problem.",
326
+ "author": "Ravi Kannan.",
327
+ "venue": "Combinatorica, 12(2):161\u2013177, 1992.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "32": {
333
+ "title": "On the complexity of efficiency and envy-freeness in fair division of\nindivisible goods with additive preferences.",
334
+ "author": "Bart de Keijzer, Sylvain Bouveret, Tomas Klos, and Yingqian\nZhang.",
335
+ "venue": "In Proceedings of the 1st International Conference on\nAlgorithmic Decision Theory (ADT \u201909), pages 98\u2013110. Springer, 2009.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "33": {
341
+ "title": "A unifying framework for manipulation problems.",
342
+ "author": "Du\u0161an Knop, Martin Kouteck\u00fd, and Matthias Mnich.",
343
+ "venue": "In Proceedings of the 17th International Conference on\nAutonomous Agents and Multiagent Systems (AAMAS \u201918), pages 256\u2013264.\nIFAAMAS, 2018.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "34": {
349
+ "title": "Parametric integer programming algorithm for bilevel mixed integer\nprograms.",
350
+ "author": "Matthias K\u00f6ppe, Maurice Queyranne, and Christopher Thomas Ryan.",
351
+ "venue": "Journal of optimization theory and applications, 146(1):137\u2013150, 2010.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "35": {
357
+ "title": "Integer programming with a fixed number of variables.",
358
+ "author": "Hendrik W. Lenstra, Jr.",
359
+ "venue": "Mathematics of Operations Research, 8(4):538\u2013548, 1983.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "36": {
365
+ "title": "On approximately fair allocations of indivisible goods.",
366
+ "author": "Richard J. Lipton, Evangelos Markakis, Elchanan Mossel, and Amin Saberi.",
367
+ "venue": "In Proceedings of the 5th ACM Conference on Electronic Commerce\n(EC \u201904), pages 125\u2013131. ACM, 2004.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "37": {
373
+ "title": "Almost envy-freeness with general valuations.",
374
+ "author": "Benjamin Plaut and Tim Roughgarden.",
375
+ "venue": "In Proceedings of the 29th Annual ACM-SIAM Symposium on\nDiscrete Algorithms (SODA \u201918), pages 2584\u20132603. SIAM, 2018.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "38": {
381
+ "title": "Fair enough: Guaranteeing approximate maximin shares.",
382
+ "author": "Ariel D. Procaccia and Junxing Wang.",
383
+ "venue": "In Proceedings of the 15th ACM Conference on Economics and\nComputation (EC \u201914), pages 675\u2013692. ACM, 2014.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "39": {
389
+ "title": "Economics and Computation: An Introduction to Algorithmic Game\nTheory, Computational Social Choice, and Fair Division.",
390
+ "author": "J\u00f6rg Rothe.",
391
+ "venue": "Springer Texts in Business and Economics. Springer Berlin Heidelberg,\n2015.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "40": {
397
+ "title": "The polynomial-time hierarchy.",
398
+ "author": "Larry J. Stockmeyer.",
399
+ "venue": "Theoretical Computer Science, 3(1):1\u201322,\n1976.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "41": {
405
+ "title": "Challenges in resource and cost allocation.",
406
+ "author": "Toby Walsh.",
407
+ "venue": "In Proceedings of the 29th AAAI Conference on Artificial\nIntelligence (AAAI \u201915), pages 4073\u20134077. AAAI Press, 2015.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "42": {
413
+ "title": "Fair division: The computer scientist\u2019s perspective.",
414
+ "author": "Toby Walsh.",
415
+ "venue": "In Proceedings of the 29th International Joint Conference on\nArtificial Intelligence (IJCAI \u201920), pages 4966\u20134972, 2021.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "43": {
421
+ "title": "Complete sets and the polynomial-time hierarchy.",
422
+ "author": "Celia Wrathall.",
423
+ "venue": "Theoretical Computer Science, 3(1):23\u201333,\n1976.",
424
+ "url": null
425
+ }
426
+ }
427
+ ],
428
+ "url": "http://arxiv.org/html/2005.04907v2"
429
+ }
20240119/2012.03344v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2103.10702v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2106.01061v2.json ADDED
@@ -0,0 +1,290 @@
1
+ {
2
+ "title": "Rethinking Cross-modal Interaction from a Top-down Perspective for Referring Video Object Segmentation",
3
+ "abstract": "Referring video object segmentation (RVOS) aims to segment video objects with the guidance of natural language reference.\nPrevious methods typically tackle RVOS through directly grounding linguistic reference over the image lattice. Such bottom-up strategy fails to explore object-level cues, easily leading to inferior results. In this work, we instead put forward a two-stage, top-down RVOS solution. First, an exhaustive set of object tracklets is constructed by propagating object masks detected from several sampled frames to the entire video. Second, a Transformer-based tracklet-language grounding module is proposed, which models instance-level visual relations and cross-modal interactions simultaneously and efficiently. Our model ranks place on CVPR2021 Referring Youtube-VOS challenge.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Referring video object segmentation (RVOS) targets at segmenting video objects referred by given language expressions. RVOS is a challenging task as it requires not only comprehensive understanding the semantics within individual modalities, but also pixel-level cross-modal reasoning. Existing RVOS models [18 ###reference_18###, 9 ###reference_9###] typically work in a bottom-up fashion (Fig. 1 ###reference_###-(a)), i.e\\onedot, perform grid-level alignment between visual and linguistic modalities. Thus they lack explicit knowledge about visual objects, leading to unreliable cross-modal reasoning and inaccurate segmentation.\nIn this work, we rethink RVOS from a top-down perspective (Fig. 1 ###reference_###-(b)), by comprehensively exploring cross-object relations and conducting object-level cross-modal grounding. With a similar spirit of [13 ###reference_13###], our approach mainly consists of two stages: object tracklet generation and tracklet-language grounding. In the first stage, we generate a set of high-quality object tracklets from input videos. Then, in the second stage, we ground the reference over the detected tracklets and select the best-matched one as the final output.\n###figure_1### More specifically, in the object tracklet generation stage, a lot of object candidate masks are first generated by applying instance segmentation over several sampled key frames. We further propagate the detected candidate masks to the whole video sequence, and generate an exhaustive set of object tracklets. After that, a tracklet-NMS mechanism is designed to remove redundant tracklets and select the high-quality ones as candidates for language-guided segmentation. In the tracklet-language grounding stage, we build a Transformer-based grounding module. 
Benefiting from the powerful self-attention computation within the Transformer blocks, the within-modal relations among objects and the inter-modal interactions between tracklets and language can be comprehensively and efficiently modeled.\nOur model ranked place in the Large-scale Video Object Segmentation Challenge (CVPR2021): Referring Video Object Segmentation track [1 ###reference_1###], with an overall & of and on test-dev and test-challenge, respectively.\n###figure_2###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "RVOS Datasets. The task of RVOS is proposed in [9 ###reference_9###], which mainly focuses on actor behavior understanding within an limited predefined action categories. Recently, Seo et al\\onedotintroduced a new large-scale dataset [18 ###reference_18###], i.e\\onedot, Refer-Youtube-VOS (RVOS-D), derived from Youtube-VOS [25 ###reference_25###]. RVOS-D provides more complex language descriptions from a broader object categories (\u200b 90 categories) within relatively longer video sequences (\u200b 6\u200b s). Thus it poses more challenges for RVOS methods. Referring Youtube-VOS challenge [1 ###reference_1###] is built upon RVOS-D [18 ###reference_18###].\nRVOS Methods. Current studies in the filed of RVOS are made mainly around the theme of building effective multi-modal feature representations. Existing methods typically make use of dynamic convolutions [22 ###reference_22###, 4 ###reference_4###] to adaptively generate convolutional filters that better respond to the referent, or leverage cross-modal attention [23 ###reference_23###, 17 ###reference_17###] to compute the correlations among input visual and linguistic embeddings. However, these methods only approach RVOS on the grid level, ignoring the importance of object-level visual cues."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": "Overview.\nGiven a video sequence with frames, and corresponding referring expression with words, a set of referred object segmentation masks is requested for RVOS.\nAs illustrated in Fig. 2 ###reference_###, we design a two-down approach with two stages for object tracklet generation and tracklet-language grounding, respectively. In the first stage, we construct a comprehensive set of object candidate tracklets from and propose a sequence-nms module for reducing redundant candidates. In the second stage, the referred target is selected from the tracklets under the guidance of .\nObject Tracklet Construction.\nWe first uniformly sample frames from . For each key frame , a set of mask candidates, i.e\\onedot, , are generated through an image instance segmentation model :\nwhere refers to the number of the candidates in , and, for each mask candidate , we have .\nThen, a video mask propagation model is applied for each , to forward and backward propagate the mask to the entire video and get corresponding object tracklet :\nThus each tracklet is a sequence of masks, i.e\\onedot, , corresponds to the object candidate in key frame . And we define as the set of all the tracklets generated from .\nBased on above strategy, we generate a lot of tracklets, i.e\\onedot, , from the key frames. This ensures that we can generate a complete object candidate set that covers object instances in as many as possible, without the disturbance from object occlusion, and move-in/-out. We denote the set of all generated candidate tracklets as .\nTracklet-NMS. As we sample several key frames, there exist a lot of similar tracklets that correspond to the same object instance. This would bring an extra challenge to the following tracklet-language grounding process. Inspired by [14 ###reference_14###], we introduce a tracklet-level NMS process that eliminates redundant candidates in efficiently. 
We first define tracklet-IoU that measures the similarity between two tracklets, i.e\\onedot, :\nwhere , and . Each tracklet is also assigned with a score, defined as the product of the confidence score of (obtained from ) and the mask propagation probability (obtained from ), averaged over all the frames. Based on the tracklet score and tracklet-IoU, traditional NMS algorithms [5 ###reference_5###, 6 ###reference_6###] is conducted. As at most 5 objects might be requested in our concerned experimental setting, we keep at most tracklets with highest scores for each video after NMS. We refer the final tracklet set as .\n###figure_3### Tracklet-Language Grounding. We adopt per-frame reference grounding to determine the referred object from . Each frame and linguistic input are first fed into single-modality encoders, i.e\\onedot, separately for within-modal feature extraction:\nwhere and are extracted visual and linguistic features, respectively.\nFor each tracklet , we extracted its corresponding feature at frame through:\nwhere refers to the candidate mask of tracklet in frame , and denotes hadamard product. Note that the rescaling process for feature dimension alignment is omitted.\nGiven concatenated embeddings for the candidate tracklets at frame , and the linguistic representation , we propose a Transformer-based [21 ###reference_21###] grounding module for tracklet-language grounding:\nwhere is the grounding score of tracklet in .\nHere and are learnable modal embeddings.\nDue to the self-attention mechanism in the Transformer, the interactions among different object tracklets and between different modalities are comprehensively captured, leading to promising grounding performance.\nThe final grounding score for each is given as: . The the segments are , where ."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Experiment",
27
+ "text": "Challenge Dataset and Evaluation Metrics. We test our model on Referring Youtube-VOS challenge [1 ###reference_1###], which is built upon the recently released RVOS-D dataset [18 ###reference_18###].\nThe challenge dataset has videos with about 15K language reference sentences in total; videos are released with annotations for training. The rest videos are split into / for constructing test-dev/test-challenge sets, whose annotations are preserved for benchmarking.\nWe use standard metrics, i.e\\onedot, region similarity and contour accuracy , for evaluation.\nDetailed Network Architecture.\nWe employ two instance segmentation models, i.e\\onedot, HTC [2 ###reference_2###] and CondInst [20 ###reference_20###], for implementing in Eq. 1 ###reference_###. At each key frame , the mask candidates are a combination of all proposals generated from the two models. The mask propagation model (Eq. 2 ###reference_###) is implemented as CFBI+ [26 ###reference_26###].\nWe uniformly sample key frames for each video sequence.\nFor tracklet-language grounding, we implement the visual encoder as ResNet-101 [7 ###reference_7###] initialized from ImageNet-pretrained weights and linguistic encoder as a standard model. The grounding module is a 4-layer Transformer [21 ###reference_21###] with 12 heads in each layer, followed by a 2-layer MLP and a softmax layer for probability prediction.\nInput sentences are split by the WordPiece tokenizer [24 ###reference_24###] as in [3 ###reference_3###].\nBoth the hidden dimensions of Transformer and feature channel of within-modal representations are set to , i.e\\onedot, .\nTraining Detail.\nFor , HTC is trained on COCO [15 ###reference_15###] without finetuning. 
CondInst is pretrained on COCO and finetuned on the training split of RVOS-D with the standard training setting in [20 ###reference_20###] for about steps.\nThe propagation module , i.e., CFBI+, is pretrained on COCO and finetuned over the training split of the VOS [25 ###reference_25###] track, following the standard training setting of the semi-supervised VOS task (see [26 ###reference_26###] for more details).\nFor the tracklet-language grounding module (Eqs. 4 ###reference_###- 6 ###reference_###), we pretrain it using the data from RefCOCO [27 ###reference_27###], RefCOCOg [27 ###reference_27###] and RefCOCO+ [16 ###reference_16###] for about 20 epochs.\nWe use Adam [10 ###reference_10###] as the optimizer with a learning rate of 4e-5, a batch size of and a weight decay of 1e-4.\nThe module is further finetuned on the training split of RVOS-D for five epochs with a learning rate of 1e-5.\nModel Ensemble.\nModel ensemble is also used in our final submission. We build five models with different implementations of the visual encoder , i.e., ResNet101 [7 ###reference_7###], HRNet [19 ###reference_19###] and ResNeSt101 [28 ###reference_28###], and the linguistic encoder , i.e., [8 ###reference_8###] and [11 ###reference_11###]. We use different hyperparameter settings to further promote model performance and simply average the grounding probabilities from different models for the final prediction.\nResults on RVOS Challenge.\nTable 1 ###reference_### and Table 2 ###reference_### show the ranking results of the top teams on the test-dev and test-challenge sets, respectively.\nOur approach achieves the best performance on both sets across all the metrics and outperforms the next-best team by a large margin, i.e., 11.3% in terms of overall & on test-challenge.\nFig. 
3 ###reference_### shows qualitative results of our proposed model on test-challenge.\nWith the effective top-down model design, our approach generates robust predictions even in challenging scenes, e.g., semantically similar instances, inconspicuous referents, complex linguistic descriptions, etc.\nAblation Study.\nWe start our ablation study with a simple image-level grounding pipeline ( row in Table 3 ###reference_###), which only contains an image instance segmentation module (Eq. 1 ###reference_###) for image-level object candidate generation and implements the grounding module (Eq. 6 ###reference_###) as a na\u00efve feature similarity operation without tracklet construction (Eq. 2 ###reference_###).\nThen we progressively add the essential modules (- rows in Table 3 ###reference_###).\nWith the full exploration of intra- and inter-modal interactions, consistent performance improvements can be achieved."
28
+ }
29
+ ],
30
+ "appendix": [],
31
+ "tables": {
32
+ "1": {
33
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.4\" style=\"width:212.5pt;height:110.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(2.7pt,-1.4pt) scale(1.02608111153178,1.02608111153178) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\" style=\"background-color:#E6E6E6;\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.4.4.4.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.0pt;background:black;display:inline-block;\">\u00a0</span><span class=\"ltx_text\" id=\"S4.T1.4.4.4.5.1\" style=\"font-size:90%;background-color:#E6E6E6;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.4.5.1.1\">Team</span></span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.2.2.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.2.2.2.2.1\" style=\"font-size:90%;background-color:#E6E6E6;\">&amp;</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.3.3.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.4.4.4.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.5.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr ltx_border_tt\" id=\"S4.T1.4.4.5.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.1.1\" style=\"font-size:90%;\">leonnnop (Ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.4.5.1.2\" 
style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.2.1\" style=\"font-size:90%;\">61.4</span><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1.2.2\" style=\"font-size:90%;\"> </span><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1.2.3\" style=\"font-size:90%;color:#5DAE56;\">(<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.2.3.1\">+6.6</span>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.4.5.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.3.1\" style=\"font-size:90%;\">60.0</span><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1.3.2\" style=\"font-size:90%;\"> </span><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1.3.3\" style=\"font-size:90%;color:#5DAE56;\">(<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.3.3.1\">+6.3</span>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.4.5.1.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.4.1\" style=\"font-size:90%;\">62.7</span><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1.4.2\" style=\"font-size:90%;\"> </span><span class=\"ltx_text\" id=\"S4.T1.4.4.5.1.4.3\" style=\"font-size:90%;color:#5DAE56;\">(<span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.4.5.1.4.3.1\">+6.7</span>)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.6.2\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S4.T1.4.4.6.2.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.6.2.1.1\" style=\"font-size:90%;\">nowherespyfly</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.6.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.6.2.2.1\" style=\"font-size:90%;\">54.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T1.4.4.6.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.6.2.3.1\" style=\"font-size:90%;\">53.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.4.6.2.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.6.2.4.1\" style=\"font-size:90%;\">56.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.7.3\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr\" id=\"S4.T1.4.4.7.3.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.7.3.1.1\" style=\"font-size:90%;\">seonguk</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.7.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.7.3.2.1\" style=\"font-size:90%;\">48.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.7.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.7.3.3.1\" style=\"font-size:90%;\">47.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.7.3.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.7.3.4.1\" style=\"font-size:90%;\">50.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.8.4\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr\" id=\"S4.T1.4.4.8.4.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.8.4.1.1\" style=\"font-size:90%;\">wangluting</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.8.4.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.8.4.2.1\" style=\"font-size:90%;\">48.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.8.4.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.8.4.3.1\" style=\"font-size:90%;\">47.1</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.4.8.4.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.8.4.4.1\" style=\"font-size:90%;\">49.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.9.5\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_b ltx_border_rr\" id=\"S4.T1.4.4.9.5.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.9.5.1.1\" style=\"font-size:90%;\">Merci1</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.4.4.9.5.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.9.5.2.1\" style=\"font-size:90%;\">44.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.4.4.9.5.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.9.5.3.1\" style=\"font-size:90%;\">43.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.4.4.9.5.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.4.9.5.4.1\" style=\"font-size:90%;\">45.9</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.10.1\">Benchmarking results</span> on the <span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T1.11.2\">test-dev</span> set of Referring Youtube-VOS challenge.\n</figcaption>\n</figure>",
34
+ "capture": "Table 1: Benchmarking results on the test-dev set of Referring Youtube-VOS challenge.\n"
35
+ },
36
+ "2": {
37
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.4\" style=\"width:212.5pt;height:104pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-4.0pt,2.0pt) scale(0.963281096026718,0.963281096026718) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4\" style=\"background-color:#E6E6E6;\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T2.4.4.4.5\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.0pt;background:black;display:inline-block;\">\u00a0</span><span class=\"ltx_text\" id=\"S4.T2.4.4.4.5.1\" style=\"font-size:90%;background-color:#E6E6E6;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.4.5.1.1\">Team</span></span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.2.2.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.2.2.2.2.1\" style=\"font-size:90%;background-color:#E6E6E6;\">&amp;</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.3.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.4.4.4.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.5.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr ltx_border_tt\" id=\"S4.T2.4.4.5.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.1.1\" style=\"font-size:90%;\">leonnnop (Ours)</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.5.1.2\" 
style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.2.1\" style=\"font-size:90%;\">60.7</span><span class=\"ltx_text\" id=\"S4.T2.4.4.5.1.2.2\" style=\"font-size:90%;\"> </span><span class=\"ltx_text\" id=\"S4.T2.4.4.5.1.2.3\" style=\"font-size:90%;color:#5DAE56;\">(<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.2.3.1\">+11.3</span>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.5.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.3.1\" style=\"font-size:90%;\">59.4</span><span class=\"ltx_text\" id=\"S4.T2.4.4.5.1.3.2\" style=\"font-size:90%;\"> </span><span class=\"ltx_text\" id=\"S4.T2.4.4.5.1.3.3\" style=\"font-size:90%;color:#5DAE56;\">(<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.3.3.1\">+11.0</span>)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.4.4.5.1.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.4.1\" style=\"font-size:90%;\">62.0</span><span class=\"ltx_text\" id=\"S4.T2.4.4.5.1.4.2\" style=\"font-size:90%;\"> </span><span class=\"ltx_text\" id=\"S4.T2.4.4.5.1.4.3\" style=\"font-size:90%;color:#5DAE56;\">(<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.4.4.5.1.4.3.1\">+11.7</span>)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.6.2\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr ltx_border_t\" id=\"S4.T2.4.4.6.2.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.6.2.1.1\" style=\"font-size:90%;\">nowherespyfly</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.6.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.6.2.2.1\" style=\"font-size:90%;\">49.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S4.T2.4.4.6.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.6.2.3.1\" style=\"font-size:90%;\">48.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.4.4.6.2.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.6.2.4.1\" style=\"font-size:90%;\">50.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.7.3\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr\" id=\"S4.T2.4.4.7.3.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.7.3.1.1\" style=\"font-size:90%;\">feng915912132</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.7.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.7.3.2.1\" style=\"font-size:90%;\">48.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.7.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.7.3.3.1\" style=\"font-size:90%;\">47.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.7.3.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.7.3.4.1\" style=\"font-size:90%;\">49.0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.8.4\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_rr\" id=\"S4.T2.4.4.8.4.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.8.4.1.1\" style=\"font-size:90%;\">Merci1</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.8.4.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.8.4.2.1\" style=\"font-size:90%;\">41.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.8.4.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.8.4.3.1\" style=\"font-size:90%;\">40.6</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.8.4.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.8.4.4.1\" style=\"font-size:90%;\">41.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.9.5\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_b ltx_border_rr\" id=\"S4.T2.4.4.9.5.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.9.5.1.1\" style=\"font-size:90%;\">wangluting</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.4.9.5.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.9.5.2.1\" style=\"font-size:90%;\">40.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.4.9.5.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.9.5.3.1\" style=\"font-size:90%;\">39.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.4.4.9.5.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.4.4.9.5.4.1\" style=\"font-size:90%;\">41.8</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.1\">Benchmarking results</span> on <span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T2.11.2\">test-challenge</span> set of Referring Youtube-VOS challenge.\n</figcaption>\n</figure>",
38
+ "capture": "Table 2: Benchmarking results on test-challenge set of Referring Youtube-VOS challenge.\n"
39
+ },
40
+ "3": {
41
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.6\" style=\"width:212.5pt;height:79.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-13.8pt,5.2pt) scale(0.885077977389284,0.885077977389284) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.6.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.4\" style=\"background-color:#E6E6E6;\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.4.4.5\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.0pt;background:black;display:inline-block;\">\u00a0</span><span class=\"ltx_text\" id=\"S4.T3.4.4.4.5.1\" style=\"font-size:90%;background-color:#E6E6E6;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.4.5.1.1\">Model</span></span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.2.2.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.2.2.2.2.1\" style=\"font-size:90%;background-color:#E6E6E6;\">&amp;</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.3.3.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.4.4.4.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_rr ltx_border_tt\" id=\"S4.T3.6.6.7.1.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.7.1.1.1\" style=\"font-size:90%;\">Image-level Baseline</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.6.6.7.1.2\" 
style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.7.1.2.1\" style=\"font-size:90%;\">40.9</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.6.6.7.1.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.7.1.3.1\" style=\"font-size:90%;\">40.5</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.6.6.7.1.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.7.1.4.1\" style=\"font-size:90%;\">41.3</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.5.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr ltx_border_t\" id=\"S4.T3.5.5.5.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.5.5.5.1.1\" style=\"font-size:90%;\">+Video-level Propagation </span><span class=\"ltx_text\" id=\"S4.T3.5.5.5.1.2\" style=\"font-size:90%;\"> (Eq.\u00a0</span><a class=\"ltx_ref\" href=\"#S3.E2\" style=\"font-size:90%;\" title=\"2 \u2023 3 Methodology \u2023 Rethinking Cross-modal Interaction from a Top-down Perspective for Referring Video Object Segmentation\"><span class=\"ltx_text ltx_ref_tag\">2</span></a><span class=\"ltx_text\" id=\"S4.T3.5.5.5.1.3\" style=\"font-size:90%;\">)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.5.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.2.1\" style=\"font-size:90%;\">49.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.5.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.3.1\" style=\"font-size:90%;\">47.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.5.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.5.5.5.4.1\" 
style=\"font-size:90%;\">50.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_rr\" id=\"S4.T3.6.6.6.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.6.6.6.1.1\" style=\"font-size:90%;\">+Transformer-based Grounding </span><span class=\"ltx_text\" id=\"S4.T3.6.6.6.1.2\" style=\"font-size:90%;\"> (Eq.\u00a0</span><a class=\"ltx_ref\" href=\"#S3.E6\" style=\"font-size:90%;\" title=\"6 \u2023 3 Methodology \u2023 Rethinking Cross-modal Interaction from a Top-down Perspective for Referring Video Object Segmentation\"><span class=\"ltx_text ltx_ref_tag\">6</span></a><span class=\"ltx_text\" id=\"S4.T3.6.6.6.1.3\" style=\"font-size:90%;\">)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.6.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.2.1\" style=\"font-size:90%;\">56.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.6.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.3.1\" style=\"font-size:90%;\">54.8</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.6.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.6.4.1\" style=\"font-size:90%;\">58.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6.8.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_rr\" id=\"S4.T3.6.6.8.1.1\" style=\"padding-left:4.0pt;padding-right:4.0pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.6.6.8.1.1.1\" style=\"font-size:90%;\">+Sequence-NMS (Eq.\u00a0</span><a class=\"ltx_ref\" href=\"#S3.E3\" style=\"font-size:90%;\" title=\"3 \u2023 3 Methodology \u2023 Rethinking Cross-modal Interaction from a Top-down Perspective for Referring Video Object Segmentation\"><span class=\"ltx_text ltx_ref_tag\">3</span></a><span class=\"ltx_text\" id=\"S4.T3.6.6.8.1.1.2\" style=\"font-size:90%;\">) &amp; Model 
Ensemble</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.6.6.8.1.2\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.8.1.2.1\" style=\"font-size:90%;\">61.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.6.6.8.1.3\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.8.1.3.1\" style=\"font-size:90%;\">60.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.6.6.8.1.4\" style=\"padding-left:4.0pt;padding-right:4.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.6.6.8.1.4.1\" style=\"font-size:90%;\">62.7</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.1\">Ablation study</span> of essential components on <span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T3.13.2\">test-dev</span>.\n</figcaption>\n</figure>",
42
+ "capture": "Table 3: Ablation study of essential components on test-dev.\n"
43
+ }
44
+ },
45
+ "image_paths": {
46
+ "1": {
47
+ "figure_path": "2106.01061v2_figure_1.png",
48
+ "caption": "Figure 1: An illustration of our motivation.\nPrevious bottom-up methods (a) perform cross-modal interaction at the grid level, and fail to capture crucial object-level relations as the top-down approach (b) does.",
49
+ "url": "http://arxiv.org/html/2106.01061v2/x1.png"
50
+ },
51
+ "2": {
52
+ "figure_path": "2106.01061v2_figure_2.png",
53
+ "caption": "Figure 2: Pipeline of our proposed method, which contains two major stages, i.e., object tracklet generation (left column) and tracklet-language grounding (right column).",
54
+ "url": "http://arxiv.org/html/2106.01061v2/x2.png"
55
+ },
56
+ "3": {
57
+ "figure_path": "2106.01061v2_figure_3.png",
58
+ "caption": "Figure 3: Representative visual results on RVOS-D test-challenge set. Each referent and the corresponding textual description are highlighted in the same color.",
59
+ "url": "http://arxiv.org/html/2106.01061v2/x3.png"
60
+ }
61
+ },
62
+ "validation": true,
63
+ "references": [
64
+ {
65
+ "1": {
66
+ "title": "The 3rd large-scale video object segmentation challenge. https://youtube-vos.org/challenge/2021/.",
67
+ "author": null,
68
+ "venue": null,
69
+ "url": null
70
+ }
71
+ },
72
+ {
73
+ "2": {
74
+ "title": "Hybrid task cascade for instance segmentation.",
75
+ "author": "Kai Chen, Jiangmiao Pang, Jiaqi Wang, Yu Xiong, Xiaoxiao Li, Shuyang Sun,\nWansen Feng, Ziwei Liu, Jianping Shi, Wanli Ouyang, et al.",
76
+ "venue": "In CVPR, 2019.",
77
+ "url": null
78
+ }
79
+ },
80
+ {
81
+ "3": {
82
+ "title": "BERT: Pre-training of deep bidirectional transformers for language\nunderstanding.",
83
+ "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
84
+ "venue": "In NAACL-HLT, 2018.",
85
+ "url": null
86
+ }
87
+ },
88
+ {
89
+ "4": {
90
+ "title": "Actor and action video segmentation from a sentence.",
91
+ "author": "Kirill Gavrilyuk, Amir Ghodrati, Zhenyang Li, and Cees GM Snoek.",
92
+ "venue": "In CVPR, 2018.",
93
+ "url": null
94
+ }
95
+ },
96
+ {
97
+ "5": {
98
+ "title": "Fast R-CNN.",
99
+ "author": "Ross Girshick.",
100
+ "venue": "In ICCV, 2015.",
101
+ "url": null
102
+ }
103
+ },
104
+ {
105
+ "6": {
106
+ "title": "Rich feature hierarchies for accurate object detection and semantic\nsegmentation.",
107
+ "author": "Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik.",
108
+ "venue": "In CVPR, 2014.",
109
+ "url": null
110
+ }
111
+ },
112
+ {
113
+ "7": {
114
+ "title": "Deep residual learning for image recognition.",
115
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.",
116
+ "venue": "In CVPR, 2016.",
117
+ "url": null
118
+ }
119
+ },
120
+ {
121
+ "8": {
122
+ "title": "DeBERTa: Decoding-enhanced BERT with disentangled attention.",
123
+ "author": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen.",
124
+ "venue": "In ICLR, 2021.",
125
+ "url": null
126
+ }
127
+ },
128
+ {
129
+ "9": {
130
+ "title": "Video object segmentation with language referring expressions.",
131
+ "author": "Anna Khoreva, Anna Rohrbach, and Bernt Schiele.",
132
+ "venue": "In ACCV, 2018.",
133
+ "url": null
134
+ }
135
+ },
136
+ {
137
+ "10": {
138
+ "title": "Adam: A method for stochastic optimization.",
139
+ "author": "Diederik P Kingma and Jimmy Ba.",
140
+ "venue": "In ICLR, 2015.",
141
+ "url": null
142
+ }
143
+ },
144
+ {
145
+ "11": {
146
+ "title": "BART: Denoising sequence-to-sequence pre-training for natural\nlanguage generation, translation, and comprehension.",
147
+ "author": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed,\nOmer Levy, Ves Stoyanov, and Luke Zettlemoyer.",
148
+ "venue": "In ACL, 2020.",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "12": {
154
+ "title": "Local-global context aware transformer for language-guided video\nsegmentation.",
155
+ "author": "Chen Liang, Wenguan Wang, Tianfei Zhou, Jiaxu Miao, Yawei Luo, and Yi Yang.",
156
+ "venue": "IEEE TPAMI, 2023.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "13": {
162
+ "title": "ClawCraneNet: Leveraging object-level relation for text-based video\nsegmentation.",
163
+ "author": "Chen Liang, Yu Wu, Yawei Luo, and Yi Yang.",
164
+ "venue": "arXiv preprint arXiv:2103.10702, 2021.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "14": {
170
+ "title": "Video instance segmentation with a propose-reduce paradigm.",
171
+ "author": "Huaijia Lin, Ruizheng Wu, Shu Liu, Jiangbo Lu, and Jiaya Jia.",
172
+ "venue": "arXiv preprint arXiv:2103.13746, 2021.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "15": {
178
+ "title": "Microsoft COCO: Common objects in context.",
179
+ "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva\nRamanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.",
180
+ "venue": "In ECCV, 2014.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "16": {
186
+ "title": "Generation and comprehension of unambiguous object descriptions.",
187
+ "author": "Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and\nKevin Murphy.",
188
+ "venue": "In CVPR, 2016.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "17": {
194
+ "title": "Polar relative positional encoding for video-language segmentation.",
195
+ "author": "Ke Ning, Lingxi Xie, Fei Wu, and Qi Tian.",
196
+ "venue": "In IJCAI, 2020.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "18": {
202
+ "title": "URVOS: Unified referring video object segmentation network with a\nlarge-scale benchmark.",
203
+ "author": "Seonguk Seo, Joon-Young Lee, and Bohyung Han.",
204
+ "venue": "In ECCV, 2020.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "19": {
210
+ "title": "Deep high-resolution representation learning for human pose\nestimation.",
211
+ "author": "Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang.",
212
+ "venue": "In CVPR, 2019.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "20": {
218
+ "title": "Conditional convolutions for instance segmentation.",
219
+ "author": "Zhi Tian, Chunhua Shen, and Hao Chen.",
220
+ "venue": "In ECCV, 2020.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "21": {
226
+ "title": "Attention is all you need.",
227
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N Gomez, Lukasz Kaiser, and Illia Polosukhin.",
228
+ "venue": "In NeurIPS, 2017.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "22": {
234
+ "title": "Context modulated dynamic networks for actor and action video\nsegmentation with language queries.",
235
+ "author": "Hao Wang, Cheng Deng, Fan Ma, and Yi Yang.",
236
+ "venue": "In AAAI, 2020.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "23": {
242
+ "title": "Asymmetric cross-guided attention network for actor and action video\nsegmentation from natural language query.",
243
+ "author": "Hao Wang, Cheng Deng, Junchi Yan, and Dacheng Tao.",
244
+ "venue": "In ICCV, 2019.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "24": {
250
+ "title": "Google\u2019s neural machine translation system: Bridging the gap between\nhuman and machine translation.",
251
+ "author": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang\nMacherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al.",
252
+ "venue": "arXiv preprint arXiv:1609.08144, 2016.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "25": {
258
+ "title": "YouTube-VOS: Sequence-to-sequence video object segmentation.",
259
+ "author": "Ning Xu, Linjie Yang, Yuchen Fan, Jianchao Yang, Dingcheng Yue, Yuchen Liang,\nBrian Price, Scott Cohen, and Thomas Huang.",
260
+ "venue": "In ECCV, 2018.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "26": {
266
+ "title": "Collaborative video object segmentation by foreground-background\nintegration.",
267
+ "author": "Zongxin Yang, Yunchao Wei, and Yi Yang.",
268
+ "venue": "In ECCV, 2020.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "27": {
274
+ "title": "Modeling context in referring expressions.",
275
+ "author": "Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg.",
276
+ "venue": "In ECCV, 2016.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "28": {
282
+ "title": "ResNeSt: Split-attention networks.",
283
+ "author": "Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Haibin Lin, Zhi Zhang, Yue\nSun, Tong He, Jonas Mueller, R Manmatha, et al.",
284
+ "venue": "arXiv preprint arXiv:2004.08955, 2020.",
285
+ "url": null
286
+ }
287
+ }
288
+ ],
289
+ "url": "http://arxiv.org/html/2106.01061v2"
290
+ }
20240119/2110.14014v6.json ADDED
20240119/2111.13926v5.json ADDED
20240119/2201.05158v3.json ADDED
@@ -0,0 +1,210 @@
1
+ {
2
+ "title": "Towards Quantum Graph Neural Networks: An Ego-Graph Learning Approach",
3
+ "abstract": "Quantum machine learning is a fast-emerging field that aims to tackle machine learning using quantum algorithms and quantum computing. Due to the lack of physical qubits and an effective means to map real-world data from Euclidean space to Hilbert space, most of these methods focus on quantum analogies or process simulations rather than devising concrete architectures based on qubits. In this paper, we propose a novel hybrid quantum-classical algorithm for graph-structured data, which we refer to as the Ego-graph based Quantum Graph Neural Network (egoQGNN). egoQGNN implements the GNN theoretical framework using the tensor product and unity matrix representation, which greatly reduces the number of model parameters required. When controlled by a classical computer, egoQGNN can accommodate arbitrarily sized graphs by processing ego-graphs from the input graph using a modestly-sized quantum device. The architecture is based on a novel mapping from real-world data to Hilbert space. This mapping maintains the distance relations present in the data and reduces information loss. Experimental results show that the proposed method outperforms competitive state-of-the-art models with only 1.68% parameters compared to those models.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Quantum machine learning encapsulates a diverse variety of algorithms ranging from classical shallow learning techniques such as the Quantum Support Vector Machine (QSVM) [1 ###reference_1###, 2 ###reference_2###] and the quantum decision tree classifier [3 ###reference_3###] to the more recent quantum neural networks e.g. the Quantum Convolutional Neural Network (QCNN) [4 ###reference_4###], the Quantum Generative Adversarial Network (QGAN) [5 ###reference_5###] and the Quantum Graph Neural Network [6 ###reference_6###].\nDistinct from the data processed by existing machine learning and deep learning techniques, the data used in quantum machine learning reside in high-dimensional Hilbert spaces represented in the form of quantum states. Maria et al. [7 ###reference_7###] point out that quantum machine learning algorithms overcome the problems associated with working in reduced dimensional space by using classical machine learning. Computation in high-dimensional Hilbert space can bring performance improvement. Seeking quantum counterparts of the Support Vector Machine (SVM) [8 ###reference_8###], Havlicek et al. [1 ###reference_1###] propose a quantum binary classification algorithm similar to SVM (QSVM). A quantum circuit is designed to map data from Euclidean space to Hilbert space. However, the input dimension cannot exceed three. Schuld et al. [2 ###reference_2###] propose a quantum-classical kernel method. A quantum computer estimates the inner products of the data, while a classical computer calculates the estimation result and trains the algorithm. The Quantum Convolutional Neural Network (QCNN) [9 ###reference_9###] simulates the structure of the Convolutional Neural Network. For input sizes of qubits, the QCNN has only variational parameters. The Quantum Generative Adversarial Network (QGAN) of Hu et al. [5 ###reference_5###] has the potential for exponential acceleration relative to its classical counterpart. Hu et al. 
[5 ###reference_5###] demonstrate that after multiple rounds of training, the generator of QGAN can generate the corresponding quantum states with 98.9% fidelity. The generator is suitable for use on a medium-sized quantum computer in a noisy environment.\nRecent work has aimed to combine quantum machine learning with graph representations [6 ###reference_6###, 10 ###reference_10###, 11 ###reference_11###, 12 ###reference_12###, 13 ###reference_13###]. Early work [6 ###reference_6###, 10 ###reference_10###, 11 ###reference_11###] constructed quantum-circuit-based models and showed these could successfully handle graph data. However, the input to these methods is the entire graph, and the size of existing quantum devices is insufficient to handle this situation. The above models can thus only be applied to small-scale or synthetic datasets. More recently, motivated by the Graph Neural Tangent Kernel (GNTK) [14 ###reference_14###], Tang and Yan propose a novel quantum kernel for graph classification, referred to as the Graph Quantum Neural Tangent Kernel (GraphQNTK). This is equivalent to an infinite-width GNN with attention.\nHowever, the above-mentioned methods still suffer from two main limitations, namely (i) they lack a theoretical proof of being able to achieve graph isomorphism classification, which we believe is a fundamental capability for expressive graph representation learning (in fact, we verify the discriminative ability of our model in our experiments), and (ii) there is no existing method for quantum graph learning that maps Euclidean data into a quantum Hilbert space (which we aim to address in this paper).\nGraph-structured data or graph data are non-Euclidean and consist of nodes and edges. 
Graph data are widely used in knowledge representation [15 ###reference_15###], social system analysis [16 ###reference_16###], [17 ###reference_17###], modelling higher-order interactions in physical systems [18 ###reference_18###], [19 ###reference_19###], and combinatorial optimization [20 ###reference_20###]. Recently, deep learning has achieved impressive results on a variety of tasks involving Euclidean data, and some studies have commenced generalizing deep learning to the graph domain.\nGraph Neural Networks (GNNs) [21 ###reference_21###] are recently proposed neural network structures for the processing of graph-structured data. The main idea underpinning GNNs is the neighborhood aggregation strategy. The strategy updates the representation of a node by recursively aggregating the representations of its neighbors. Xu et al. [22 ###reference_22###] prove that a GNN which satisfies certain conditions is as powerful as the Weisfeiler-Lehman (WL) test [23 ###reference_23###] and can effectively distinguish the isomorphisms of graphs. Mathematically, they prove GNNs can achieve the same effect as the WL test when three critical functions employed by the GNN are injective. Different realizations of the GNN include but are not limited to Relational Graph Convolutional Networks (R-GCN) [24 ###reference_24###],\nedge Graph Neural Networks (edGNN) [25 ###reference_25###], Random Walk Graph Neural Networks (RW-GNN) [26 ###reference_26###] and Factorizable Graph Convolutional Networks (Factor GCN) [27 ###reference_27###].\nRecently, some studies have paid attention to introducing an ego-graph into the GNN to alleviate the limitations of GNNs [28 ###reference_28###] or to provide insights into their performance [29 ###reference_29###, 30 ###reference_30###]. An ego-graph consists of a central node and all of its connected neighbors. To address the scalability issue while applying an attention mechanism in GNNs, Zhao et al. 
[29 ###reference_29###] adopt a transformer model for ego-graphs rather than for the entire graph. Moreover, Zhu et al. [30 ###reference_30###] use ego-graph information maximization to both analyze and provide theoretical guarantees on GNN transferability.\nMotivated by the above methods, this paper proposes a hybrid classical-quantum machine learning approach to Graph Neural Network embodiment. Compared to existing quantum algorithms for graph-structured data, our method is scale-free and able to utilize the features of nodes. Additionally, we ground the GNN framework in the physical elements of quantum computing, i.e. qubits.\nOur main contributions are summarized as follows:\nA novel quantum-classical hybrid machine learning algorithm for graph-structured data is proposed, namely the Ego-graph based Quantum Graph Neural Network (egoQGNN). It utilizes the tensor product and unitary matrix representations to implement the theoretical framework of the GNN. Due to the fact that a unitary matrix only requires the specification of a rotation angle, i.e. the variational parameter, the egoQGNN achieves a similar performance but with only 1.68% of the parameters when compared with the GNN model with the fewest parameters, DGCNN [31 ###reference_31###].\nWe design an ego-graph decompositional processing strategy to decompose a large graph into small ego-graphs which can be handled by existing small-sized quantum devices. By exploiting this strategy, the egoQGNN can handle larger-sized graph-structured data with a fixed-sized quantum device, via the efficient use of qubits. In the current situation where the number of physical qubits is limited, this is an important feature of our method.\nA trainable method for mapping data from a Euclidean space to a quantum Hilbert space is proposed. This both maintains distance relations and reduces information loss during the mapping.\nThe remainder of this paper is organized as follows. 
Section II reviews the existing GNNs and quantum machine learning algorithms. Section III introduces the fundamental concepts of quantum machine learning and GNNs. Section IV introduces the trainable mapping method, the theoretical framework, and the resulting structure of the egoQGNN. Section V presents our experimental results obtained using the egoQGNN. Finally, Section VI concludes the paper and offers directions for future investigation."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "In this section, we briefly review two related topics, namely 1) graph neural networks, especially those for graph level embedding; and 2) quantum machine learning."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Graph Neural Networks",
21
+ "text": "The goal of graph neural networks is to map both node features and structural arrangement information to vectorized representations that can be used to embed graphs into lower-dimensional spaces or manifolds. GNNs leverage node features and edge features to learn the representation of each node within a graph. By embedding the nodes of a graph and combining all the node embeddings within a graph, GNNs obtain the required graph representation.\nSince the seminal work in [32 ###reference_32###], a diverse set of GNN models have been proposed.\nFor example, it has been proven [22 ###reference_22###] that a GNN that satisfies certain restrictive conditions can be as powerful as the Weisfeiler-Lehman (WL) test [23 ###reference_23###] and can effectively distinguish the isomorphisms of graphs. Mathematically it has been proved that GNNs can achieve the same effect as the WL test when three critical functions used by the GNN are injective. This analysis has been further extended to directed graphs by aggregating nodes and edges at the same time [25 ###reference_25###]. Spectrally-based GNNs [33 ###reference_33###, 34 ###reference_34###, 35 ###reference_35###] define spectral filter operations based on the Laplacian matrix of a graph. This is done by defining a series of filter coefficients based on the Laplacian eigenvalues and eigenvectors, and are thus computationally expensive. Fast and Deep Graph Neural Networks (FDGNN) [36 ###reference_36###] make it possible to combine the advantages of the deep architectural construction of GNNs with the extreme efficiency of randomized neural networks, and in particular RC, methodologies. 
Random Walk Graph Neural Network (RWGNN) [26 ###reference_26###] generates graph representations by comparing a set of trainable hidden graphs against input graphs using a variant of the P-step random walk kernel.\nGNNs have achieved state-of-the-art results on different graph learning tasks, such as graph classification, link prediction and semi-supervised node classification. However, GNNs embed nodes into Euclidean space, which can cause significant distortion of the graph structure. To overcome this problem, several GNN models based on non-Euclidean geometry have been proposed [37 ###reference_37###] [38 ###reference_38###] [39 ###reference_39###] [40 ###reference_40###]. These methods preserve the hierarchical structure of the graph and have improved performance compared to Euclidean GNNs. One common feature shared by these methods and our model is the computation of graph representations in a non-Euclidean space."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Quantum machine learning",
27
+ "text": "Recently, there has been increasing interest in generalizing quantum computation to the machine learning domain. Several studies [8 ###reference_8###, 2 ###reference_2###, 1 ###reference_1###, 9 ###reference_9###, 5 ###reference_5###] have been proven to be effective for classification. Unlike most classical machine learning methods, which process data in a Euclidean space, quantum machine learning methods map data to quantum states residing in a high dimensional complex-valued Hilbert space. Due to quantum entanglement, the dimensionality of the Hilbert space increases exponentially with the number of qubits. For example, the dimensionality of an qubit Hilbert space is . This means that quantum states in this Hilbert space are dimensional complex vectors. In Section III, we will introduce the fundamentals of quantum machine learning in more detail. Some observations that can be drawn from these methods are (i) the inputs of these methods are quantum states defined in terms of qubits; (ii) the methods utilize quantum gates with learnable parameters to train the model and a classical computer is responsible for training, measuring and controlling quantum devices; (iii) the methods classify data by measuring the quantum state rather than applying softmax or similar functions. More details given in Section IV.\nFor graph structured data, some studies have attempted to utilize quantum machine learning to capture structural information and obtain graph or node representations in a quantum Hilbert space. Verdon et al. [6 ###reference_6###] were the first to propose a quantum computing based graph classification model, the so-called Quantum Graph Neural Network (QGNN). This captures graph structure using a quadratic Hamiltonian and utilizing quantum circuits to extract relevant graph structural information. With the aim of overcoming the 1-WL limitation of existing GNNs, P\u00e9ter et al. 
introduce the Equivalent Quantum Graph Circuit (EQGC) [11 ###reference_11###]. This captures the permutation-invariant topologies of the input graphs. However, due to the number of required qubits scaling linearly with the number of nodes, EQGC can only handle small-scale synthetic datasets. The Graph Quantum Neural Tangent Kernel (GraphQNTK) is the only known quantum algorithm that can handle realistically sized graph data. Similar to the Graph Neural Tangent Kernel (GNTK) [14 ###reference_14###], GraphQNTK is essentially a graph kernel and is equivalent to an infinite-width GNN.\nThe above quantum models have been successfully applied to graph-related tasks. However, most accept only low-dimensional data or pure quantum states as input. Two important problems limit the application of these methods to real-world large-scale data. Firstly, there is a lack of general methods to map data from a Euclidean space into a quantum Hilbert space. Secondly, due to the limited number of available qubits, real-world high-dimensional graph data cannot be loaded into these quantum circuits. Moreover, all of the above methods lack solid theoretical guarantees for the graph isomorphism classification problem. We also note that\nthere are related quantum computing studies on both node classification [41 ###reference_41###, 42 ###reference_42###, 12 ###reference_12###] and link prediction [43 ###reference_43###]. None of this work is grounded in a theoretical analysis or proof of graph isomorphism.\nIn this paper, we propose a novel hybrid quantum-classical machine learning method referred to as the Ego-graph based Quantum Graph Neural Network (egoQGNN), which is a quantum-computing-based model for graph classification. By introducing the ego-network decompositional processing strategy (Sec. 4.4 ###reference_###), the egoQGNN can be applied to real-world data and is not limited by the number of available qubits. 
The egoQGNN also implements the theoretical framework of GNNs using operations available in a quantum computer and utilizes a unitary weight matrix together with a Hilbert-space tensor product. As a result, the egoQGNN empowers a quantum computer with the ability to classify graphs.\nAs shown in Table I ###reference_###, as a hybrid quantum-classical algorithm, our model uses a hierarchical architecture to capture information within the k-hop neighbors of a node. Moreover, the proposed model can be applied to graph isomorphism tests, since we provide a theoretical analysis and proof of this capability in Sec. 4.1 ###reference_###. With the proposed decomposition strategy, our model can handle real-world datasets."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Preliminaries",
33
+ "text": "In this section, we briefly review the underlying concepts\nwhich will be exploited in the development of a novel quantum neural network reported in this paper. Firstly, we introduce the fundamental concepts of quantum machine learning. Secondly, we review the fundamental theory of the GNN. In order to make our description clearer, we list the notation used in this paper in Table II ###reference_###.\nSymbol\nDefinition\n\nnode \u2019s representation of -th iteration\n\na set of nodes adjacent to node .\n\nactivation function\n\nweight matrix of -th iteration\n\nQuantum state and its complex conjugate transpose\n\narbitrary unitary matrix and corresponding adjoint\n\nHadamard gate\n\nX gate, Y gate, Z gate\n\nrotation operators about the Pauli-X, Pauli-Y, Pauli-Z axes\n\ntensor product"
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Fundamentals of Quantum Machine Learning",
39
+ "text": "In order to introduce our new quantum neural network more clearly, we will introduce some of the fundamental elements of quantum machine learning including the quantum bit, quantum gate and quantum circuit.\nQuantum bit: Quantum bits or qubits for short, are analogous to the classical binary bit which are the fundamental elements of classical computation. The qubit is a mathematical construct with certain specific formal properties. Unlike the classical bit with binary states 0 or 1, the quantum state of a qubit is a 2-dimensional complex-valued vector formed by linear combinations of the basis vectors and :\nSuppose the quantum state of a qubit is :\nThe numbers and are complex numbers that satisfy the condition:\nDue to the above condition in Eq(3 ###reference_###), the quantum state of a qubit is a point on the unit 3-dimensional sphere, referred to as the Bloch sphere. The Bloch sphere resides in a Hilbert space, and there are three basis vectors in this space, namely Pauli-X, Pauli-Y, and Pauli-Z. Any quantum state on the Bloch sphere makes angles with the three bases, as shown in Fig. 3 ###reference_###.\nThere are an infinite number of points on the Bloch sphere. In other words, a qubit can represent an infinite number of quantum states, while a classical bit can only represent two states, i.e. 0 or 1. So, a qubit has a greater representational capacity than a classical bit. If a physical system, e.g. a quantum computer, has more than one qubit, for instance, qubits, then the quantum state of this physical system is the tensor product of all its constituent qubits or quantum states:\nThe tensor product of qubits is a -dimensional complex vector. So, the dimensionality of the quantum state of a system increases exponentially with the number of qubits that constitute it.\nQuantum gate: \nA classical computer has logic gates that change the states of bits, and these include the AND gate, OR gate, and NOT gate. 
For quantum computers, quantum gates are used to change the quantum states of the qubits. After a quantum gate acts on the quantum state represented by a qubit, this quantum state is transformed, i.e. rotated, to give another vector. Quantum gates can be divided into single qubit gates and multiple qubit gates.\nSingle qubit gates are applied to a single qubit state. Multiple qubit gates are applied to several qubit systems to transform their quantum states. Multiple qubit gates can be obtained from the multiplication or tensor product of single qubit gates. The transformation of states performed by a gate can be represented using a unitary matrix, since the result of either a tensor product or a multiplication of quantum gates is itself a unitary transformation.\nQuantum circuits: \nA classical computer can be represented in terms of circuits containing connections and logic gates. Similarly, a quantum computer can be represented using quantum circuits containing connections and quantum gates. For a quantum computer, each connection represents a qubit, which is used to carry information, and the quantum gates perform manipulations to transform the quantum state."
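The qubit, gate and tensor-product facts described in this section can be sketched numerically. The following is an illustrative NumPy simulation added by the editor, not part of the paper's implementation:

```python
import numpy as np

# A qubit state is a normalized 2-dimensional complex vector a|0> + b|1>.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Single-qubit gates are 2x2 unitary matrices.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate
X = np.array([[0, 1], [1, 0]], dtype=complex)                # Pauli-X (NOT) gate

# Applying a gate rotates the state; normalization |a|^2 + |b|^2 = 1 is preserved.
psi = H @ ket0            # equal superposition (|0> + |1>) / sqrt(2)
flipped = X @ ket0        # X maps |0> to |1>

# The joint state of several qubits is the tensor product of the parts,
# so n qubits live in a 2**n-dimensional Hilbert space.
two_qubits = np.kron(psi, ket1)   # 4-dimensional complex vector
```

Note how `np.kron` realizes the tensor product: two 2-dimensional states combine into one 4-dimensional state, matching the 2^n growth described above.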
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Graph Neural Networks on Graph Classification",
45
+ "text": "Graph Neural Networks (GNNs) are effective machine learning tools for structured data and rely on a neighborhood aggregation strategy. Specifically, for each node in a graph, the GNN recursively aggregates the current representation with those for its neighbors, thus giving a new representation for use at the next iteration. For graph classification, Xu et al. [22 ###reference_22###] demonstrate that three crucial functions of GNNs, namely AGGREGATE, COMBINE and READOUT, must each be injective multi-set functions, for example, a sum. Formally, the -th interaction of a GNN can be expressed as:\nwhere is the feature vector of node at the -th iteration, and is the set of nodes adjacent to .\nHere, the READOUT function aggregates the representations of all nodes from the final iteration to obtain the complete graph representation :\nThe GNN model presented by Xu et al. [22 ###reference_22###] can be represented as follows:\nwhere and are trainable weight matrices at the -th iteration for node and its neighbors respectively."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Methodology",
51
+ "text": "In this paper, we develop a novel quantum-classical hybrid machine learning algorithm for graph-structured data. The idea is to develop a quantum GNN by designing a quantum circuit and replacing the Euclidean weight matrices of the GNN with unitary matrices, i.e. quantum gates. In this way, we incorporate theoretical ideas from quantum machine learning into deep learning in the graph domain.\nIn this section, we first introduce the theoretical framework underpinning our model and provide mathematical proof. Then, we introduce the quantum circuit to implement the egoQGNN, the trainable mapping method and the sequential decompositional processing strategy. Finally, we introduce the structure and explain how the egoQGNN can be used to classify quantum states. We then summarize the processes underpinning the egoQGNN."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Theoretical Framework of egoQGNN",
57
+ "text": "To distinguish graph isomorphisms by quantum machine learning, we follow the study of Xu et al. [22 ###reference_22###], and introduce the theoretical framework for GNNs using quantum machine learning. What makes the GNN so powerful is its injective aggregation strategy, which maps different nodes to different representational units. As a result, different graphs have different representations.\nThe GNN model [22 ###reference_22###] can be represented by:\nwhere is the representation of node at the -th iteration and is the neighbor set of node . The matrices and are respectively trainable weight matrices at the -th layer for node and its neighbors; is the network activation function.\nThe aim of this paper is to provide a route to implementing Eq. 8 ###reference_### on a quantum computer. For quantum computing, the state of a physical system composed of several qubits is obtained by a tensor product of the individual qubit quantum states, namely quantum entanglement. The tensor product is one of the fundamental constructs of quantum computing. Due to its ability to enlarge the space exponentially, the tensor product can be used to map different nodes to different representations. The tensor product is analogous to the summation operation in classical GNNs. In other words, both the tensor product and the summation are injective functions.\nThe tensor product is injective, for non-zero vectors with the same dimension. As a result, the tensor product maps them to different representations unless they are linearly dependent vectors.\nAll the proofs including that for this Lemma can be found in the Appendix. Since quantum states are complex vectors, according to Lemma 1, the tensor product maps different quantum states to different representations. 
Moreover, distinct quantum states are pairwise linearly independent.\nIf two quantum states |psi_1> and |psi_2> are linearly dependent, i.e. |psi_1> = c |psi_2>, then c = 1 and |psi_1> = |psi_2>.\nDue to the normalization property of quantum states, Lemma 2 can be demonstrated easily using the proof in the Appendix. According to Lemma 1 and Lemma 2, the tensor product maps two quantum states to the same representation if and only if the two quantum states are identical.\nSo, for the egoQGNN, the tensor product replaces the summation of the GNN, and Eq. 8 ###reference_### becomes Eq. 10,\nwhere U_1^(k) and U_2^(k) are unitary matrices with trainable parameters, which correspond to the Euclidean weight matrices W_1^(k) and W_2^(k) in the GNN of Eq. 8 ###reference_###. Unlike the weight matrices of the classical GNN, which have many numerical parameters that must be trained, each unitary matrix has only a single variational parameter that represents the rotation angle of the quantum states. Details regarding unitary matrices and their properties are given in the Appendix.\nNote that Eq. 10 ###reference_### implements the aggregation on a quantum computer and returns the result to a classical computer. We will discuss the implementation of Eq. 10 ###reference_### in the next section. Eq. 9 ###reference_### is achieved by the proposed mapping method. The representation of the next layer can be obtained by the von Neumann entropy in Eq. 11 ###reference_###.\nBoth Eq. 8 ###reference_### and Eq. 10 ###reference_### are able to distinguish non-isomorphic graphs, i.e. perform graph classification. To demonstrate the effectiveness of the egoQGNN for the graph isomorphism problem, we need to prove that Eq. 10 ###reference_### will map nodes with different features or neighbors to different representations.\nFor different nodes v and v', the outputs of Eq. 10 ###reference_###, |h_v> and |h_v'>, satisfy |h_v> != |h_v'>.\nA natural follow-up theorem is that the egoQGNN can distinguish graphs that are decided as non-isomorphic graphs by the GNNs based on Eq. 8 ###reference_###.\nThe egoQGNN maps two non-isomorphic graphs, as decided via a GNN, by Eq. 
10 ###reference_### to different embeddings.\nCompared with the GNN, the main differences provided by the egoQGNN are the tensor product aggregation operator and the unitary matrix. The advantages of using the tensor product and unitary matrix are as follows:\n1) The tensor product enlarges the node representation space exponentially. As a result, nodes with different features can be mapped to different representations. Although alternative functions such as the dot product and matrix multiplication are both injective functions, their implementations on quantum devices are not as convenient as the tensor product. This is because the tensor product is a fundamental facet of quantum computing.\n2) Since the tensor product can enlarge the representation space exponentially, the egoQGNN has significantly fewer parameters than related deep learning models. A layer of the GNN requires a weight matrix to transform a d_in-dimensional input into a d_out-dimensional output. Such a layer has d_in x d_out parameters that need to be trained. For the egoQGNN, the entanglement of n qubits can generate 2^n-dimensional quantum states. The egoQGNN has only 3n variational parameters for training if three quantum gates are applied to each qubit. This is because a unitary matrix has only one variational parameter (the rotation of the quantum states).\n###figure_1###"
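The parameter-counting argument can be checked numerically: composing three rotation gates gives a parameterized single-qubit unitary with three angles, so n qubits carry only a few trainable angles per qubit while their joint tensor-product state has 2^n amplitudes. An illustrative NumPy sketch (editorial, not the paper's code):

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])

# Three angles parameterize one qubit's rotation, so n qubits use 3n parameters
# while their joint (tensor-product) state is 2**n-dimensional.
n = 4
angles = np.random.default_rng(0).uniform(0, np.pi, size=(n, 3))
ket0 = np.array([1, 0], dtype=complex)
state = np.array([1], dtype=complex)
for a, b, c in angles:
    state = np.kron(state, rz(c) @ ry(b) @ rx(a) @ ket0)

n_params = angles.size   # 3n trainable angles
dim = state.size         # 2**n complex amplitudes
```

With n = 4 this gives 12 parameters controlling a 16-dimensional state, which is the exponential gap the text appeals to.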
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Quantum Circuit of egoQGNN",
63
+ "text": "To implement Eq. 10 ###reference_### on a quantum device, we design a quantum circuit with a hierarchical structure similar to that of the GNN. The quantum circuit we have designed contains Ulayers to capture the information within the -hop neighbors of a node. A Ulayer includes three components: Uinit, Ucov and Uent. This is illustrated in Fig. 1 ###reference_###. For input nodes, the initial states of the quantum circuit are . The output is the tensor product of the quantum states over all the qubits.\nWe allocate a quantum circuit and the requisite qubits to the ego-graphs consisting of a node and its neighbors. After the required computations have been completed, we then free the quantum circuit and its qubits for future use.\nThe Uinit component maps the data to quantum states, which corresponds to the quantum circuit of the trainable mapping method (described later in this section). Specifically, it is responsible for mapping node features from Euclidean space to Hilbert space. The Ucov component, on the other hand, has three quantum gates on each qubit: , , . Each quantum gate accepts one trainable parameter. All neighbors of a node share parameters but do not include the node. For an arbitrary qubit of the Ucov component, if the input quantum state is , the output is :\nThe corresponds to or in Eq. 10 ###reference_###. are variational parameters of quantum gates. The output of Ulayer is a tensor product over the quantum states of all nodes, corresponding to in Eq. 10 ###reference_###. Therefore, the Ulayer component implements Eq. 10 ###reference_###. In a manner similar to the GNN, the egoQGNN can aggregate features of the -hop neighbors to a node by applying Ulayers. Ucov is followed by a Uent component, which applies a CNOT gate to each pair of qubits to entangle their information. 
The parameters of the Uinit component have been trained and will not be updated during the training of the Ulayer.\nConsidering the likely effects of noise interference on quantum devices, we apply the three-bit error correction code [45 ###reference_45###] to the above circuit. According to Eq. (10 ###reference_###), the goal of the egoQGNN circuit is to aggregate the neighboring quantum states into the quantum state of the node v to update the representation of the node v. To avoid the interference of noise on the representation of the node v, an encoding component and a decoding component are respectively applied before and after the application of the Ulayer. Specifically, the encoding component first copies the information from the target qubit to the two auxiliary qubits via two CNOT gates. After applying the Ulayer, the decoding component applies three CNOT gates to assist the recovery of the state of the target qubit, as shown in Fig. 2 ###reference_###.\n###figure_2###"
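A small state-vector sketch of one Ulayer on two qubits, with Ucov as per-qubit parameterized rotations followed by a CNOT as Uent. This is an editorial NumPy illustration; the specific gate ordering and two-qubit register are assumptions for the sketch:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rot(theta):
    """Ucov's three parameterized gates on one qubit: Rz(t3) Ry(t2) Rx(t1)."""
    t1, t2, t3 = theta
    rx = np.array([[np.cos(t1 / 2), -1j * np.sin(t1 / 2)],
                   [-1j * np.sin(t1 / 2), np.cos(t1 / 2)]])
    ry = np.array([[np.cos(t2 / 2), -np.sin(t2 / 2)],
                   [np.sin(t2 / 2),  np.cos(t2 / 2)]], dtype=complex)
    rz = np.array([[np.exp(-1j * t3 / 2), 0],
                   [0, np.exp(1j * t3 / 2)]])
    return rz @ ry @ rx

def ulayer(state, thetas):
    """One Ulayer on a 2-qubit register: per-qubit rotations (Ucov),
    then a CNOT to entangle the pair (Uent)."""
    ucov = np.kron(rot(thetas[0]), rot(thetas[1]))
    return CNOT @ (ucov @ state)

state = np.kron([1, 0], [1, 0]).astype(complex)   # |00>
out = ulayer(state, [(0.3, 0.7, 0.1), (0.2, 0.5, 0.9)])
```

Because every component is unitary, the layer preserves normalization, mirroring how the circuit only rotates and entangles the encoded node states.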
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "Trainable Mapping Method",
69
+ "text": "Existing quantum machine learning techniques [1 ###reference_1###, 5 ###reference_5###, 46 ###reference_46###, 4 ###reference_4###, 47 ###reference_47###, 48 ###reference_48###]\nare mainly applied to artificial data or small-scale real-world data. One reason for this restriction is the lack of an effective mapping from a variety of real-world data types to quantum states. To address this problem, we propose trainable mapping to maintain the distances between data and to reduce information loss resulting from the mapping.\nQuantum states are distributed on the surface of the Bloch sphere that resides in Hilbert space. Mapping data from a Euclidean space to the Bloch sphere may incur a large distortion, as shown in Fig. 3 ###reference_###.\n###figure_3### The distance relations between the quantum states on the Bloch sphere are expected to be consistent with those between the data in Euclidean space. Any two sample points that are close to each other in Euclidean space should also be close after the mapping to Hilbert space. If this is not the case, the distance difference between the quantum states and the data will induce information loss and impair the subsequent analysis operations. Considering the above problem, we present a mapping method in which the loss is related to the difference in the distance relations between the Euclidean space and the Bloch sphere. The corresponding circuit implementation for this method is shown in the Uinit component of Fig. 1 ###reference_###. For -dimensional data , the mapping circuit repeatedly uses a qubit with quantum gates to map each sample to a corresponding quantum state. Each quantum gate accepts the product of the and the trainable parameter is . Obviously, the Euclidean distance is not suitable for distance measurement on the Bloch sphere, which is a unit sphere. We utilize the inverse cosine of the inner product of the quantum states for distance measure. 
The reason we chose the inverse cosine is its low computational complexity and its natural consistency with the spherical representation. We believe other non-Euclidean distance measures, e.g., hyperbolic distances, would also be compatible with our model.\nWe construct a loss based on the correlation matrices and of the data and the corresponding quantum states:\nwhere is the normalized Euclidean distance between samples and . Similarly, is the normalized distance, defined in Eq. 13, between quantum states and .\nCompared with the existing mapping methods for quantum algorithms, the main difference of the proposed method is its use of trainable parameters to maintain the distance relationships between the data and the quantum states, which reduces information loss. Our experiments will demonstrate the effectiveness of this approach."
70
+ },
71
+ {
72
+ "section_id": "4.4",
73
+ "parent_section_id": "4",
74
+ "section_name": "Ego-graph Decompositional Processing Strategy",
75
+ "text": "One bottleneck for quantum machine learning is the lack of available physical qubits in handling real-world data.\nFor efficient use of the available qubits, we propose a decompositional processing strategy based on an ego-graph decomposition. Concretely, according to Eq. 8 ###reference_### and Eq. 10 ###reference_###, the iterative updating of the representation of node is obtained by aggregating node together with its neighbors at the current iteration. As a result, the iterative updating of each node in a graph is only related to its neighbors. Therefore, we regard a node and its neighbors as an ego-graph. A graph containing nodes can be decomposed into ego-graph. The egoQGNN computes the representation of each ego-graph sequentially. As such, the process only requires a fixed number of qubits.\nTo commence, the entire graph is divided into ego-graph using a classical computer. The number of ego-graph is equal to the number of nodes. Next, the quantum circuits process each ego-graph sequentially and return their representations to the classical computer. Finally, the ego-graph representations are re-merged to reconstitute the original graph representation using the classical computer.\nFor a graph with nodes, the number of qubits required to compute its representation is accordingly reduced from to by introducing the above ego-graph decomposition, where is the maximum degree of the graph. If the number of available qubits , we can divide the neighbour sets of node into sets: . All sets satisfy the conditions:\nWe sequentially compute the representations of the sets as follows:\nEq. 16 ###reference_### is computed using the classical computer, as is stored in the classical computer.\nInput: , ; node features X={}.\nOutput: ego-graph set , ego-graph features set .\nFor the effectiveness of the decompositional processing strategy, we provide the following results.\nFor the same inputs, Eq. 16 ###reference_### is equivalent to Eq. 
10.\nWe provide a proof of this theorem in the Appendix.\nInput: input features X={}, random initial parameters P of the mapping circuit, the mapping circuit .\nOutput: trained initial parameters of the mapping circuit ."
76
+ },
77
+ {
78
+ "section_id": "4.5",
79
+ "parent_section_id": "4",
80
+ "section_name": "Structure of the egoQGNN",
81
+ "text": "The output of the quantum circuit is a tensor product over the quantum states of all the nodes, giving a high-dimensional vector. To classify graphs, their von Neumann entropy is summed over the quantum states of all nodes and used as a characterization of the graph.\n###figure_4### The classical Shannon entropy measures the uncertainty associated with a classical probability distribution for a set of data. Quantum states can be described in a similar way using density operators in place of probability distributions, and the von Neumann entropy in place of the Shannon entropy. For a system with pure quantum state the density matrix\nis , and the graph representation is:\nFor the binary classification problem, the Quantum Support Vector Machine (QSVM) [1 ###reference_1###] classifies data by measuring components of quantum states along the Pauli-Z direction. The quantum states on the upper Bloch hemisphere are assigned to one class and the quantum states on the lower Bloch hemisphere are assigned to the other class. Our approach is essentially the same as QSVM. We use two quantum states corresponding to two classes, and these correspond to the upper and lower Bloch hemispheres. The von Neumann entropies of two quantum states are and . Suppose is the representation of graph to be classified, the label of graph is:\nMore details about the training of the egoQGNN are given in the Appendix. The egoQGNN consists of the following processing steps, where is the number of iterations.\nStep 1) Classical computer decomposes graph into ego-networks.\nStep 2) Train the circuit of the mapping method with node features. The trained parameters of the circuit are stored on a classical computer. After training, the node features of the ego-networks are mapped into quantum states by applying the circuit.\nStep 3) The quantum device runs the quantum circuit to compute the representations of the different ego-networks sequentially. 
The classical computer stores all the representations computed in this way.\nStep 4) A classical computer computes the entropies of the individual representations and combines them using Eq. 18.\nStep 5) Steps 1-4 are repeated to obtain graph representations for classification. Fig. 4 shows the structure.\nInput: Ego-graph set with features , the quantum circuit of egoQGNN mentioned in Sec. 4.2, predefined von Neumann entropies of the two classes: and .\nOutput: label prediction for graph : ."
82
+ },
83
+ {
84
+ "section_id": "4.6",
85
+ "parent_section_id": "4",
86
+ "section_name": "Summary of the egoQGNN Framework",
87
+ "text": "We summarize the framework of egoQGNN, whose decompositional processing strategy is shown in Alg. 1 ###reference_###. This corresponds to step 1 of Fig. 4 ###reference_###, running on a classical computer. Alg. 2 ###reference_### involves step 2 of Fig. 4 ###reference_###. After training, Alg. 2 ###reference_### outputs the initial parameters of the mapping circuit. Step 3 and step 4 of Fig. 4 ###reference_### are summarized in Alg. 3 ###reference_###, whose output is the label assigned to the input graph."
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Experiments",
93
+ "text": "In this section, we perform evaluations of the proposed egoQGNN model on the graph classification task. We compare the egoQGNN to both state-of-the-art graph kernels and deep learning methods. We conduct experiments on six standard graph classification benchmarks."
94
+ },
95
+ {
96
+ "section_id": "5.1",
97
+ "parent_section_id": "5",
98
+ "section_name": "Datasets and Baselines",
99
+ "text": "An overview summary of the datasets used in our experiments is given in Table IV ###reference_###. The data includes MUTAG [53 ###reference_53###], four variants of PTC [54 ###reference_54###] and PROTEINS [55 ###reference_55###]. For all of the datasets, the nodes have the categorical input features required for egoQGNN.\nMUTAG: The graphs contained in this dataset represent heteroaromatic nitro or mutagenic aromatic compounds. The nodes and edges represent atoms and chemical bonds, respectively. The nodes labels represent the chemical identity of the atoms. All of the graphs in this dataset belong to one of two classes which represent whether the graph has a mutagenic effect or not.\nPTC: The Predictive Toxicology Challenge (PTC) has four variants, representing molecule carcinogenicity on male mice (PTC_MM), male rats (PTC_MR), female mice (PTC_FM), female rats (PTC_FR), respectively. The graphs from each variant are labeled by their carcinogenicity on male and female mice and rats.\nPROTEINS: This dataset includes two classes of proteins. The first is enzymes, while the second is non-enzymes. Nodes represent the different amino acids belonging to the proteins. Edges represent whether the distance between pairs of amino acids is less than 0.6 nanometres.\nTo the best of our knowledge, GraphQNTK [13 ###reference_13###] is one of the few currently available (and also the most up-to-date) quantum algorithms that can be applied to realistically sized real-world graph datasets. In fact, for the typical quantum algorithms, QSVM [1 ###reference_1###] and QCNN [9 ###reference_9###], due to the limitations on the number of qubits, high-dimensional graph data cannot be handled directly. Fortunately, Yanardag et al. [56 ###reference_56###] proposed a method based on the number of graphlets to encode high-dimensional graphs using low-dimensional representations. 
We therefore use this method to encode a high-dimensional graph as an 8-dimensional vector and then use this vector as the input to both QSVM and QCNN. Additionally, we compare egoQGNN with several state-of-the-art baselines for graph classification:\n(1) Kernel-based models: the WL subtree kernel [49] and the Subgraph Matching Kernel (CSM) [50].\n(2) State-of-the-art GNNs: Diffusion Convolutional Neural Networks (DCNN) [57], Deep Graph CNN (DGCNN) [31], Relational Graph Convolutional Networks (R-GCN) [24], Graph Isomorphism Network (GIN) [22], edGNN [25], Random Walk Graph Neural Networks (RW-GNN) [26], Dropout Graph Neural Networks (DropGNN) [51] and Invariant-Equivariant Graph Network (IEGN) [52]."
100
+ },
101
+ {
102
+ "section_id": "5.2",
103
+ "parent_section_id": "5",
104
+ "section_name": "Experimental Setup",
105
+ "text": "For experiments, we use three Ulayers in egoQGNN. All three Ulayers have identical structures but no shared parameters. In the quantum circuit of the mapping method, the feature of a node can be mapped to several qubits. In experiments, we map the feature of each node into a qubit since the node features of the datasets are simple (a scalar).\nThe quantum circuits of the proposed method are similar to QCNN and QSVM, and RX, RY, RZ gates are applied on each qubit sequentially.\nDuring the training, UOBYQA [58 ###reference_58###], which is based on a derivative-free optimization method, is used to optimize the egoQGNN.\nDue to the lack of qubits, QSVM only accepts 2-dimensional or 3-dimensional data. So, we use PCA to transform an 8-dimensional graphlet\nfrequency counts vector into a 3-dimensional vector. The quantum machine learning methods are run in simulation on a classical computer. The code for QSVM is provided by Qiskit [59 ###reference_59###], and the code for QCNN is provided by Tensorflow Quantum [60 ###reference_60###].\nThe results for the deep learning methods mostly come from the existing studies [25 ###reference_25###] [22 ###reference_22###]. Using the available codes provided by the authors, we perform 10-fold cross-validation to compute the accuracies of GIN and DGCNN on the PTC_FM, PTC_MM and PTC_FR datasets, and both R-GCN and edGNN on the PROTEINS dataset. The parameters for the deep learning methods are those provided by the authors.\nFor fairness, all of the methods compared are run on the same computing device, namely an Intel Xeon CPU E3-1270 v5 with 32GB RAM. The proposed method is implemented using the OriginQ[61 ###reference_61###] simulator on a classical computer. As yet existing quantum coding platforms[59 ###reference_59###, 60 ###reference_60###] are unable to provide the fast and effective interaction between a classical computer and a quantum computer required by the proposed framework. 
For this reason, all of the reported experiments are performed using a simulator running on a classical computer."
106
+ },
107
+ {
108
+ "section_id": "5.3",
109
+ "parent_section_id": "5",
110
+ "section_name": "Results and Discussion",
111
+ "text": "The performances on graph classification are assessed in terms of accuracy. We report the average and standard deviation of the 10-fold validation accuracies. The results achieved by egoQGNN are reported in Table IV ###reference_###.\nWe also give the performance achieved by egoQGNN without the trainable mapping method. This is done to demonstrate the effectiveness of the mapping strategy described in Section IV."
112
+ },
113
+ {
114
+ "section_id": "5.3.1",
115
+ "parent_section_id": "5.3",
116
+ "section_name": "5.3.1 Comparison with machine learning methods",
117
+ "text": "For the evaluation, we employ the same structure for the proposed egoQGNN model on all of the graph datasets studied. Results in Table IV ###reference_### indicate that egoQGNN achieves the best results on 5 out of the 7 benchmarks, showing that in many cases a clear improvement is obtained with respect to the GNN models. The accuracies of the egoQGNN with a trainable mapping method on PTC_FM and PTC_MM are and respectively, which are roughly equivalent to less powerful methods. For PTC_FR and PTC_MR, egoQGNN achieves and improvements over the next-best performing methods. Besides, even in the cases where egoQGNN does not achieve top performance, its accuracy is close to that of the GNNs studied."
118
+ },
119
+ {
120
+ "section_id": "5.3.2",
121
+ "parent_section_id": "5.3",
122
+ "section_name": "5.3.2 Comparison with alternative quantum machine learning methods",
123
+ "text": "Gra+QSVM and Gra+QCNN in Tabel IV ###reference_### refer to using the graphlet count vector as the input to QSVM and QCNN, respectively. QSVM has its mapping circuit shown in the Appendix but QCNN does not. We apply the mapping circuits shown in Fig. 1 ###reference_### to QCNN (Gra+QCNN (w/ M)). The results of Gra+QSVM, Gra+QCNN and Gra+QCNN (w/ M) represent no improvement on egoQGNN. Moreover, compared to Gra+QCNN, Gra+QCNN (w/ M) achieves higher accuracy. Notably, compared to GraphQNTK, our model performs better on all datasets except MUTAG. The reason for this may be the hierarchical structure of our model, which is not adopted in GraphQNTK. This demonstrates that our proposed trainable mapping method is also effective when combined with alternative quantum machine learning algorithms."
124
+ },
125
+ {
126
+ "section_id": "5.3.3",
127
+ "parent_section_id": "5.3",
128
+ "section_name": "5.3.3 Comparison of egoQGNN (w/o M) and egoQGNN",
129
+ "text": "We observe that egoQGNN when combined with our trainable mapping method slightly and consistently outperforms egoQGNN without this trainable mapping i.e. egoQGNN (w/o M). Since they have the same structure, the improvement may be attributed to less information loss compared to the egoQGNN without the mapping method. Note that the accuracy of egoQGNN is about 9% higher than egoQGNN (w/o M) on the PROTEINS dataset, while for the MUTAG dataset, the accuracy of egoQGNN (w/o M) is close to that of egoQGNN. Table IV ###reference_### shows that PROTEINS uses 61 node labels while MUTAG uses only seven. This means that the PROTEINS dataset requires higher dimensionality for the node features and suffers more serious information loss. The results show that our trainable mapping method reduces information loss and improves performance. Besides, for most datasets, the standard error for egoQGNN is less than that for egoQGNN (w/o M). For example, the standard errors for egoQGNN (w/o M) and egoQGNN on PTC_FR are respectively and . When there is no trainable mapping, the elements of the node feature vector are used as the gate parameters to map data to quantum states. This leads to a random distribution of quantum states in the Hilbert space."
130
+ },
131
+ {
132
+ "section_id": "5.3.4",
133
+ "parent_section_id": "5.3",
134
+ "section_name": "5.3.4 Comparison of parameters",
135
+ "text": "Compared with the deep learning methods, egoQGNN has fewer parameters but achieves comparable performance, as shown in Table V ###reference_###. One of the possible reasons is that as mentioned in [7 ###reference_7###], the Hilbert space is a high-dimensional space, and the performance of quantum machine learning algorithms can be improved even though their parameters are fewer in number. For example, for a 32-dimension input, a layer of a GNN model requires a weight matrix to transform the input to a 32-dimension output. As a result, this layer has 1024 parameters. For quantum machine learning, the entanglement of 5 qubits generates 32-dimensional quantum states. Suppose that three quantum gates are applied to each qubit and each quantum gate has a parameter. In this instance, the quantum machine learning model transforms a 32-dimensional quantum state to a new 32-dimensional quantum state as output using only 15 parameters. Besides, similar to [38 ###reference_38###], egoQGNN captures the feature of the nodes in a non-Euclidean space. This reduces the distortion and leads to an improvement in performance."
136
+ },
137
+ {
138
+ "section_id": "5.3.5",
139
+ "parent_section_id": "5.3",
140
+ "section_name": "5.3.5 Comparison of run-times",
141
+ "text": "We report the average run times of egoQGNN and the baselines on MUTAG in Fig 5 ###reference_###. For fairness, all the methods run on a machine with an Intel Xeon CPU E3-1270 v5. The run-time of the proposed method is comparable to the alternative quantum methods and superior to several of the deep learning methods. One important observation is that egoQGNN is less time-consuming than QCNN and close in performance to QSVM. It is worth noting that deep learning based methods, i.e. DCGNN, GIN and RGCN are no faster than the quantum computing methods since the comparison is executed on a conventional CPU-based machine.\n###figure_5###"
142
+ },
143
+ {
144
+ "section_id": "6",
145
+ "parent_section_id": null,
146
+ "section_name": "Conclusions and Outlook",
147
+ "text": "In this paper, we have developed a novel hybrid quantum-classical algorithm for graph-structured data, namely the Ego-graph based Quantum Graph Neural Network (egoQGNN). We have introduced the theoretical framework of the egoQGNN and provided mathematical proof concerning its ability to identify graph isomorphisms. We also propose a decompositional processing strategy, which liberates egoQGNN from the limitation of the number of qubits. With the aid of classical computers, egoQGNN can handle graphs with larger sizes as input on a quantum device of a given size. Moreover, to reduce information loss during the mapping of data to quantum states, a trainable method is proposed. Experimental results demonstrate this method is beneficial and leads to improvements in the performance of quantum machine learning algorithms. There are several potentially interesting directions for future work. As a long-term goal, quantum techniques need to be utilized to achieve exponential speed-up.In the short term, given the fast methodological developments [62 ###reference_62###, 63 ###reference_63###, 64 ###reference_64###] on the node-level embedding with classic computers, it is also imperative to develop competitive quantum counterparts that are feasible on near-term quantum devices. A natural extension for quantum node embedding is to develop quantum solvers for the problem of combinatorial optimization e.g. graph matching [65 ###reference_65###, 66 ###reference_66###], since there are a few quantum solvers [67 ###reference_67###] and these are yet not learnable."
148
+ }
149
+ ],
150
+ "appendix": [
151
+ {
152
+ "section_id": "Appendix 1",
153
+ "parent_section_id": null,
154
+ "section_name": "Appendix A Proof and Analysis",
155
+ "text": "Proof of the Lemma 1:\nFor four different quantum states satisfying the following condition:\nSuppose are dimensional quantum states, are dimensional quantum states. The are -th or -th components of respectively. So, the above formula can be rewritten as:\nSo, for any :\nObviously, the are linear correlations. Suppose , :\nThus, the are also linear correlations.\nEq.(22 ###reference_###) can be rewritten as:\nNaturally, , the are linear correlation. This means that the tensor product does not map non-zero vectors to the same representation unless they are linearly correlated.\nProof of the Lemma 2:\nFor two linearly dependent quantum states: and , . Suppose and are dimensional quantum states, the and are the -th components of and respectively. According to the property of quantum states:\nand are linearly dependent quantum states and . Thus, the above formula can be rewritten as:\nSo, or . It means that if and are linearly dependent quantum states, .\nProof of the Lemma 3:\nFor nodes and , suppose and are quantum states of and in -th layer respectively. There are two situations for and :\nand have different numbers of neighbors.\nIf and have different numbers of neighbors, the dimensions of and are different. The and are 2-dimensional quantum states, because each node is represented by a qubit. The quantum state of a qubit is 2-dimensional. So, if and have different numbers of neighbors, the dimensions of and are different, .\nand have the same number of neighbors.\nThe Eq. 10 can be rewritten as below:\nAccording to Lemma 1 and Lemma 2, the tensor product maps two quantum states, and , to the same representation if and only if or . Therefore, , .\nProof of Theorem 4:\nSuppose for two non-isomorphic graphs and , the collections of representations of all nodes in the last layer of GNN are and respectively. Similarly, the collections of representations of all nodes in the last layer of egoQGNN are and .\nIf GNNs decide and are non-isomorphic, . 
According to Lemmas 1-3, for egoQGNN, . Moreover, according to Eq. 11:\nThe representations of and are therefore unequal. So, egoQGNN can distinguish graphs that are decided to be non-isomorphic by GNNs."
156
+ }
157
+ ],
158
+ "tables": {
159
+ "1": {
160
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Comparison of the existing quantum graph representation learning methods, shows each model: is a hybrid quantum-classical algorithm or not, is able to apply to graph isomorphism or not, has hierarchy architecture or not, can handle real-world datasets or not</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.1.1\">Models</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.1.2\">Hybrid</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.1.3\">Graph Isomorphism</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.1.4\">Hierarchy architecture</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S2.T1.1.1.1.5\">Real-World dataset</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.1.1\">QGNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib6\" title=\"\">6</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.1.2\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.1.3\">No</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.1.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.2.1.5\">No</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.3.2.1\">QGCN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib10\" title=\"\">10</a>]</cite>\n</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.3.2.2\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.3.2.3\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.3.2.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.3.2.5\">No</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.4.3.1\">EQGC\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">11</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.4.3.2\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.4.3.3\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.4.3.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.4.3.5\">No</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.5.4.1\">QGCNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib44\" title=\"\">44</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.5.4.2\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.5.4.3\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.5.4.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.5.4.5\">No</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.6.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.6.5.1\">GraphQNTK\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib13\" title=\"\">13</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.6.5.2\">Yes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.6.5.3\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.6.5.4\">No</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.6.5.5\">Yes</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.7.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.1.7.6.1\">egoQGNN(ours)</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_b\" id=\"S2.T1.1.7.6.2\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.1.7.6.3\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.1.7.6.4\">Yes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S2.T1.1.7.6.5\">Yes</td>\n</tr>\n</tbody>\n</table>\n</figure>",
161
+ "capture": "TABLE I: Comparison of the existing quantum graph representation learning methods, shows each model: is a hybrid quantum-classical algorithm or not, is able to apply to graph isomorphism or not, has hierarchy architecture or not, can handle real-world datasets or not"
162
+ },
163
+ "2": {
164
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Notations and their descriptions</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.14\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.14.15.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.14.15.1.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.14.15.1.1.1\">Symbol</p>\n</th>\n<th class=\"ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T2.14.15.1.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.14.15.1.2.1\">Definition</p>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.1.1.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.1.1.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.3.3.3\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.3.3.3.2.2\">node \u2019s representation of -th iteration</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.5\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.4.4.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.4.4.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.5.5.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.5.5.2.1.1\">a set of nodes adjacent to node .</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" 
id=\"S3.T2.6.6.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.6.6.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.6.6.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.6.6.2.1\">activation function</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.7.7.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.7.7.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.8.8.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.8.8.2.1.1\">weight matrix of -th iteration</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.9.9\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.9.9.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.9.9.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.9.9.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.9.9.2.1\">Quantum state and its complex conjugate transpose</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.10\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.10.10.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.10.10.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.10.10.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.10.10.2.1\">arbitrary unitary matrix and corresponding adjoint</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.11.11\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.11.11.1\" 
style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.11.11.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.11.11.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.11.11.2.1\">Hadamard gate</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.12\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.12.12.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.12.12.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.12.12.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.12.12.2.1\">X gate, Y gate, Z gate</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.13.13\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.13.13.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.13.13.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T2.13.13.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.13.13.2.1\">rotation operators about the Pauli-X, Pauli-Y, Pauli-Z axes</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.14.14\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T2.14.14.1\" style=\"width:42.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.14.14.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T2.14.14.2\" style=\"width:142.3pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T2.14.14.2.1\">tensor product</p>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE II: Notations and their descriptions"
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Statistics of the used graph datasets.</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T3.1\" style=\"width:368.6pt;height:156.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(14.7pt,-6.2pt) scale(1.08676424459395,1.08676424459395) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.1.1.1\">Datasets</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.2\">MUTAG</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.3\">PTC_MR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.4\">PTC_FM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.5\">PTC_MM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.6\">PTC_FR</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.1.1.7\">PROTEINS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.2.2.1\">Max # nodes</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.2.2.2\">28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.2.2.3\">109</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.2.2.4\">64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.2.2.5\">64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.2.2.6\">64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.2.2.7\">620</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th 
ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.3.3.1\">Mean#nodes</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.3.2\">17.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.3.3\">14.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.3.4\">14.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.3.5\">14.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.3.6\">14.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.3.3.7\">39.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.4.4.1\">Mean # edges</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.4.4.2\">19.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.4.4.3\">14.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.4.4.4\">29.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.4.4.5\">28.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.4.4.6\">30.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.4.4.7\">72.82</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.5.5.1\"># graphs</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.5.5.2\">188</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.5.5.3\">344</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.5.5.4\">349</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.5.5.5\">336</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.5.5.6\">351</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.5.5.7\">1113</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.6.6\">\n<th class=\"ltx_td ltx_align_left 
ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.6.6.1\"># node labels</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.6.2\">6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.6.3\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.6.4\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.6.5\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.6.6\">19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.6.6.7\">61</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T3.1.1.7.7.1\"># edge labels</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.7.2\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.7.3\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.7.4\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.7.5\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.7.6\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.1.7.7.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S5.T3.1.1.8.8.1\"># classes</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.1.1.8.8.2\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.1.1.8.8.3\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.1.1.8.8.4\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.1.1.8.8.5\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S5.T3.1.1.8.8.6\">2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" 
id=\"S5.T3.1.1.8.8.7\">2</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE III: Statistics of the used graph datasets."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE IV: </span>Evaluation for graph classification accuracy over six benchmarks (in % standard error).</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T4.86\" style=\"width:433.6pt;height:242.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-40.6pt,22.7pt) scale(0.842417568370842,0.842417568370842) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.86.84\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.86.84.85.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T4.86.84.85.1.1\">Datasets</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.86.84.85.1.2\">MUTAG</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.86.84.85.1.3\">PTC_FM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.86.84.85.1.4\">PTC_FR</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.86.84.85.1.5\">PTC_MM</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.86.84.85.1.6\">PTC_MR</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.86.84.85.1.7\">PROTEINS</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.8.6.6\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T4.8.6.6.7\">WL\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib49\" title=\"\">49</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.4.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.5.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.6.4.4.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.7.5.5.5\"></td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_t\" id=\"S5.T4.8.6.6.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.13.11.11\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.13.11.11.6\">CSM\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib50\" title=\"\">50</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.9.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.10.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.11.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.12.10.10.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.13.11.11.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.13.11.11.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.19.17.17\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.19.17.17.7\">DGCNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib31\" title=\"\">31</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.14.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.15.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.16.14.14.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.17.15.15.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.18.16.16.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.19.17.17.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.24.22.22\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.24.22.22.6\">R-GCN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib24\" title=\"\">24</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.20.18.18.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.21.19.19.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.22.20.20.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.23.21.21.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.24.22.22.5\"></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T4.24.22.22.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.29.27.27\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.29.27.27.6\">edGNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib22\" title=\"\">22</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.25.23.23.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.26.24.24.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.27.25.25.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.28.26.26.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.29.27.27.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.29.27.27.7\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.35.33.33\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.35.33.33.7\">GIN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib25\" title=\"\">25</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.30.28.28.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.31.29.29.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.32.30.30.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.33.31.31.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.34.32.32.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.35.33.33.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.41.39.39\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.41.39.39.7\">RW-GNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib26\" title=\"\">26</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.36.34.34.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.37.35.35.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.38.36.36.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.39.37.37.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.40.38.38.5\"></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T4.41.39.39.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.47.45.45\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.47.45.45.7\">DropGNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib51\" title=\"\">51</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.42.40.40.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.43.41.41.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.44.42.42.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.45.43.43.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.46.44.44.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.47.45.45.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.53.51.51\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.53.51.51.7\">IEGN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib52\" title=\"\">52</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.48.46.46.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.49.47.47.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.50.48.48.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.51.49.49.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.52.50.50.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.53.51.51.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.59.57.57\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T4.59.57.57.7\">Gra+QSVM\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib1\" title=\"\">1</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.54.52.52.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.55.53.53.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.56.54.54.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.57.55.55.4\"></td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_t\" id=\"S5.T4.58.56.56.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.59.57.57.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.65.63.63\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.65.63.63.7\">Gra+QCNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib4\" title=\"\">4</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.60.58.58.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.61.59.59.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.62.60.60.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.63.61.61.4\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.64.62.62.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.65.63.63.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.71.69.69\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.71.69.69.7\">Gra+QCNN (w/ M)\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib4\" title=\"\">4</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.66.64.64.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.67.65.65.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.68.66.66.3\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.69.67.67.4\">\n 1</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.70.68.68.5\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.71.69.69.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.74.72.72\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.T4.74.72.72.4\">GraphQNTK\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib13\" title=\"\">13</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.72.70.70.1\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.74.72.72.5\">-</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.74.72.72.6\">-</td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S5.T4.74.72.72.7\">-</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.73.71.71.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T4.74.72.72.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.80.78.78\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_t\" id=\"S5.T4.80.78.78.7\">egoQGNN (w/o M)</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.75.73.73.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.76.74.74.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.77.75.75.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.78.76.76.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.79.77.77.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S5.T4.80.78.78.6\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.86.84.84\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_b\" id=\"S5.T4.86.84.84.7\">egoQGNN</th>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.81.79.79.1\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.82.80.80.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.83.81.81.3\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.84.82.82.4\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.85.83.83.5\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S5.T4.86.84.84.6\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE IV: Evaluation for graph classification accuracy over six benchmarks (in % standard error)."
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">TABLE V: </span>Quantum model parameter size (the number of parameters) comparison. The parameters of GNN include all the parameters of the weight matrix to be trained. The parameters of the quantum algorithm are the angles required by all of the quantum gates (unitary matrix).</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.T5.1\" style=\"width:151.8pt;height:110.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-35.4pt,25.8pt) scale(0.68177970284135,0.68177970284135) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T5.1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T5.1.1.1.1.1\">Quantum model</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T5.1.1.1.1.2\">Parameter size</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T5.1.1.2.2.1\">DGCNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib31\" title=\"\">31</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T5.1.1.2.2.2\">2,560</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.1.3.3.1\">R-GCN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib24\" title=\"\">24</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T5.1.1.3.3.2\">16,704</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.1.4.4.1\">edGNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib22\" title=\"\">22</a>]</cite>\n</th>\n<td 
class=\"ltx_td ltx_align_right\" id=\"S5.T5.1.1.4.4.2\">9,345</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.1.5.5.1\">GIN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib25\" title=\"\">25</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T5.1.1.5.5.2\">13,000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T5.1.1.6.6.1\">QSVM\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib1\" title=\"\">1</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T5.1.1.6.6.2\">36</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T5.1.1.7.7.1\">QCNN\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib4\" title=\"\">4</a>]</cite>\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T5.1.1.7.7.2\">54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T5.1.1.8.8.1\">egoQGNN (w/o M)</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T5.1.1.8.8.2\">36</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S5.T5.1.1.9.9.1\">egoQGNN</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T5.1.1.9.9.2\">43</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE V: Quantum model parameter size (the number of parameters) comparison. The parameters of GNN include all the parameters of the weight matrix to be trained. The parameters of the quantum algorithm are the angles required by all of the quantum gates (unitary matrix)."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2201.05158v3_figure_1.png",
+ "caption": "Figure 1: Ulayer with Uinit, Ucov, and Uent. Uinit contains n quantum gates to map n-dimensional data X into quantum states. In Ucov, RX, RY, RZ quantum gates are applied to each qubit. Uent utilizes CNOT gates to entangle all qubits.",
+ "url": "http://arxiv.org/html/2201.05158v3/x1.png"
+ },
+ "2": {
+ "figure_path": "2201.05158v3_figure_2.png",
+ "caption": "Figure 2: Following the three-bit error correction code proposed by [45], the error correction of egoQGNN can be achieved by applying the U_c and U_p modules after and before Ulayer, respectively.",
+ "url": "http://arxiv.org/html/2201.05158v3/x2.png"
+ },
+ "3": {
+ "figure_path": "2201.05158v3_figure_3.png",
+ "caption": "Figure 3: In Euclidean space, the distance between the green point and the orange one is smaller than that between the green and the red ones. After mapping the data points to Hilbert space, points close to each other in Euclidean space become more distant quantum states (the green and orange lines), while the distance between the points which are farther away in Euclidean space becomes closer (the red and green lines).",
+ "url": "http://arxiv.org/html/2201.05158v3/x3.png"
+ },
+ "4": {
+ "figure_path": "2201.05158v3_figure_4.png",
+ "caption": "Figure 4: An instance of the egoQGNN. Step 1 and Step 4 are implemented on a classical computer. The quantum circuits of Step 2 and Step 3 can run on a quantum computer or on a simulator on a classical computer.",
+ "url": "http://arxiv.org/html/2201.05158v3/x4.png"
+ },
+ "5": {
+ "figure_path": "2201.05158v3_figure_5.png",
+ "caption": "Figure 5: Running time comparison on MUTAG. The blue and orange bars indicate the per-epoch training time and test time respectively.",
+ "url": "http://arxiv.org/html/2201.05158v3/x5.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2201.05158v3"
+ }
20240119/2203.09773v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2205.05359v3.json ADDED
@@ -0,0 +1,165 @@
+ {
+ "title": "Article Title",
+ "abstract": "The increased predictive power of machine learning models comes at the cost of increased complexity and loss of interpretability, particularly in comparison to parametric statistical models. This trade-off has led to the emergence of eXplainable AI (XAI), which provides methods, such as local explanations (LEs) and local variable attributions (LVAs), to shed light on how a model uses predictors to arrive at a prediction. These provide a point estimate of the linear variable importance in the vicinity of a single observation. However, LVAs tend not to effectively handle association between predictors. To understand how the interaction between predictors affects the variable importance estimate, we can convert LVAs into linear projections and use the radial tour. This is also useful for learning how a model has made a mistake, the effect of outliers, or the clustering of observations. The approach is illustrated with examples from categorical (penguin species, chocolate types) and quantitative (soccer/football salaries, house prices) response models. The methods are implemented in the R package cheem, available on CRAN.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "There are different reasons and purposes for fitting a model. According\nto the taxonomies of breiman_statistical_2001 and\nshmueli_explain_2010, it can be useful to group models into two\ntypes: explanatory and predictive. Explanatory modeling is used for\ninferential purposes, while predictive modeling focuses solely on the\nperformance of an objective function. The intended use of the model has\nimportant implications for its selection and development.\nInterpretability is critical in explanatory modeling to draw meaningful\ninferential conclusions, such as which variables most contribute to a\nprediction or whether some observations are less well fit.\nInterpretability becomes more difficult when the model is nonlinear.\nNonlinearity occurs in statistical models with polynomial or\ninteraction terms between quantitative predictors, and in almost all\ncomputational models such as random forests, support vector machines, or\nneural networks\n[e.g. breiman_random_2001, boser_training_1992, anderson_introduction_1995].\nIn linear models, interpretation of the importance of variables is\nrelatively straightforward: one adjusts for the covariance of multiple\nvariables when examining the relationship with the response. The\ninterpretation is valid for the full domain of the predictors. In\nnonlinear models, one needs to consider the model in small neighborhoods\nof the domain to make any assessment of variable importance. Even though\nthis is difficult, it is especially important to interpret model fits as\nwe become more dependent on nonlinear models for routine aspects of life,\nto avoid issues described in stahl-ethics. It is also important to\nunderstand how nonlinear models behave when usage extrapolates outside\nthe domain of the predictors, either in sub-spaces where few samples were\nprovided in the training set or beyond the domain entirely, because\nnonlinear models can vary wildly and predictions can be dramatically\nwrong in these areas.\nExplainable Artificial Intelligence (XAI) is an emerging field of\nresearch focused on methods for interpreting models\n[adadi_peeking_2018, arrieta_explainable_2020]. A class of\ntechniques, called local explanations (LEs), provides methods to\napproximate linear variable importance, called local variable\nattributions (LVAs), at the location of each observation or at the\nprediction for a specific point in the data domain. Because these are\npoint-specific, it is challenging to comprehensively visualize them to\nunderstand a model. There are common approaches for visualizing\nhigh-dimensional data as a whole, but what is needed are new approaches\nfor viewing these individual LVAs relative to the whole.\nFor multivariate data visualization, a tour\n[asimov_grand_1985, buja_grand_1986, lee_state_2021] of linear\ndata projections onto a lower-dimensional space could be an element of\nXAI, complementing LVAs. Applying tours to model interpretation is\nrecommended by wickham_visualizing_2015, primarily to examine the\nfitted model in the space of the data. cook_interactive_2007\ndescribe the use of tours for exploring classification boundaries and\nmodel diagnostics\n[Caragea2008, lee_pptree_2013, da_silva_projection_2021]. There\nare various types of tours. In a manual or radial tour\n[cook_manual_1997, spyrison_spinifex_2020], the path of linear\nprojections is defined by changing the contribution of a selected\nvariable. We propose to use this to scrutinize the LVAs. This approach\ncan be considered a counter-factual, what-if analysis, similar to\nceteris paribus (\u201cother things held constant\u201d) profiles\n[biecek_ceterisparibus_2020].\nThe remainder of this paper is organized as follows. Section 2 covers\nthe background of LEs and the traditional visuals produced. Section 3\nexplains tours, particularly the radial manual tour. Section 4 discusses\nthe visual layout in the graphical user interface and how it facilitates\nanalysis, data pre-processing, and package infrastructure. Illustrations\nare provided in Section 5 for a range of supervised learning tasks with\ncategorical and quantitative response variables. These show how the LVAs\ncan be used to get an overview of the model\u2019s use of predictors and to\ninvestigate errors in the model predictions. Section 6 concludes with a\nsummary of the insights gained. The methods are implemented in the R\npackage cheem."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Local Explanations",
+ "text": "LVAs shed light on machine learning model fits by estimating linear\nvariable importance in the vicinity of a single observation. There are\nmany approaches for calculating LVAs. A comprehensive summary of the\ntaxonomy of currently available methods is provided in Figure 6 by\narrieta_explainable_2020. It includes a large number of\nmodel-specific explanations such as deepLIFT\n[shrikumar_not_2016, shrikumar_learning_2017], a popular recursive\nmethod for estimating importance in neural networks. There are fewer\nmodel-agnostic methods, of which LIME [ribeiro_why_2016] and\nSHapley Additive exPlanations (SHAP) [lundberg_unified_2017] are\npopular.\nThese observation-level explanations are used in various ways depending\non the data. In image classification, where pixels correspond to\npredictors, saliency maps overlay or offset a heatmap to indicate\nimportant pixels [simonyan_deep_2014]. For example, pixels\ncorresponding to snow may be highlighted as important contributors when\ndistinguishing if a picture contains a coyote or husky. In text\nanalysis, word-level contextual sentiment analysis highlights the\nsentiment and magnitude of influential words [vanni_textual_2018].\nIn the case of numeric regression, they are used to explain the additive\ncontributions of variables from the model intercept to the observation\u2019s\nprediction [ribeiro_why_2016].\nWe will be focusing on SHAP values in this paper, but the approach is\napplicable to any method used to calculate the LVAs. SHAP calculates the\nvariable contributions of one observation by examining the effect of\nother variables on the predictions. The term \u201cSHAP\u201d refers to\nshapley_value_1953\u2019s method to evaluate an individual\u2019s\ncontribution in cooperative games by assessing this player\u2019s performance\nin the presence or absence of other players.\nstrumbelj_efficient_2010 introduced SHAP for LEs in machine\nlearning models. 
Variable importance can depend on the sequence in which\nvariables are entered into the model fitting process, thus for any\nsequence we get a set of variable contribution values for a single\nobservation. These values will add up to the difference between the\nfitted value for the observation, and the average fitted value for all\nobservations. Using all possible sequences, or permutations, gives\nmultiple values for each variable, which are averaged to get the SHAP\nvalue for an observation. It can be helpful to standardize variables\nprior to computing SHAP values if they have been measured on different\nscales.\nThe approach is related to partial dependence plots (for example see\nchapter 8 of molnar2022), used to explain the effect of a\nvariable by predicting the response for a range of values on this\nvariable after fixing the value of all other variables to their mean.\nHowever, partial dependence plots are a global approximation of the\nvariable importance, while SHAP is specific to one observation.\n###figure_1### We use 2020 season FIFA data [leone_fifa_2020] to illustrate SHAP\nfollowing the procedures described in biecek_explanatory_2021.\nThere are 5000 observations of nine predictor variables measuring\nplayers\u2019 skills and one response variable, wages (in euros). A random\nforest model is fit regressing players\u2019 wages on the skill variables. In\nthis illustration in Figure 1 ###reference_### the SHAP values are\ncompared for a star offensive player (L. Messi) and a prominent\ndefensive player (V. van Dijk). We are interested in knowing how the\nskill variables locally contribute to the wage prediction of each\nplayer. A difference in the attribution of the variable importance\nacross the two positions of the players can be expected. This would be\ninterpreted as how a player\u2019s salary depends on which combination of\nskills. 
Panel (a) is a version of a breakdown plot\n[gosiewska_ibreakdown_2019] where just three sequences of\nvariables are shown, for two observations. A breakdown plot shows the\nabsolute values of the variable attribution for an observation, usually\nsorted from the highest value to the lowest. There is no scale on the\nhorizontal axis here because values are considered relative to each\nother. Here we can see how the variable contribution can change\ndepending on sequence, relative to both players. (Note that the order of\nthe variables is different in each plot because they have been sorted by\nthe biggest average contribution across both players.) For all\nsequences, and for both players reaction has the strongest\ncontribution, with perhaps more importance for the defensive player.\nThen it differs by player: for Messi offense and\nmovement have the strongest contributions, and for van Dijk it\nis defense and power, regardless of the variable\nsequence.\nPanel (b) shows the differences in the player\u2019s median values (large\ndots) for 25 such sequences (tick marks). We can see that the wage\npredictions for the two players come from different combinations of\nskill sets, as might be expected for players whose value on the team\ndepends on their offensive or defensive prowess. It is also interesting\nto see from the distribution of values across the different sequences of\nvariables, that there is some multimodality. For example, look at the\nSHAP values for reaction for Messi, and in some sequences,\nreaction has a much lower contribution than others. This suggests that\nother variables (offense, movement probably) can\nsubstitute for reaction in the wage prediction.\nThis can also be considered similar to examining the coefficients from\nall subsets regression, as described in\nwickham_visualizing_2015. Various models that are similarly good\nmight use different combinations of the variables. 
Examining the\ncoefficients from multiple models helps to understand the relative\nimportance of each variable in the context of all other variables. This\nis similar to the approach here with SHAP values: by examining the\nvariation in values across different permutations of variables, we can\ngain more understanding of the relationship between the response and\npredictors.\nFor the application, we use tree SHAP, a variant of SHAP that\nenjoys a lower computational complexity\n[lundberg_consistent_2018]. Instead of aggregating over sequences\nof the variables, tree SHAP calculates observation-level variable\nimportance by exploring the structure of the decision trees. Tree SHAP\nis only compatible with tree-based models, so random forests are used\nfor illustration.\nThere are numerous R packages currently available on CRAN that provide functions\nfor computing SHAP and other LVA values, including treeshap [kominsarczyk_treeshap_2023], fastshap [fastshap],\nkernelshap [kernelshap], shapr [shapr],\nshapviz [shapviz], PPtreeregViz\n[PPtreeregViz], ExplainPrediction\n[ExplainPrediction], flashlight [flashlight], and\nthe package DALEX has many resources [biecek_dalex_2018].\nmolnar2022 provides good explanations of the\ndifferent methods and how to apply them to different models."
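The permutation-averaging description of SHAP above can be sketched in a few lines. This is a hedged illustration, not the paper's tree SHAP or any of the R packages it cites: it brute-forces all variable orderings, mean-imputes "absent" variables from a background sample (a common simplification), and the names `shap_values`, `f`, and `background` are ours.

```python
from itertools import permutations
import statistics

def shap_values(f, x, background):
    """Exact SHAP values for one observation x under prediction function f.

    "Absent" variables are replaced by their mean over `background`
    (an assumed simplification; real implementations differ).
    """
    p = len(x)
    means = [statistics.fmean(col) for col in zip(*background)]

    def value(subset):
        # Predict with variables in `subset` set to x, others to their mean.
        z = [x[j] if j in subset else means[j] for j in range(p)]
        return f(z)

    phi = [0.0] * p
    orders = list(permutations(range(p)))
    for order in orders:
        present = set()
        for j in order:
            before = value(present)
            present.add(j)
            # Marginal contribution of variable j in this ordering.
            phi[j] += value(present) - before
    # Average over all orderings.
    return [v / len(orders) for v in phi]
```

For an additive model the contributions do not depend on the ordering, and the SHAP values sum to the difference between the observation's prediction and the mean prediction, as the text describes.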
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Tours and the Radial Tour",
21
+ "text": "A tour enables the viewing of high-dimensional data by animating\nmany linear projections with small incremental changes. It is achieved\nby following a path of linear projections (bases) of high-dimensional\nspace. One key feature of the tour is the object permanence of the data\npoints; one can track the relative change of observations in time and\ngain information about the relationships between points across multiple\nvariables. There are various types of tours that are distinguished by\nhow the paths are generated [lee_state_2021, cook_grand_2008].\nThe manual tour [cook_manual_1997] defines its path by changing a\nselected variable\u2019s contribution to a basis to allow the variable to\ncontribute more or less to the projection. The requirement that a basis\nneeds to be orthonormal (columns correspond to vectors with unit length,\northogonal to each other) constrains the contribution of all other\nvariables. The manual tour is primarily used to assess\nthe importance of a variable to the structure visible in a projection.\nIt also lends itself to pre-computation queued in advance or computed on\nthe fly for human-in-the-loop analysis\n[karwowski_international_2006].\nA version of the manual tour called a radial tour is implemented\nin spyrison_spinifex_2020 and forms the basis of this new work.\nIn a radial tour, the selected variable can change its magnitude of\ncontribution but not its angle; it must move along the direction of its\noriginal contribution. The implementation allows for pre-computation and\ninteractive re-calculation to focus on a different variable. In this work, the radial tour allows us to explore the sensitivity of the LVA to the prediction of a model.\n###figure_2###"
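The radial tour's constraint can be illustrated with a minimal sketch for a 1D basis. This is our simplified assumption of the idea, not the spinifex implementation: the selected variable keeps its sign (direction) while its magnitude changes, and the remaining entries are rescaled so the basis stays unit-length (orthonormality reduces to unit norm in 1D). The name `radial_step` is hypothetical.

```python
import math

def radial_step(basis, k, new_mag):
    """Set variable k's contribution magnitude to new_mag (0..1) in a
    unit-length 1D basis, rescaling the other entries to preserve norm."""
    p = len(basis)
    sign = 1.0 if basis[k] >= 0 else -1.0
    rest_norm = math.sqrt(sum(basis[j] ** 2 for j in range(p) if j != k))
    # Remaining entries share the leftover squared norm proportionally.
    # (Degenerate case rest_norm == 0 is ignored in this sketch.)
    scale = math.sqrt(max(0.0, 1.0 - new_mag ** 2)) / rest_norm
    out = [v * scale for v in basis]
    out[k] = sign * new_mag
    return out
```

Animating `new_mag` from the variable's current contribution down to zero (and back up toward one) yields the sequence of intermediate bases that the radial tour displays.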
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "The Cheem Viewer",
27
+ "text": "To explore the LVAs, coordinated views [roberts_state_2007]\n[also known as ensemble graphics, unwin_ensemble_2018] are\nprovided in the cheem viewer application. There are two primary\nplots: the global view to give the context of all of the SHAP\nvalues and the radial tour view to explore the LVAs with\nuser-controlled rotation. There are numerous user inputs, including\nvariable selection for the radial tour and observation selection for\nmaking comparisons. There are different plots used for the categorical\nand quantitative responses. Figures 3 ###reference_### and\n4 ###reference_### are screenshots showing the cheem viewer for\nthe two primary tasks: classification (categorical response) and\nregression (quantitative response)."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Global View",
33
+ "text": "The global view provides context for all observations and facilitates\nthe exploration of the separability of the data and attribution spaces.\nThe attribution space refers to the SHAP values for each observation.\nThese spaces both have dimensionality n \u00d7 p, where n is the\nnumber of observations and p is the number of variables.\nThe visualization is composed of the first two principal components of\nthe data (left) and the attribution (middle) spaces. These single 2D\nprojections will not reveal all of the structure of higher-dimensional\nspace, but they are helpful visual summaries. In addition, a plot of the\nobserved against predicted response values is also provided (Figures\n3 ###reference_###c, 4 ###reference_###b) to help\nidentify observations poorly predicted by the model. For classification\ntasks, color indicates the predicted class and misclassified\nobservations are circled in red. Linked brushing between the plots is\nprovided (click and drag), and a tabular display of selected points helps to facilitate\nthe exploration of the spaces and the model (shown in Figure\n4 ###reference_###d).\nWhile the comparison of these spaces is interesting, the primary purpose\nof the global view is to enable the selection of particular observations\nto explore in detail. We have designed it to enable a comparison between\nan observation that is interesting in some way, perhaps misclassified,\nor poorly predicted, relative to an observation with similar predictor\nvalues but a more expected prediction. For brevity, we call the\ninteresting observation the primary investigation (PI), and the other is\nthe comparison investigation (CI). These observations are highlighted as\nan asterisk and an \u00d7, respectively."
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Radial Tour",
39
+ "text": "The radial tour is used to explore how the SHAP value of a variable relates to its effect on the predicted value. In a similar way as explained in Section 3 ###reference_###, where the radial tour is used to understand a variable\u2019s contribution to cluster structure, for model prediction explanations, the radial tour is used to understand a variable\u2019s contribution to the observation\u2019s predicted value. By altering the contribution using the radial tour, we see how the predicted value might change. If a small change in the variable contribution results in a big change in predicted value, then this variable substantially explains the model prediction. The SHAP values are estimates of the local importance, and provide a good starting place from which to begin a radial tour. They can be misleading, and the radial tour can help to assess the strength of the explanatory power of the SHAP value. Because the SHAP values are local, using linear projections to explore a local neighborhood of a nonlinear model is reasonable.\nThere are two plots in this part of the interface. The first (Figures\n3 ###reference_###e and 4 ###reference_###e) is a\ndisplay of the SHAP values for all observations. This will generally\ngive the global view of variables important for the fit as a whole, but\nit will also highlight observations that have different patterns. The\nsecond plot is the radial tour, which for classification is a density\nplot of a 1D projection (Figure 3 ###reference_###f), and for\nregression are scatterplots of the observed response values, and\nresiduals, against a 1D projection (Figure 4 ###reference_###f).\nThe LVAs for all observations are normalized (sum of squares equals 1),\nand thus, the relative importance of variables can be compared across\nall observations. These are depicted as a vertical parallel coordinate\nplot [ocagne_coordonnees_1885]. (The SHAP values of the PI and CI\nare shown as dashed and dotted lines, respectively.) 
One should obtain a\nsense of the overall importance of variables from this plot. The more\nimportant variables will have larger values, and in the case of\nclassification tasks variables that have different magnitudes for\ndifferent classes are more globally important. For example, Figure\n3 ###reference_###e suggests that bl is important for\ndistinguishing the green class from the other two. For regression, one\nmight generally observe which variables have low values for all\nobservations (not important). For example, BMI and pwr\nin Figure 4 ###reference_###e have low values for all observations,\nwhile other variables have a range of high and low values\n(e.g., off, def), suggesting they are important for\nsome observations and not important for others.\nA bar chart is overlaid to represent the projection shown in the radial\ntour on the right. It starts from the SHAP values of the PI, but if the\nuser changes the projection the length of these bars will reflect this\nchange. By scaling the SHAP value it becomes an (attribution)\nprojection.\nThe attribution projection of the PI is the initial 1D basis in a radial\ntour, displayed as a density plot for a categorical response (Figure\n3 ###reference_###f) and as scatterplots for a quantitative\nresponse (Figure 4 ###reference_###f). The PI and CI are indicated\nby vertical dashed and dotted lines, respectively. The radial tour\nvaries the contribution of the selected variable. This is viewed as an\nanimation of the projections from many intermediate bases. Doing so\ntests the sensitivity of structure (class separation or strength of\nrelationship) to the variable\u2019s contribution. The attribution of\nthe CI does not impact the bases but is highlighted for context.\nFor classification, if the separation between classes diminishes when the variable contribution is\nreduced, this suggests that the variable is important for class\nseparation. 
For regression, if the relationship in the scatterplot weakens when\nthe variable contribution is reduced, this indicates that the variable is\nimportant for accurately predicting the response.\nThe purpose of using both the PI and CI when using the radial tour is comparison. Remember the CI is a representative individual with an expected prediction (correct class or small residual) and the PI is a particularly interesting individual with a less expected prediction. The radial tour would start from the attribution projection corresponding to the SHAP values of the PI, and vary the contribution of a variable where the SHAP values differ from those of the CI. The goal is then to examine how the model prediction would change for the PI if the variable contribution changed, to be more similar to that of the CI."
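The normalization and projection steps above (an LVA scaled so its sum of squares equals 1, then used as a 1D basis for all observations) can be sketched as follows. This is a hypothetical minimal Python version with assumed names; the application computes it from the tree SHAP values.

```python
import math

def attribution_projection(lva_row, data):
    """Normalize one observation's LVA into a unit 1D basis and
    project every row of `data` onto it."""
    norm = math.sqrt(sum(v * v for v in lva_row))
    basis = [v / norm for v in lva_row]
    # 1D projection: inner product of each observation with the basis.
    return [sum(x * b for x, b in zip(row, basis)) for row in data]
```

The resulting 1D values are what the density plot (classification) or the horizontal axis of the scatterplots (regression) display.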
40
+ },
41
+ {
42
+ "section_id": "4.3",
43
+ "parent_section_id": "4",
44
+ "section_name": "Classification Task",
45
+ "text": "Selecting a misclassified observation as PI and a correctly classified\npoint nearby in data space as CI makes it easier to examine the\nvariables most responsible for the error. The global view (Figure\n3 ###reference_###c) displays the model confusion matrix. The\nradial tour is 1D and displays as density where color indicates class.\nAn animation slider enables users to vary the contribution of variables\nto explore the sensitivity of the separation to that variable.\n###figure_3###"
46
+ },
47
+ {
48
+ "section_id": "4.4",
49
+ "parent_section_id": "4",
50
+ "section_name": "Regression Task",
51
+ "text": "Selecting an inaccurately predicted observation as PI and an accurately\npredicted observation with similar variable values as CI is a helpful\nway to understand how the model is failing or not. The global view\n(Figure 4 ###reference_###a) shows a scatterplot of the observed\nvs predicted values, which should exhibit a strong relationship if the\nmodel is a good fit. The points can be colored by a statistic, residual,\na measure of outlyingness (log Mahalanobis distance), or correlation to\naid in understanding the structure identified in these spaces.\nIn the radial tour view, the observed response and the residuals\n(vertical) are plotted against the attribution projection of the PI\n(horizontal). The attribution projection can be interpreted similarly to\nthe predicted value from the global view plot. It represents a linear\ncombination of the variables, and a good fit would be indicated when\nthere is a strong relationship with the observed values. This can be\nviewed as a local linear approximation if the fitted model is nonlinear.\nAs the contribution of a variable is varied, if the value of the PI does\nnot change much, it would indicate that the prediction for this\nobservation is NOT sensitive to that variable. Conversely, if the\npredicted value varies substantially, the prediction is very sensitive\nto that variable, suggesting that the variable is very important for the\nPI\u2019s prediction.\n###figure_4###"
52
+ },
53
+ {
54
+ "section_id": "4.5",
55
+ "parent_section_id": "4",
56
+ "section_name": "Interactive Variables",
57
+ "text": "The application has several reactive inputs that affect the data used,\naesthetic display, and tour manipulation. These reactive inputs make the\nsoftware flexible and extensible (Figure 3 ###reference_###a\n& d). The application also has more exploratory interactions to help\nlink points across displays, reveal structures found in different\nspaces, and access the original data.\nA tooltip displays the observation number/name and classification\ninformation while the cursor hovers over a point. Linked brushing allows\nthe selection of points (left click and drag) where those points will be\nhighlighted across plots (Figure 4 ###reference_###a & b).\nThe information corresponding to the selected points is populated on a\ndynamic table (Figure 4 ###reference_###d). These interactions\naid the exploration of the spaces and, finally, the identification of\nprimary and comparison observations."
58
+ },
59
+ {
60
+ "section_id": "4.6",
61
+ "parent_section_id": "4",
62
+ "section_name": "Preprocessing",
63
+ "text": "It is vital to mitigate the render time of visuals, especially when\nusers may want to iterate many explorations. All computational\noperations should be prepared before run time. The work remaining when\nthe application is run is solely reacting to inputs and rendering visuals and\ntables. The steps and details of the preprocessing are discussed below.\nData: predictors and response are an unscaled, complete numerical\nmatrix. Most models and local explanations are scale-invariant. Keep\nthe normality assumptions of the model in mind.\nModel: any model and compatible explanation could be explored\nwith this method. Currently, random forest models are applied via the\npackage randomForest [liaw_classification_2002],\ncompatible with tree SHAP. Modest hyperparameters are used. Namely,\nclassification models use 125 trees, number of variables at each\nsplit (mtry) of , and minimum terminal node size of\n. While regression models use 125 trees, variables at split, and\n minimum terminal node size.\nLocal explanation: Tree SHAP is calculated for each\nobservation using the package treeshap\n[kominsarczyk_treeshap_2023]. We opt to find the attribution of\neach observation in the training data and do not fit variable\ninteractions.\nCheem viewer: after the model and full explanation space are\ncalculated, each variable is scaled by standard deviations away from\nthe mean to achieve common support for visuals. Statistics for mapping\nto color are computed on the scaled spaces.\nThe time to preprocess the data will vary significantly with the\ncomplexity of the model and the LE. For reference, the FIFA data\ncontained 5000 observations of nine explanatory variables that took 2.5\nseconds to fit a random forest model of modest hyperparameters.\nExtracting the tree SHAP values of each observation took 270 seconds in\ntotal. PCA and statistics of the variables and attributions took 2.8\nseconds. 
These run times were from a non-parallelized session on a\nmodern laptop, but suffice it to say that most of the time will be spent\non the LVA. An increase in model complexity or data dimensionality will\nquickly become an obstacle. Its reduced computational complexity makes\ntree SHAP an excellent candidate to start with. Alternatively, some packages\nand methods use approximate calculations of LEs, such as\nfastshap greenwell_fastshap_2020."
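The common-support scaling step described above ("standard deviations away from the mean") amounts to column-wise standardization. A minimal sketch with assumed names; the viewer applies this to both the data and attribution spaces.

```python
import statistics

def standardize(columns):
    """Scale each column to standard deviations from its mean."""
    out = []
    for col in columns:
        mu = statistics.fmean(col)
        sd = statistics.pstdev(col)  # population SD; sketch assumes sd > 0
        out.append([(v - mu) / sd for v in col])
    return out
```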
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Case Studies",
69
+ "text": "To illustrate the cheem method, it is applied to modern data sets: two\nclassification examples and then two regression examples."
70
+ },
71
+ {
72
+ "section_id": "5.1",
73
+ "parent_section_id": "5",
74
+ "section_name": "Palmer Penguins, Species\nClassification",
75
+ "text": "The Palmer penguins data\n[gorman_ecological_2014, horst_palmerpenguins_2020] was collected\non three species of penguins foraging near Palmer Station, Antarctica.\nThe data is publicly available to substitute for the overly-used iris\ndata and is quite similar in form. After removing incomplete\nobservations, there are 333 observations of four physical measurements,\nbill length (bl), bill depth (bd), flipper length\n(fl), and body mass (bm) for this illustration. A\nrandom forest model was fit with species as the response variable.\n###figure_5### Figure 5 ###reference_### shows plots from the cheem viewer for\nexploring the random forest model on the penguins data. Panel (a) shows\nthe global view, and panel (b) shows several 1D projections generated\nwith the radial tour. Penguin 243, a Gentoo (purple), is the PI because\nit has been misclassified as a Chinstrap (orange).\n###figure_6### There is more separation visible in the attribution space than in the\ndata space, as would be expected. The predicted vs observed plot reveals\na handful of misclassified observations. A Gentoo which has been wrongly\nlabeled as a Chinstrap is selected for illustration. The PI is a\nmisclassified point (represented by the asterisk in the global view and\na dashed vertical line in the tour view). The CI is a correctly\nclassified point (represented by an \u00d7 and a vertical dotted\nline).\nThe radial tour is used here to examine which variable most contributed to the incorrect classification of the PI, to understand why the model\u2019s prediction differed from that of the CI. It starts from the attribution projection of the\nmisclassified observation (b, left). The important variables identified\nby SHAP in the (wrong) prediction for this observation are mostly\nbl and bd with small contributions of fl and\nbm. This projection is a view where the Gentoo (purple) looks\nmuch more likely for this observation than Chinstrap. 
That is, this\ncombination of variables is not particularly useful because the PI looks\nvery much like other Gentoo penguins. The radial tour is used to vary\nthe contribution of flipper length (fl) to explore this. (In\nour exploration, this was the third variable explored. It is typically\nhelpful to explore the variables with more significant contributions,\nhere bl and bd. Still, when doing this, nothing was\nrevealed about how the PI differed from other Gentoos.) On varying\nfl, as it contributes increasingly to the projection (b,\nright), this penguin looks more and more like a Chinstrap. This\nsuggests that fl should be considered an important variable for\nexplaining the (wrong) prediction.\nFigure 6 ###reference_### confirms that flipper length\n(fl) is vital for the confusion of the PI as a Chinstrap. Here,\nflipper length and body length are plotted, and the PI can be seen to be\ncloser to the Chinstrap group in these two variables, mainly because it\nhas an unusually low value of flipper length relative to other Gentoos.\nFrom this view, it makes sense that it is a hard observation to account\nfor, as decision trees can only partition on vertical and horizontal\nlines."
76
+ },
77
+ {
78
+ "section_id": "5.2",
79
+ "parent_section_id": "5",
80
+ "section_name": "Chocolates, Milk/Dark\nClassification",
81
+ "text": "The chocolates data set consists of 88 observations of ten nutritional\nmeasurements determined from their labels and labeled as either milk or\ndark. Dark chocolate is considered healthier than milk. Students\ncollected the data during the Iowa State University class STAT503 from\nnutritional information on the manufacturer\u2019s websites, and the values were\nnormalized to 100g equivalents. The data is available in the\ncheem package. A random forest model is used for the\nclassification of chocolate types.\nIt could be interesting to examine the nutritional properties of any\ndark chocolates that have been misclassified as milk. A reason to do\nthis is that a dark chocolate, nutritionally more like milk, should not\nbe considered a healthy alternative. It is interesting to explore which\nnutritional variables contribute most to the misclassification.\n###figure_7### This type of exploration is shown in Figure 7 ###reference_###,\nwhere a chocolate labeled dark but predicted to be milk is chosen as the\nPI (observation 22). It is compared with a CI that is a correctly\nclassified dark chocolate (observation 7). The PCA plot and the tree\nSHAP PCA plots (a) show a big difference between the two chocolate types\nbut with confusion for a handful of observations. The misclassifications\nare more apparent in the observed vs predicted plot and can be seen to\nbe mistaken in both ways: milk to dark and dark to milk.\nThe attribution projection for chocolate 22 suggests that Fiber, Sugars,\nand Calories are most responsible for its incorrect prediction. The way\nto read this plot is to see that Fiber has a large negative value while\nSugars and Calories have reasonably large positive values. In the\ndensity plot, observations on the very left of the display would have\nhigh values of Fiber (matching the negative projection coefficient) and\nlow values of Sugars and Calories. The opposite interpretation applies to a\npoint with high values in this plot. 
The dark chocolates (orange) are\nprimarily on the left, and this is a reason why they are considered to\nbe healthier: high fiber and low sugar. The density of milk chocolates\nis further to the right, indicating that they generally have low fiber\nand high sugar.\nThe PI (dashed line) can be viewed against the CI (dotted line). Now,\none needs to pay attention to the parallel coordinate plot of the SHAP values,\nwhich are local to a particular observation, and the density plot, which\nis the same projection of all observations as specified by the SHAP\nvalues of the PI. The variable contribution of the two different\npredictions can be quickly compared in the parallel coordinate plot. The\nPI differs from the comparison primarily on the Fiber variable, which\nsuggests that this is the reason for the incorrect prediction.\nFrom the density plot, which is the attribution projection corresponding\nto the PI, both observations are more like dark chocolates. Using the radial tour to vary the contribution of Sugars results in it being removed and replaced by Fiber, and the reason for the wrong classification becomes apparent. In this 1D projection observation 22 is more similar to milk chocolates, suggesting that Fiber is the culprit for the model mistakenly seeing it as a milk chocolate.\nIt would also be interesting to explore an inverse misclassification, where a milk chocolate is misclassified as a dark chocolate. Chocolate 84 is selected and is compared with a correctly predicted milk chocolate (observation 71). The corresponding\nglobal view and radial tour frames are shown in Figure\n8 ###reference_###.\n###figure_8### Comparing the attributions of the PI and the CI, large differences in the values of Sodium and Fiber can be seen. The contribution of Sodium is\nselected to be varied in the radial tour. From the density plot of the initial attribution projection, the PI is equally likely to be milk or dark\nchocolate. 
When the contribution of Sodium is increased, the balance shifts, and the PI is more likely to be correctly considered to be a milk chocolate. This supports that the model prediction was erroneous because it didn\u2019t adequately consider the value of Sodium in making the prediction."
82
+ },
83
+ {
84
+ "section_id": "5.3",
85
+ "parent_section_id": "5",
86
+ "section_name": "FIFA, Wage Regression",
87
+ "text": "The 2020 season FIFA data [leone_fifa_2020, biecek_dalex_2018]\ncontains many skill measurements of soccer/football players and wage\ninformation. Nine higher-level skill groupings were identified and\naggregated from highly correlated variables. A random forest model is\nfit from these predictors, regressing on player wages [2020 euros]. The\nmodel was fit from 5000 observations before being thinned to 500 players\nto mitigate occlusion and render time. Continuing from the information\nin Section 2 ###reference_###, we are interested to\nsee the difference in attribution based on what is known about different players, that is, a leading offensive fielder (L. Messi) as compared with a top defensive fielder (V. van Dijk). (These same observations were shown in Figure 1 ###reference_###.) With the radial tour we can explore how these players\u2019 wages might be predicted if their skill sets were different.\n###figure_9### Figure 9 ###reference_### tests the support of the LVA for the PI (Messi). The contribution from def is varied in the radial tour, in contrast to\noffensive skills (off). As the contribution of defensive skills increases,\nMessi\u2019s wage plummets. Messi\u2019s predicted wage would be much lower if defensive skills played a larger role in the prediction; the model reinforces that he is clearly not getting paid for his ability to defend.\nAlthough we don\u2019t show it here, offensive and reaction (rct) skills are both crucial to explaining the star offensive player\u2019s predicted wage. If the contribution of either is changed, the other substitutes! That is, when the radial tour is used, rotating one variable out results in the other rotating in, and the predicted wage does not change, remaining in a far-right location in the plot. Some change in predicted wage is seen if instead the contribution of a variable with low importance is varied.\n###figure_10###"
88
+ },
89
+ {
90
+ "section_id": "5.4",
91
+ "parent_section_id": "5",
92
+ "section_name": "Ames Housing, Sales Price\nRegression",
93
+ "text": "Ames housing data [de_cock_ames_2011] was\nsubset to North Ames, with 338 house sales. A random forest model was fit, predicting\nthe sale price [USD] from the property variables: Lot Area\n(LtA), Overall Quality (Qlt), Year the house was Built\n(YrB), Living Area (LvA), number of Bathrooms\n(Bth), number of Bedrooms (Bdr), the total number of\nRooms (Rms), Year the Garage was Built (GYB), and\nGarage Area (GrA). Using interactions with the global view, a\nhouse with an extreme negative residual and an accurately predicted observation with\na similar prediction are selected.\nFigure 10 ###reference_### illustrates the exploration of the model predictions for house sale 74 (PI), which is under-valued by the model. The CI has a similar predicted price, though its prediction was accurate. The SHAP values for the PI and CI have very different values of Lot Area. The attribution projection would give the PI a higher value than the CI, suggesting that the Lot Area value is important for the predicted value of the PI but not for that of the CI. As the\ncontribution of Lot Area is decreased in the radial tour, the predicted value of the PI decreases while that of the CI increases. It is quite interesting that the SHAP value picks up the importance of Lot Area, while the model appears not to use this variable adequately. For the attribution projection, with a large contribution from Lot Area, the PI is better predicted than in the model, and would have a smaller residual."
94
+ },
95
+ {
96
+ "section_id": "6",
97
+ "parent_section_id": null,
98
+ "section_name": "Discussion",
99
+ "text": "There is a clear need to provide more tools to interpret black box\nmodels. Techniques such as SHAP, LIME, and Break-down\ncalculate LEs for each observation in the data. They estimate\nhow important variables are for the model\u2019s prediction of a single observation.\nThis paper has provided additional interactive graphics tools to utilize LEs to explore and understand model predictions. Several diagnostic plots\nare provided to assist with understanding the sensitivity of a\nprediction to particular variables. A global view shows the data space,\nexplanation space, and residual plot, to get an overview of the distribution of LEs across all observations. The user can interactively select\nobservations to compare, contrast, and study further. The LE is converted into an LVA (linear projection) where the radial tour can be used to understand the prediction\u2019s sensitivity to a particular variable.\nThis approach has been illustrated using four data examples of random\nforest models with the tree SHAP LVA. LEs focus on the model fit and\nhelp to dissect which variables are most responsible for the fitted\nvalue. They can also form the basis of learning how the model has got it\nwrong, when the observation is misclassified or has a large residual.\nIn the penguins example, we showed how the misclassification of a\npenguin arose due to it having an unusually small flipper size compared\nto others of its species. This was verified by making a follow-up plot\nof the data. The chocolates example shows how a dark chocolate was\nmisclassified primarily due to its attribution to Fiber, and a milk\nchocolate was misclassified as dark due to its lowish Sodium value. In\nthe FIFA example, we show how low Messi\u2019s salary would be if it depended\non his defensive skills. 
In the Ames housing data, an inaccurate\nprediction for a house was likely due to the lot area not being\neffectively used by the random forest model.\nThis analysis is manually intensive and thus only feasible for\ninvestigating a few observations. The recommended approach is to\ninvestigate an observation where the model has not predicted accurately\nand compare it with an observation with similar predictor values where\nthe model fitted well. The radial tour launches from the attribution\nprojection to enable exploration of the sensitivity of the prediction to\nany variable. It can be helpful to make additional plots of the\nvariables and responses to cross-check interpretations made from the\ncheem viewer. This methodology provides an additional tool in the box\nfor studying model fitting.\nThese tools work better for smaller data, because being able to interact with the plots is necessary. XAI has been developed to tackle large data. To work with bigger data sets, would involve subsetting it after modeling and computing the LEs, to keep a representative sample of well-fitted observations, along with the observations that are especially interesting to investigate.\nThere are many additional future directions for this work. Primarily, development should make it easier to focus on what can be learned from the LEs, to be able to compare different versions, to flag or annotate values, and output of log the results of interactive analysis."
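As a concrete illustration of the local explanations (LEs) discussed above, the sketch below computes exact Shapley values for a toy model by enumerating all feature orderings, with absent features held at a fixed baseline. This is only the general permutation definition, written in Python for brevity; it is not the tree SHAP algorithm used by cheem, and the model and names are hypothetical.

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature orderings.

    Absent features are fixed at `baseline` values (a simple, common
    convention; tree SHAP instead exploits the model's tree structure).
    """
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = x[i]          # reveal feature i
            val = model(current)
            phi[i] += val - prev       # marginal contribution of i
            prev = val
    return [p / len(perms) for p in phi]
```

By construction the attributions satisfy the efficiency property: they sum to the difference between the prediction at `x` and at the baseline, which is what makes them usable as a linear variable attribution.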
100
+ },
101
+ {
102
+ "section_id": "7",
103
+ "parent_section_id": null,
104
+ "section_name": "Package Infrastructure",
105
+ "text": "An implementation is provided in the open-source R package\ncheem, available on CRAN [spyrison_cheem_2023]. Example data sets are\nprovided. You can upload your own data after model fitting and computing\nthe LVAs. The LVAs need to be pre-computed, possibly using the cheem_ls() function, and saved as an rds file. Examples show\nhow to do this for tree SHAP values, using treeshap (tree-based\nmodels from gbm, lightgbm, randomForest,\nranger, or xgboost [greenwell_gbm_2020;\nshi_lightgbm_2022; liaw_classification_2002;\nwright_ranger_2017; chen_xgboost_2021], respectively).\nThe SHAP and oscillation explanations could easily be added using\nDALEX::explain()\n[biecek_dalex_2018, biecek_explanatory_2021].\nThe application was made with shiny [chang_shiny_2021].\nThe tour visual is built with spinifex\n[spyrison_spinifex_2020]. Both views are created first with\nggplot2 [wickham_ggplot2_2016] and then rendered as\ninteractive html widgets with plotly\n[sievert_interactive_2020]. DALEX\n[biecek_dalex_2018] and Explanatory Model Analysis\n[biecek_explanatory_2021] are helpful for understanding LEs and\nhow to apply them.\nThe package can be installed from CRAN, and the application can be run\nusing the following R code:\nA version of the cheem application can be accessed at https://nicholas-spyrison.shinyapps.io/cheem/ ###reference_em/###, the development version of the package is available at https://github.com/nspyrison/cheem ###reference_###, and documentation of the package can be found at https://nspyrison.github.io/cheem/ ###reference_###.\nAcknowledgments\nKim Marriott provided advice on many aspects of this work, especially on\nthe explanations in the applications section. This research was\nsupported by the Australian Government Research Training Program (RTP)\nscholarships. Thanks to Jieyang Chong for helping proofread this\narticle. The namesake, Cheem, refers to a fictional race of humanoid\ntrees from Doctor Who lore. 
DALEX also pulls from that universe,\nand we initially apply tree SHAP explanations specific to tree-based\nmodels."
106
+ }
107
+ ],
108
+ "appendix": [],
109
+ "tables": {},
110
+ "image_paths": {
111
+ "1": {
112
+ "figure_path": "2205.05359v3_figure_1.png",
113
+ "caption": "Figure 1: Illustration of SHAP values for a random forest model predicting FIFA 2020 player wages from nine skill predictors. A star offensive player and a star defensive player are compared: L. Messi and V. van Dijk, respectively. Panel (a) shows breakdown plots of three sequences of the variables. The sequence of the variables impacts the magnitude of their attribution. Panel (b) shows the distribution of attribution for each variable across 25 sequences of predictors, with the mean displayed as a dot for each player. Reaction skills are important for both players. Offense and movement are important for Messi but not van Dijk, and conversely, defense and power are important for van Dijk but not Messi.",
114
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/shap_distr_bd.png"
115
+ },
116
+ "2": {
117
+ "figure_path": "2205.05359v3_figure_2.png",
118
+ "caption": "Figure 2: The radial tour allows the user to remove a variable from a projection, to examine the importance of this variable to the structure in the plot. Here we have a 1D projection of the penguins data displayed as a density plot. The line segments on the bottom correspond to the coefficients of the variables making up the projection. The structure in the plot is bimodality (left), and the importance of the variable bd is being explored. As this variable\u2019s contribution is reduced in the plot (middle, right), we can see that the bimodality decreases. Thus bd is an important variable contributing to the bimodal structure.",
119
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/radial_tour.png"
120
+ },
121
+ "3": {
122
+ "figure_path": "2205.05359v3_figure_3.png",
123
+ "caption": "Figure 3: Overview of the cheem viewer for classification tasks (categorical response). Global view inputs, (a), set the PI, CI, and color statistic. Global view, (b) PC1 by PC2 approximations of the data- and attribution-space. (c) prediction by observed y (visual of the confusion matrix for classification tasks). Points are colored by predicted class, and red circles indicate misclassified observations. Radial tour inputs (d) select variables to include and which variable is changed in the tour. (e) shows a parallel coordinate display of the distribution of the variable attributions while bars depict contribution for the current basis. The black bar is the variable being changed in the radial tour. Panel (f) is the resulting data projection indicated as density in the classification case.",
124
+ "url": "http://arxiv.org/html/2205.05359v3/x1.png"
125
+ },
126
+ "4": {
127
+ "figure_path": "2205.05359v3_figure_4.png",
128
+ "caption": "Figure 4: Overview of the cheem viewer for regression tasks (quantitative response) and illustration of interactive variables. Panel (a) PCA of the data- and attribution-spaces and the (b) observed vs predicted values. Four selected points are highlighted in the PC spaces and tabularly displayed. Coloring on a statistic (c) highlights the structure organized in the attribution space. Interactive tabular display (d) populates when observations are selected. Contribution of the 1D basis affecting the horizontal position (e) parallel coordinate display of the variable attribution from all observations, and horizontal bars show the contribution to the current basis. Regression projection (f) uses the same horizontal projection and fixes the vertical positions to the observed y and residuals (middle and right).",
129
+ "url": "http://arxiv.org/html/2205.05359v3/x2.png"
130
+ },
131
+ "5": {
132
+ "figure_path": "2205.05359v3_figure_5.png",
133
+ "caption": "Figure 5: Examining the SHAP values for a random forest model classifying Palmer penguin species. The PI is a Gentoo (purple) penguin that is misclassified as a Chinstrap (orange), marked as an asterisk in (a) and the dashed vertical line in (b). The radial view shows varying the contribution of \u2018fl\u2018 from the initial attribution projection (b, left), which produces a linear combination where the PI is more probably (higher density value) a Chinstrap than a Gentoo (b, right). (The animation of the radial tour is at https://vimeo.com/666431172.)",
134
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/case_penguins.png"
135
+ },
136
+ "6": {
137
+ "figure_path": "2205.05359v3_figure_6.png",
138
+ "caption": "Figure 6: Checking what is learned from the cheem viewer. This is a plot of flipper length (\u2018fl\u2018) and bill length (\u2018bl\u2018), where an asterisk highlights the PI. A Gentoo (purple) misclassified as a Chinstrap (orange). The PI has an unusually small \u2018fl\u2018 length which is why it is confused with a Chinstrap.",
139
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/case_penguins_BlFl.png"
140
+ },
141
+ "7": {
142
+ "figure_path": "2205.05359v3_figure_7.png",
143
+ "caption": "Figure 7: Examining the LVA for a PI which is dark (orange) chocolate incorrectly predicted to be milk (green). From the attribution projection, this chocolate correctly looks more like dark than milk, which suggests that the LVA does not help understand the prediction for this observation. So, the contribution of Sugar is varied\u2014reducing it corresponds primarily with an increased magnitude from Fiber. When Sugar is zero, Fiber contributes strongly toward the left. In this view, the PI is closer to the bulk of the milk chocolates, suggesting that the prediction put a lot of importance on Fiber. This chocolate is a rare dark chocolate without any Fiber leading to it being mistaken for a milk chocolate. (A video of the tour animation can be found at https://vimeo.com/666431143.)",
144
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/case_chocolates.png"
145
+ },
146
+ "8": {
147
+ "figure_path": "2205.05359v3_figure_8.png",
148
+ "caption": "Figure 8: Examining the LVA for a PI which is a milk (green) chocolate incorrectly predicted to be a dark (orange). From the density plot of the attribution projection, the PI could equally likely be milk or dark, whereas the CI is more definitely milk. Sodium and Fiber have the largest differences in attributed variable importance, with values close to zero, instead of large negative values like other milk chocolates. The lack of importance attributed to these variables is suspected of contributing to the mistake. When the contribution of Sodium is changed, we see that if the model had used a larger contribution of Sodium to make the prediction, the PI would likely have been predicted to be a milk chocolate. (A video of the tour animation can be found at https://vimeo.com/666431148.)",
149
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/case_chocolates_inverse.png"
150
+ },
151
+ "9": {
152
+ "figure_path": "2205.05359v3_figure_9.png",
153
+ "caption": "Figure 9: Exploring the wages relative to skill measurements in the FIFA 2020 data. Star offensive player (L. Messi) is the PI, and he is compared with a top defensive player (V. van Dijk): (a) global view, (b) observed values vs linear combination of predictors (predicted values). The attribution projection produces a view where Messi has very high predicted (and observed) wages. Defense (\u2018def\u2018) is the chosen variable to vary. It starts very low, and Messi\u2019s predicted wages decrease dramatically as its contribution increases (right plot). The increased contribution in defense comes at the expense of offensive and reaction skills. The interpretation is that Messi\u2019s high wages are most attributable to his offensive and reaction skills, as initially provided by the LVA. (A video of the animated radial tour can be found at https://vimeo.com/666431163.)",
154
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/case_fifa.png"
155
+ },
156
+ "10": {
157
+ "figure_path": "2205.05359v3_figure_10.png",
158
+ "caption": "Figure 10: Exploring an observation with an extreme residual as the PI in relation to an observation with an accurate prediction for a similarly priced house in a random forest fit to the Ames housing data: (a) global view, (b) observed values vs linear combination of predictors (predicted values). The LVA indicates a sizable attribution to Lot Area (LtA), while the CI has minimal attribution to this variable. The PI has a higher predicted value than the CI in the attribution projection. Reducing the contribution of Lot Area brings these two prices in line. This suggests that if the model did not value Lot Area so highly for this observation, then the observed sales price would be quite similar. That is, the large residual in the model is due to a lack of factoring Lot Area into the prediction of PI\u2019s sales price. (A video showing the animation is at https://vimeo.com/666431134.)",
159
+ "url": "http://arxiv.org/html/2205.05359v3/extracted/5349320/figures/case_ames2018.png"
160
+ }
161
+ },
162
+ "validation": true,
163
+ "references": [],
164
+ "url": "http://arxiv.org/html/2205.05359v3"
165
+ }
20240119/2206.01409v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2206.11828v5.json ADDED
1
+ {
2
+ "title": "On the Complexity of Problems on Tree-structured Graphs",
3
+ "abstract": "In this paper, we introduce a new class of parameterized problems, which we call XALP: the class of all parameterized\nproblems that can be solved in f(k) n^{O(1)} time and f(k) log n space on a non-deterministic Turing Machine\nwith access to an auxiliary stack (with only top element lookup allowed).\nVarious natural problems on \u2018tree-structured graphs\u2019 are complete for this class: we show that List Colouring and All-or-Nothing Flow parameterized by treewidth are XALP-complete. Moreover, Independent Set and Dominating Set parameterized by treewidth divided by log n, and Max Cut parameterized by cliquewidth are also XALP-complete.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A central concept in complexity theory is completeness for a\nclass of problems. Establishing completeness of a problem for a class\npinpoints its difficulty, and gives implications on resources (time, memory or\notherwise) to solve the problem (often, conditionally on complexity\ntheoretic assumptions). The\nintroduction of the W-hierarchy by Downey and Fellows in the 1990s\nplayed an essential role in the analysis of the complexity of\nparameterized problems [19 ###reference_19###, 20 ###reference_20###, 21 ###reference_21###]. Still, several problems are\nsuspected not to be complete for a class in the W-hierarchy, and other\nclasses of parameterized problems with complete problems were introduced,\ne.g., the A-, AW-, and M-hierarchies. (See e.g., [1 ###reference_1###, 21 ###reference_21###, 26 ###reference_26###].)\nIn this paper, we introduce a new class of parameterized complexity,\nwhich appears to be the natural home of several \u2018tree structured\u2019 parameterized\nproblems.\nThis class, which we call XALP, can be seen as the parameterized\nversion of a class known in classic complexity theory as\nNAuxPDA[poly time, log space] (see [5 ###reference_5###]), or ASPSZ(log n, n^{O(1)}) [31 ###reference_31###].\nIt can also be seen as the \u2018tree\nvariant\u2019 of the class XNLP, which is the class of parameterized problems that can be solved by a non-deterministic Turing machine using f(k) log n space in f(k) n^{O(1)} time for some computable function f, where k denotes the parameter and n the input size.\nIt was introduced in 2015 by Elberfeld et al. [23 ###reference_23###]. 
Recently, several parameterized problems\nwere shown to be complete for XNLP [6 ###reference_6###, 11 ###reference_11###, 9 ###reference_9###]; in this collection,\nwe find many problems for \u2018path-structured graphs\u2019, including well known\nproblems that are in XP with pathwidth or other linear width measures\nas parameter, and linear ordering graph problems like Bandwidth.\nThus, we can view XALP as the \u2018tree\u2019 variant of XNLP and as such, we\nexpect that many problems known to be in XP (and expected not to be in FPT) when parameterized by treewidth will\nbe complete for this class.\nWe will prove the following problems to be XALP-complete in this paper:\nBinary CSP, List Colouring and All-or-Nothing Flow parameterized by treewidth;\nIndependent Set and Dominating Set parameterized by treewidth divided by log n, where n is the number of vertices of the input graph;\nMax Cut parameterized by cliquewidth.\nThe problems listed in this paper should be regarded as examples of a general technique,\nand we expect that many other problems parameterized by treewidth, cliquewidth and\nsimilar parameters will be XALP-complete.\nIn many cases, a simple modification of an XNLP-hardness proof with\npathwidth as parameter shows XALP-hardness for the same problem with treewidth as parameter.\nIn addition to pinpointing the exact\ncomplexity class for these problems, such results have further consequences.\nFirst, XALP-completeness implies XNLP-hardness, and thus hardness for\nall classes W[t], t \u2265 1. Second, a conjecture by\nPilipczuk and Wrochna [30 ###reference_30###], if true, implies that every algorithm for an XALP-complete problem that works in XP time (that is,\nn^{f(k)} time) cannot simultaneously use FPT space (that is,\nf(k) n^{O(1)} space). Indeed, typical XP algorithms for problems\non graphs of bounded treewidth use dynamic programming, with tables\nthat are of size n^{f(k)}."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Definitions",
15
+ "text": "We assume that the reader is familiar with a number of well-known notions from\ngraph theory and parameterized complexity,\ne.g., FPT, the W-hierarchy, clique, independent set, etc. (See e.g., [17 ###reference_17###].)\nA tree decomposition of a graph G is a pair (T, {X_t | t in V(T)}), with\nT a tree and {X_t | t in V(T)} a family of (not necessarily disjoint)\nsubsets of V(G) (called bags) such that the union of all bags equals V(G),\nfor all edges {u, v} of G, there is a t with u, v in X_t, and for all\nv in V(G), the nodes {t | v in X_t} form a connected subtree of T.\nThe width of a tree decomposition is\nmax_t |X_t| - 1, and the treewidth of a graph G\nis the minimum width over all tree decompositions of G. A\npath decomposition is a tree decomposition (T, {X_t | t in V(T)})\nwith T a path, and the pathwidth is the minimum width over all path\ndecompositions of G."
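The three conditions in the definition above can be checked mechanically. The following Python sketch (our own illustration; the names and data layout are hypothetical) verifies whether a candidate tree plus bags is a valid tree decomposition, and computes its width.

```python
from collections import deque

def is_tree_decomposition(n_vertices, edges, tree_edges, bags):
    """Check the three tree-decomposition conditions for a graph on
    vertices 0..n_vertices-1; `bags` maps tree node -> set of vertices."""
    # 1. Every vertex of G appears in some bag.
    covered = set().union(*bags.values()) if bags else set()
    if covered != set(range(n_vertices)):
        return False
    # 2. Every edge of G is contained in some bag.
    for u, v in edges:
        if not any(u in b and v in b for b in bags.values()):
            return False
    # 3. For each vertex, its occurrences form a connected subtree of T.
    adj = {t: [] for t in bags}
    for s, t in tree_edges:
        adj[s].append(t)
        adj[t].append(s)
    for v in range(n_vertices):
        nodes = [t for t, b in bags.items() if v in b]
        seen, queue = {nodes[0]}, deque([nodes[0]])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if v in bags[y] and y not in seen:
                    seen.add(y)
                    queue.append(y)
        if len(seen) != len(nodes):
            return False
    return True

def width(bags):
    """Width of a decomposition: largest bag size minus one."""
    return max(len(b) for b in bags.values()) - 1
```

For a path graph 0-1-2, the bags {0, 1} and {1, 2} on a two-node tree form a decomposition of width 1, matching the treewidth of a path.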
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Turing Machines and Classes",
21
+ "text": "We assume the reader to be familiar with the basic concept of a Turing Machine (TM).\nHere, we\nconsider TMs that have access to both a fixed input tape (where the machine can only read), and a work tape of specified size (where the machine can both\nread and write). We consider Non-deterministic Turing Machines (NTM), where\nthe machine can choose between different transitions, and accepts if at\nleast one choice of transitions leads to an accepting state, and\nAlternating Turing Machines (ATM), where the machine can both make\nnon-deterministic steps (accepting when at least one choice leads to\nacceptance), and co-non-deterministic steps (accepting when both choices\nlead to acceptance). We assume a co-non-deterministic step always makes a\nbinary choice, i.e., there are exactly two transitions that can be done.\nAcceptance of an ATM can be modelled by a rooted binary tree T, sometimes called a run or a computation tree of the machine. Each\nnode of T is labelled with a configuration of the machine: the 4-tuple consisting of the machine state, work tape contents, location of\nwork tape pointer, and location of input tape pointer. Each\nedge of T is labelled with a transition. The starting configuration\nis represented by the root of T.\nA node with one\nchild makes a non-deterministic step, and the arc is labelled with a\ntransition that leads to acceptance; a node with two children makes\na co-non-deterministic step, with the children the configurations after\nthe co-non-deterministic choice. Each leaf is a configuration with\nan accepting state. The time of the computation is the depth of\nthe tree; the treesize is the total number of nodes in this computation tree.\nFor more information, see e.g., [31 ###reference_31###, 30 ###reference_30###].\nA computation path is a path from root to leaf in the tree.\nWe also consider NTMs which additionally have access to an auxiliary stack. 
For those, a transition can also move the top element of the stack to\nthe current location of the work tape (\u2018pop\u2019), or put a symbol at the top of the stack (\u2018push\u2019). We stress that only the top element can be accessed or modified; the machine cannot freely read other elements on the stack.\nWe use the notation N[t(n) time, s(n) space] to denote languages recognisable by an NTM running in t(n) time with s(n) working space, and A[z(n) treesize, s(n) space] to denote languages recognisable by an ATM running in z(n) treesize with s(n) working space. We note that we are free to put the constraint that all runs have treesize at most z(n), since we can add a counter that keeps track of the number of remaining steps, and reject when this runs out (similarly to what is done in the proof of Theorem 3.1 ###reference_heorem1###).\nWe write NAuxPDA[t(n) time, s(n) space] to denote languages recognisable by an NTM with a stack (AUXiliary Push-Down Automaton) running in t(n) time with s(n) working space.\nRuzzo [31 ###reference_31###] showed that for any function s(n) \u2265 log n, NAuxPDA[n^{O(1)} time, s(n) space] = A[n^{O(1)} treesize, s(n) space]. Allender et al. [5 ###reference_5###] provided natural complete problems when s(n) = log^i n for all i \u2265 1 (via a circuit model called SAC, which we will not use in our paper). Our interest lies in the case s(n) = f(k) log n, where it turns out the parameterized analogue is the natural home of \u2018tree-like\u2019 problems.\nAnother related work by Pilipczuk and Wrochna [30 ###reference_30###] shows that there is a tight relationship between the complexity of 3-Colouring on graphs of bounded treedepth, pathwidth, or treewidth and problems that can be solved by TMs with adequate resources depending on these width measures."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "From classical to parameterized",
27
+ "text": "In this paper, we introduce the class XALP = NAuxPDA[f(k) n^{O(1)} time, f(k) log n space]. Following [11 ###reference_11###],\nwe use the name XNLP for the class N[f(k) n^{O(1)} time, f(k) log n space]; here f(k) is shorthand notation for a bound of the form f(k) for some computable function f, and n^{O(1)} shorthand notation for a polynomial in the input size n.\nThe crucial difference between the existing classical results and our results is that we consider parameterized complexity classes.\nThese classes are closed under parameterized reductions, i.e. reductions where the parameter of the reduced instance must be bounded by the parameter of the initial instance.\nIn our context, we have an additional technicality due to the relationship between time and space constraints.\nWhile a logspace reduction is also a polynomial time reduction,\na reduction using f(k) log n space (XL) could use up to\nn^{f(k)} time (XP).\nXNLP and XALP are closed under pl-reductions, where the space bound is f(k) + O(log n) (which implies FPT time), and under ptl-reductions running in f(k) n^{O(1)} time and f(k) log n space.\nWe now give formal definitions.\nA parameterized reduction\nfrom a parameterized problem Q1 to a parameterized problem Q2 is a function\ng such that the following holds.\nFor all instances (x, k), we have (x, k) in Q1 if and only if g(x, k) in Q2.\nThere is a computable function h such that for all (x, k), if g(x, k) = (x', k'), then k' \u2264 h(k).\nIf there is an algorithm that computes g in space f(k) + O(log r), with f a computable function and r the number of bits to denote (x, k), then the reduction is a parameterized logspace reduction or pl-reduction.\nIf there is an algorithm that computes g in time h(k) r^{O(1)} and space f(k) log r, with computable functions f, h and r the number of bits to denote (x, k), then the reduction is a parameterized tractable logspace reduction or ptl-reduction."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Equivalent characterisations of XALP",
33
+ "text": "In this section, we give a number of equivalent characterisations of XALP.\nThe following parameterized complexity classes are all equal.\nNAuxPDA[f(k) n^{O(1)} time, f(k) log n space], the class of parameterized decision\nproblems for which instances of size n with parameter k can be solved by a non-deterministic Turing machine with f(k) log n memory in f(k) n^{O(1)} time when given a stack, for some computable function f.\nThe class of parameterized decision\nproblems for which instances of size n with parameter k can be solved by an alternating Turing machine with f(k) log n memory whose computation tree is a binary tree on f(k) n^{O(1)} nodes, for some computable function f.\nThe class of parameterized decision\nproblems for which instances of size n with parameter k can be solved by an alternating Turing machine with f(k) log n memory whose computation tree is obtained from a binary tree of depth O(log n + log f(k)) by subdividing each edge f(k) n^{O(1)} times, for some computable function f.\nThe class of parameterized decision\nproblems for which instances of size n with parameter k can be solved by an alternating Turing machine with f(k) log n memory, for which the computation tree has size f(k) n^{O(1)} and uses O(log n + log f(k)) co-non-deterministic steps per computation path, for some computable function f.\nThe proof is similar to the equivalence proofs for the classical analogues, and added for convenience of the reader.\nWe prove the theorem by proving the series of inclusions 1 ###reference_1### \u2286 2 ###reference_2### \u2286 3 ###reference_3### \u2286 4 ###reference_4### \u2286 1 ###reference_1###.\n1 ###reference_1### \u2286 2 ###reference_2###. Consider a problem that can be solved by a non-deterministic Turing Machine M with a stack and f(k) log n memory in f(k) n^{O(1)} time. We will simulate M using an alternating Turing machine A.\nWe place three further assumptions on M, which can be implemented by changing the function f slightly if needed.\nThe Turing machine M has two counters. One keeps track of the height of the stack, and the other keeps track of the number of computation steps. 
A single computation step may involve several operations; we just need that the running time is polynomially bounded in the number of steps.\nWe assume that M only halts with acceptance when the stack is empty. (Otherwise, do not yet accept,\nbut pop the stack using the counter that tells the height of the stack, until the stack is empty.)\nEach pop operation performed by M is a deterministic step. This can be done by adding an extra state to M and splitting a non-deterministic step into a non-deterministic step and a deterministic step if needed.\nWe define a configuration as a tuple which includes the state of M, the value of the two pointers and the content of the memory. In particular, this does not contain the contents of the stack and so a configuration can be stored using O(f(k) log n) bits. (Note that the value of both pointers is bounded by f(k) n^{O(1)}.)\nWe will build a subroutine Check which works as follows.\nThe input (c1, c2) consists of two configurations c1, c2 with the same stack height.\nThe output is whether M has an accepting run from c1 to c2 without popping the top element from the stack in c1; the run may pop elements that have yet to get pushed.\nWe write Apply(c, POP(s)) for the configuration that is obtained when we perform a pop operation in configuration c and obtain s from the stack. This is only defined if M can do a pop operation in configuration c (e.g. it needs to contain something on the stack).\nWe define the configuration Apply(c, PUSH(s)) in a similar manner, where this time s gets pushed onto the stack.\nWe let A simulate M starting from the initial configuration c1 as follows.\nOur alternating Turing machine A will start with the following non-deterministic step: guess the configuration c2 in which M accepts\nat the end of the run. 
It then performs the subroutine Check(c1, c2).\nWe implement Check(c1, c2) as follows.\nA deterministic or non-deterministic step of M is carried out as usual.\nIf M is in some configuration c and wants to push s to the stack, then let c' = Apply(c, PUSH(s)) and let A perform a non-deterministic step that guesses a configuration d with the same stack height as c' for which the next step is to pop s (and the number of remaining computation steps is plausible). Let d' = Apply(d, POP(s)). We make A do a co-non-deterministic step consisting of two branches:\nA performs the subroutine Check(c', d).\nA performs the subroutine Check(d', c2).\nWe ensure that in configuration d, the number of steps taken is larger than in configuration c. This ensures that A will terminate.\nSince a configuration can be stored using O(f(k) log n) bits and A always stores at most a bounded number of configurations, A requires only O(f(k) log n) bits of memory.\nThe computation tree for A is binary.\nThe total number of nodes of the computation tree of A is f(k) n^{O(1)} since each computation step of M appears at most once in the tree (informally: our co-non-deterministic steps split up the computation path of M into two disjoint parts), and we have added at most a constant number of steps per step of M.\nTo see this, the computation tree of A may split a computation path of M into two parts: one branch will simulate Check(c', d) and the other branch will simulate Check(d', c2).\nAt most a constant number of additional nodes (e.g. the node which takes the co-non-deterministic step) are added to facilitate this. Importantly, the configurations implicitly store a number of remaining computation steps,\nand so A can calculate from these how many steps M is supposed to take to move between c1 and c2.\n2 ###reference_2### \u2286 3 ###reference_3###. 
The intuition behind this proof is to use that any m-vertex tree has a tree decomposition of bounded width and depth O(log m).\nLet B be an alternating Turing machine for some parameterized problem with a computation tree of size f(k) n^{O(1)} and f(k) log n bits of memory.\nWe build an alternating Turing machine B' that simulates B for which the computation tree is a binary tree which uses O(log n + log f(k)) co-non-deterministic steps per computation branch and O(f(k) log n) memory. We can after that ensure that there are exactly f(k) n^{O(1)} steps between any two co-non-deterministic steps by adding \u2018idle\u2019 steps if needed.\nWe ensure that B' always has advice in memory: 1 configuration for which B accepts. In particular, if a is the configuration stored as advice when B is in configuration c with a bound of m steps, then B' checks if B can get from c to a within m steps.\nWe also maintain a counter m for the number of remaining steps: the number of nodes that are left in the computation tree of B, when rooted at the current configuration c, not counting the node of c itself. In particular, the counter is 0 if c is supposed to be a leaf.\nWe let B' simulate B as follows. Firstly, if no advice is in memory, it makes a non-deterministic step to guess a configuration a as advice.\nSuppose that B is in configuration c with m steps left. We check the following in order. If c equals the advice, then we accept. If m = 0, then we reject.\nIf the next step of B is a non-deterministic or deterministic step, then we perform the same step.\nThe interesting things happen when B is about to perform a co-non-deterministic step starting from c with m steps left. If m < 2, then we reject: there is no space for such a step.\nOtherwise, we guess m1, m2 such that m1 + m2 = m - 1, and children c1, c2 of c in the computation tree of B. Renumbering if needed, we may assume that the advice a is supposed to appear in the subtree of c2. We also guess an advice a1 for c1.\nWe create a co-non-deterministic step with two branches, one for the computation starting from c1 with m1 steps, and the other from c2 with m2 steps. 
We describe how we continue the computation starting from c1; the case of c2 is analogous.\nRecall that some configuration a1 has been stored as advice. We want to ensure that the advice is limited to one configuration. First, we non-deterministically guess a configuration b. We non-deterministically guess whether b is an ancestor of a1. We perform different computation depending on the outcome.\nSuppose that we guessed that b is an ancestor of a1. We guess integers m1', m2' with m1' + m2' = m1. We do a co-non-deterministic step: one branch starts in c1 with b as advice and m1' steps, the other branch starts in b with a1 as advice and m2' steps.\nSuppose that b is not an ancestor of a1. We guess a configuration d, corresponding to the least common ancestor of b and a1 in the computation tree. We guess integers m1', m2', m3', m4' with m1' + m2' + m3' + m4' = m1. We perform a co-non-deterministic branch to obtain four subbranches: starting in c1 with d as advice and m1' steps, in d with a1 as advice and m2' steps, starting in d with b as advice and m3' steps and starting in b with no advice and m4' steps.\nIn order to turn our computation tree into a binary tree, we may choose to split the single co-non-deterministic step into two steps.\nSince at any point, we store at most a constant number of configurations, this can be performed using O(f(k) log n) bits in memory.\nIt remains to show that B' performs O(log n + log f(k)) co-non-deterministic steps per computation path. The computation of B' starts with a counter for the number of steps which is at most f(k) n^{O(1)}; every time B' performs a co-non-deterministic step, this counter is multiplied by a factor of at most some constant smaller than 1 (we may reject unbalanced guesses, and a split in which each part contains at most a constant fraction of the nodes always exists). The claim now follows from the fact that log(f(k) n^{O(1)}) = O(log n + log f(k)).\n3 ###reference_3### \u2286 4 ###reference_4###. Let B be an alternating Turing machine using f(k) log n memory whose computation fits in a tree obtained from a binary tree of depth O(log n + log f(k)) by subdividing each edge f(k) n^{O(1)} times. Then B uses f(k) n^{O(1)} time (with possibly a different constant in the O-term) and performs at most O(log n + log f(k)) co-non-deterministic steps per computation path. Hence this inclusion is immediate.\n4 ###reference_4### \u2286 1 ###reference_1###. 
We may simulate the alternating Turing machine using a non-deterministic Turing machine with a stack as follows. Each time we wish to do a co-non-deterministic branch, we put the current configuration onto our stack and continue to the left child of . Once we have reached an accepting state, we pop an element off the stack and continue to the right child of . The total computation time is bounded by the number of nodes in the computation tree and the memory requirement does not increase by more than a constant factor. (Note that in particular, our stack will never contain more than elements.)\nAlready in the classical setting, it is expected that NL A[poly treesize, log space]. We stress the fact that this would imply XNLP XALP, since we can always ignore the parameter. It was indeed noted in [5 ###reference_5###, Corollary 3.13] that the assumption NL A[poly treesize, log space] separates the complexity of SAT instances of logarithmic pathwidth from SAT instances of logarithmic treewidth. Allender et al. [5 ###reference_5###] formulate this result in terms of SAC instead of the equivalent A[poly treesize, log space]. We expect that a parameterized analogue of SAC can be added to the equivalent characterisation above, but decided not to pursue this here. The definition of such a circuit class requires a notion of 'uniformity' that ensures that the circuits have a 'small description', which makes it more technical."
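The stack discipline just described can be sketched on an abstract computation tree: descend left at each universal branch, push the right child, and on reaching an accepting leaf pop and resume. The representation (`children` map, `accepting` predicate) is our assumption, not the paper's:

```python
def all_branches_accept(children, accepting, root):
    """Check that every leaf of a binary computation tree is accepting,
    using an explicit stack of postponed right branches."""
    stack = []
    node = root
    while True:
        kids = children.get(node, [])
        if not kids:                 # leaf: it must accept
            if not accepting(node):
                return False
            if not stack:            # nothing postponed: all branches done
                return True
            node = stack.pop()       # resume a postponed right branch
        elif len(kids) == 1:         # deterministic / existential step
            node = kids[0]
        else:                        # universal (co-non-deterministic) step
            left, right = kids
            stack.append(right)      # postpone the right branch
            node = left
```

The stack never holds more entries than the number of universal steps on the current root-to-leaf path, matching the memory bound in the proof.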
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "XALP-completeness for a tree-chained variant of Multicolour Clique",
+ "text": "Our first XALP-complete problem is a 'tree' variant of the well-known Multicolour Clique problem.\nTree-Chained Multicolour Clique\n\n\nInput: A binary tree , an integer , and for each , a collection of pairwise disjoint sets of vertices , and a graph with vertex set .\nParameter: .\nQuestion: Is there a set of vertices such that contains exactly one vertex from each (), and for each pair with or , , , the vertex in is adjacent to the vertex in ?\nThis problem is the XALP analogue of the XNLP-complete problem Chained Multicolour Clique, in which the input tree is a path instead. This change of 'path-like' computations to 'tree-like' computations is typical when going from XNLP to XALP.\nFor the Tree-Chained Multicolour Independent Set problem, we have a similar input and question, except that we ask for the vertex in and the vertex in not to be adjacent.\nIn both cases, we may assume that edges of the graphs are only between vertices of and with or , , .\nWe call a set of vertices satisfying the respective conditions above a tree-chained multicolour clique (resp. independent set).\nMembership of these problems in XNLP seems unlikely, since it is difficult to handle the 'branching' of the tree. However, in XALP this is easy to do using the co-non-deterministic steps, and indeed membership follows quickly.\nTree-chained Multicolour Clique is in XALP.\nWe simply traverse the tree with an alternating Turing machine that uses a co-non-deterministic step when it has to check two subtrees. When at , the machine first guesses a vertex for each , . It then checks that these vertices form a multicolour clique with the vertices chosen for the parent of . 
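The adjacency conditions the machine checks can be made concrete with a verifier for candidate solutions; the representation (a parent map for the tree, a list of k chosen vertices per tree node, an adjacency predicate) is an assumption of ours:

```python
def is_tree_chained_clique(parent, k, choice, adjacent):
    """Verify a candidate solution: at every tree node x, the k chosen
    vertices must be pairwise adjacent, and each must also be adjacent
    to every vertex chosen at the parent of x."""
    for x, ch in choice.items():
        if len(ch) != k:                      # one vertex per colour class
            return False
        for i in range(k):                    # clique inside node x
            for j in range(i + 1, k):
                if not adjacent(ch[i], ch[j]):
                    return False
        p = parent.get(x)
        if p is not None:                     # clique across the tree edge
            for u in ch:
                for v in choice[p]:
                    if not adjacent(u, v):
                        return False
    return True
```

A co-non-deterministic traversal only ever needs the choices for a node and its parent, which is why the machine can forget earlier choices.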
The vertices chosen for the parent can now be forgotten and the machine moves to checking children of .\nThe machine works in polynomial treesize, and uses only space to keep the indices of chosen vertices for up to two nodes of , the current position on .\nWe next show that Tree-Chained Multicolour Clique is XALP-hard.\nWe will use the characterisation of XALP where the computation tree of the alternating Turing machine is a specific tree (3 ###reference_3###), which allows us to control when co-non-deterministic steps can take place.\nLet be an alternating Turing machine with computation tree , let be its input of size , and be the parameter.\nThe plan is to encode the configuration of at the step corresponding to node by the choice of the vertices in (for some ). The possible transitions of the Turing Machine are then encoded by edges between and for , where .\nA configuration of contains the same elements as in the proof of Theorem 3.1 ###reference_heorem1###:\nthe current state of ,\nthe position of the head on the input tape,\nthe working space which is bits long, and\nthe position of the head on the work tape.\nWe partition the working space in pieces of consecutive bits, and have a set of vertices for each.\nFormally, we have a vertex in for each tuple where is the state of the machine, is the position of the head on the input tape, indicates if the block of the work tape is before or after the head, or its position in the block, and is the current content of the th block of the work tape.\n###figure_1### The edges between vertices of enforce that possible choices of vertices\ncorrespond to valid configurations.\nThere is an edge between and with corresponding tuples and , if and only if , , and either , or and , or and .\nIf is path with , then at most one of the can encode a block with the work tape head, blocks before the head have , blocks after the head have , and all blocks encode the same state and position of the input tape head.\nThe edges between 
vertices of and for enforce that the configurations chosen in and encode configurations with a transition from one to the other.\nThere is an edge between and with corresponding tuples and , such that if and only if .\nThere is an edge between and with corresponding tuples and , such that , if and only if, there is a transition of from state to state that would write when reading on the input tape and on the work tape, move the input tape head by and the work tape by (where and ), and for .\nIf induce a \u2018multicolour grid\u2019 (i.e. is a path with , is a path with , there are edges for , , and encodes a valid configuration), then encodes a valid configuration that can reach the configuration encoded by using one transition of .\nThis follows easily from the construction but we still detail why this is sufficient when the work tape head moves to a different block.\nWe consider the case when the head moves to the block before it. That is we consider the case where encodes and encodes . First, note that there is an edge from allowing this. We use Observation 4 ###reference_### and conclude that (if it exists) must encode head position after for its block. The edge then enforces that encodes head position 1 but it can also exist only if there is a transition of that moves the work tape head to the previous block and the written character at the beginning of the block encoded by corresponds to such transition. Moving to the next block is a symmetric case.\nWe have further constraints on the vertices placed in each based on what is in .\nIf is in a leaf of , then we only have vertices with a corresponding tuple with an accepting state.\nIf is in a \u2018branching\u2019 vertex of (i.e. 
has two children), then we only have vertices with a corresponding tuple with a universal state.\nIf is the root, then only vertices corresponding to the initial configuration are allowed.\nOtherwise, we only have vertices with tuples encoding an existential state.\nFurthermore, we have to make sure that when branching, we take care of the two distinct transitions.\nWe actually assume that has an order on children for vertices with two children. Then for the edge of to the first (resp. second) child, we only allow the first (resp. second) transition from the configuration of the parent (which must have a universal state).\nWe now complete the graph with edges that do not enforce constraints, so that we may find a multicolour clique instead of only a multicolour grid.\nFor every , and such that , we add all edges between and . For every , and such that , we add all edges between and .\nIt should be clear that finding a multicolour clique for some edge after adding these edges is equivalent to finding a 'multicolour grid' before they were added. (Asking for these multicolour grids for each edge of the tree instead of multicolour cliques also leads to an XALP-complete problem, but we do not use this problem for further reductions. It could, however, be used as a starting point for new reductions; the corresponding problem can be expressed as binary CSP on the Cartesian product of and a tree.)\nThe constructed graph admits a tree-chained multicolour clique if and only if there is an accepting run for with input and computation tree .\nThe statement follows from a straightforward induction, showing that for each configuration of that can be encoded by the construction at , its encoding can be extended to a tree-chained multicolour clique of the subtree of rooted at , if and only if there is an accepting run of from with as computation tree the subtree of rooted at .\nEach has vertices (for the set of states). Edges are only between and such that or . 
We conclude that there are vertices and edges in the constructed graph per vertex of , which is itself of size , so the constructed instance has size , for computable functions. The construction can even be performed using only space for some computable function .\nNote also that : the new parameter is bounded by a function of the initial parameter. This shows that our reduction is a parameterized pl-reduction, and we conclude XALP-hardness. Combined with Lemma 4.1 ###reference_heorem1###, this proves the following result.\nTree-chained Multicolour Clique is XALP-complete.\nOne may easily modify this to the case where each colour class has the same size, by adding isolated vertices.\nBy taking local complements of the graph, i.e., for each node , we complement the subgraph induced by , and for each edge , we complement the edge set , we directly obtain the following result.\nTree-Chained Multicolour Independent Set is XALP-complete.\nMulticolour Clique, Chained Multicolour Clique,\nand Tree-Chained Multicolour Clique can be seen as Binary CSP problems, by replacing vertex choice by assignment choice.\nIn the Binary CSP problem, we are given a graph ,\na set of colours ,\nfor each vertex a set of colours , and for each edge ,\na set of pairs of colours , and ask if we can assign to each vertex \na colour , such that for each edge , .\nBinary CSP is XALP-complete with each of the following parameters:\ntreewidth,\ntreewidth plus degree,\ntree-partition width.\nMembership for treewidth as parameter follows as usual. The colour of (uncoloured) vertices is non-deterministically chosen when they are introduced. We maintain the colours of the vertices of the current bag in the working space. We use co-non-deterministic steps when the tree decomposition branches. We check that introduced edges satisfy the colour constraint. This uses space, and runs in polynomial total time. 
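For concreteness, here is a checker for the Binary CSP constraints just defined; the dict-based representation is our assumption:

```python
def csp_satisfied(assignment, colours, allowed):
    """Check a Binary CSP assignment: every vertex gets a colour from
    its own list, and every constrained edge an allowed colour pair.
    `colours[v]` is the colour list of v; `allowed[(u, v)]` the set of
    allowed pairs for the edge (u, v)."""
    for v, c in assignment.items():
        if c not in colours[v]:
            return False
    for (u, v), pairs in allowed.items():
        if (assignment[u], assignment[v]) not in pairs:
            return False
    return True
```

The membership algorithm performs exactly these per-edge checks, but incrementally, keeping only the colours of the current bag in memory.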
Membership for the two other parameterisations follows from this as well.\nHardness for treewidth plus degree follows from Theorem 4.5 ###reference_heorem5###.\nSuppose we are given an instance of Tree-Chained Multicolour Clique, as described earlier in this section.\nWe build a graph by taking for each set a vertex\n with , i.e., the vertices in\nthe Tree-Chained Multicolour Clique now become colours\nin the Binary CSP instance. We take an edge in whenever or , , , and allow for such an edge a pair of\ncolours if and only if . The transformation is mainly\na reinterpretation of a version of Multicolour Clique as\na version of Binary CSP. One easily observes that solutions of\nthe Tree-Chained Multicolour Clique instance and solutions\nof the Binary CSP instance correspond one-to-one, and thus\nwe have a correct reduction.\nNote that has degree at most and treewidth at most :\nuse a tree decomposition , by\nchoosing an arbitrary root in , and letting contain\nall vertices of the form and with the parent of in . Hardness for treewidth plus degree as parameter now follows.\nGraphs of treewidth and maximum degree have\ntree-partition width (see [34 ###reference_34###]), and thus\nXALP-hardness for tree-partition width as parameter follows."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "More XALP-complete problems",
+ "text": "In this section, we prove a collection of problems on graphs, given with a tree-structure, to be complete\nfor the class XALP. The proofs are of different types: in some cases, the proofs are new, in some cases,\nreformulations of existing proofs from the literature, and in some cases, it suffices to observe that an\nexisting transformation from the literature keeps the width-parameter at hand bounded."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "List colouring",
+ "text": "The problems List Colouring and Pre-colouring Extension with pathwidth as parameter are\nXNLP-complete [11 ###reference_11###]. We give a simple proof (using a well-known reduction) of XALP-completeness with treewidth\nas parameter. Previously, Jansen and Scheffler [28 ###reference_28###] showed that these problems are in XP,\nand Fellows et al. [25 ###reference_25###] showed -hardness.\nList Colouring and Pre-colouring Extension are XALP-complete with treewidth as\nparameter.\nMembership follows as the problems are special cases\nof Binary CSP.\nWe first show XALP-hardness of List Colouring. We reduce from\nBinary CSP with treewidth as parameter.\nSuppose we are given a graph , with for each vertex a colour\nset , and for each edge a set of allowed colour pairs .\nFirst, we can assume that the colour sets are disjoint.\nThe hardness proof behind Corollary 4.7 ###reference_heorem7###\nalready produces such disjoint\nsets. (Alternatively, we can rename for each vertex its colours and adjust the\nconstraints accordingly.)\nFor each vertex , its list of colours .\nNow, for each edge , we remove the edge, but add for each pair\nof colours a new vertex \nwith , and make this new vertex adjacent to and to .\nThis new vertex enforces that we cannot use the colour pair for the vertices\n and ; as we do this for each disallowed colour pair, this ensures that\nthe restriction of the colouring of satisfies all colour constraints of the Binary CSP instance.\nThe treewidth of the resulting graph is the maximum of the treewidth of and 2;\ntake a tree decomposition of , and for each new vertex incident to\n and , we take a bag consisting of and make that bag\nincident to a bag that contains and . 
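The gadget construction can be sketched as follows: enumerate the disallowed colour pairs of each former edge and emit one gadget vertex per pair, with list {a, b}, adjacent to both endpoints (function name and representation are ours). Since the colour sets are disjoint, if u takes a and v takes b, the gadget vertex has no colour left, so exactly the forbidden pairs are excluded:

```python
from itertools import product

def forbidden_pair_gadgets(colours, allowed, edges):
    """For each Binary CSP edge (u, v) and each colour pair (a, b) NOT
    allowed on it, create a fresh vertex with list {a, b} adjacent to
    both u and v.  Returns the gadgets as (neighbour_u, neighbour_v, list)."""
    gadgets = []
    for (u, v) in edges:
        for (a, b) in product(colours[u], colours[v]):
            if (a, b) not in allowed[(u, v)]:
                gadgets.append((u, v, {a, b}))
    return gadgets
```

Each gadget vertex has degree 2, which is why the treewidth only grows to max(tw(G), 2).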
This shows the result\nfor List Colouring.\nThe standard reduction from Pre-colouring Extension to List colouring that adds for\neach forbidden colour of a vertex a new neighbour to pre-coloured with does not\nincrease the treewidth, which shows XALP-hardness for Pre-colouring Extension with treewidth as parameter."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Tree variants of Weighted Satisfiability",
+ "text": "From Tree-Chained Multicolour Independent Set, we can show XALP-completeness of tree variants of what in [11 ###reference_11###] was called Chained Weighted CNF-Satisfiability and its variants (which in turn are analogues of Weighted CNF-Satisfiability, see e.g. [21 ###reference_21###, 22 ###reference_22###]).\nTree-Chained Weighted CNF-Satisfiability\n\n\nInput: A tree , sets of variables , and clauses , each with either only variables of for some , or only variables of and for some .\nParameter: .\nQuestion: Is there an assignment of at most variables in each that satisfies all clauses?\nPositive Partitioned Tree-Chained Weighted CNF-Satisfiability\n\n\nInput: A tree , sets of variables , and clauses of positive literals , each with either only variables of for some , or only variables of and for some . Each is partitioned into .\nParameter: .\nQuestion: Is there an assignment of exactly one variable in each that satisfies all clauses?\nNegative Partitioned Tree-Chained Weighted CNF-Satisfiability\n\n\nInput: A tree , sets of variables , and clauses of negative literals , each with either only variables of for some , or only variables of and for some . Each is partitioned into .\nParameter: .\nQuestion: Is there an assignment of exactly one variable in each that satisfies all clauses?\nPositive Partitioned Tree-Chained Weighted CNF-Satisfiability, Negative Partitioned Tree-Chained Weighted CNF-Satisfiability, and Tree-Chained Weighted CNF-Satisfiability are XALP-complete.\nWe first show membership for Tree-Chained Weighted CNF-Satisfiability, which implies membership for the more structured versions. We simply follow the tree shape of our instance by branching co-non-deterministically when the tree branches. We keep the indices of the variables chosen non-deterministically for the \u2018local\u2019 clauses in the working space. 
We then check that said clauses are satisfied.\nWe first show hardness for Negative Partitioned Tree-Chained Weighted CNF-Satisfiability by reducing from Tree-Chained Multicolour Independent Set.\nFor each vertex , we have a Boolean variable . We denote by the set of variables , and by the set of variables . This preserves the partition properties.\nFor each edge , we add the clause .\nis a multicolour independent set if and only if is a satisfying assignment.\nTo reduce to Positive Partitioned Tree-Chained Weighted CNF-Satisfiability, we simply replace negative literals for by a disjunction of positive literals . This works because, due to the partition constraint, a variable is assigned if and only if another variable is assigned .\nTo reduce to Tree-Chained Weighted CNF-Satisfiability, we simply express the partition constraints using clauses. For each , we add the clauses , and for each pair the clause . This enforces that we pick at least one variable, and at most one variable, for each ."
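The clauses expressing the partition constraint are the standard "exactly one" encoding: one big positive clause plus one binary negative clause per pair. A sketch, with literals represented as (variable, polarity) pairs (our representation):

```python
from itertools import combinations

def exactly_one_clauses(variables):
    """CNF clauses forcing exactly one of `variables` to be true:
    an at-least-one clause and pairwise at-most-one clauses."""
    clauses = [[(x, True) for x in variables]]       # at least one
    for x, y in combinations(variables, 2):
        clauses.append([(x, False), (y, False)])     # at most one per pair
    return clauses
```

For a partition class of m variables this produces 1 + m(m-1)/2 clauses, all local to that class, so the tree-chained structure of the instance is preserved.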
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Logarithmic Treewidth",
+ "text": "Although XALP-complete problems are in XP and presumably not in FPT, there is a link between XALP and single-exponential FPT algorithms on tree decompositions. Indeed, by considering instances with treewidth , where is the parameter, the single-exponential FPT algorithm becomes an XP algorithm. We call this parameter logarithmic treewidth.\nIndependent Set parameterized by logarithmic treewidth\n\n\nInput: A graph , with a given tree decomposition of width at most , and an integer .\nParameter: .\nQuestion: Is there an independent set of of size at least ?\nIndependent Set with logarithmic treewidth as parameter is XALP-complete.\nWe start with membership, which follows from the usual dynamic programming on the tree decomposition.\nWe maintain for each vertex in the current bag whether is in the independent set or not. When introducing a vertex , we non-deterministically decide if is put in the independent set or not. We reject if an edge is introduced between two vertices of the independent set. We make a co-non-deterministic step whenever the tree decomposition branches. Since we only need one bit of information per vertex in the bag, this requires only working space; as for the running time, we simply do a traversal of the tree decomposition, which has polynomial treesize.\nWe show hardness by reducing from Positive Partitioned Tree-Chained Weighted CNF-Satisfiability.\nWe can simply reuse the construction from [11 ###reference_11###] and note that the constructed graph has bounded logarithmic treewidth instead of logarithmic pathwidth, because we reduced from the tree-chained SAT variant instead of the chained SAT variant. We describe the gadgets for completeness.\nFirst, the SAT instance is slightly adjusted for technical reasons. For each , we add a clause containing exactly its initial variables. This makes sure that the encoding of the chosen variable is valid. We assume the variables in each to be indexed starting from 0.\nVariable gadget. 
For each , let . We add edges , .\nClause gadget. For each clause with literals, we assume to be even by adding a dummy literal if necessary. We add paths , and .\nFor , we add the edge . We then add vertex for , which represents the th literal of the clause. Let be the binary representation of the index of the corresponding variable of . Then is adjacent to and the vertices for . For the dummy literal, there is no vertex .\nThe clause gadget has an independent set of size if and only if it contains a vertex .\nWhen the variable gadgets have one vertex in the independent set on each edge, a vertex of a clause can be added to the independent set only if the independent set contains exactly the vertices of the variable gadget that give the binary representation of the variable corresponding to .\nHence, the SAT instance is satisfiable if and only if there is an independent set of size in our construction.\nThe following problems are XALP-complete with logarithmic treewidth as parameter: Vertex Cover,\nRed-Blue Dominating Set, Dominating Set.\nThe result for Vertex Cover follows directly from Theorem 5.5 ###reference_heorem5### and the well known fact that a graph\nwith vertices has a vertex cover\nof size at most , iff it has an independent set of size at least .\nViewing Vertex Cover as a special case of Red-Blue Dominating Set gives the following graph: subdivide all edges of , and ask if a set of original (blue) vertices dominates all new (red)\nsubdivision vertices; as the subdivision step does not increase the treewidth,\nXALP-hardness of Red-Blue Dominating Set with treewidth as parameter follows.\nTo obtain XALP-hardness of Dominating Set, add to the instance of Red-Blue Dominating Set,\ntwo new vertices and and\nedges from to and all blue vertices; the treewidth increases by at most one, and the minimum size of a dominating set\nin the new graph is exactly one larger than the minimum size of a red-blue dominating set in .\nMembership in XALP is shown 
similarly to the proof of Theorem 5.5 ###reference_heorem5###."
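The dynamic programming behind these membership proofs can be sketched deterministically: one table entry per independent subset of the current bag, merged bottom-up over a rooted tree decomposition (the representation is our assumption; for logarithmic treewidth the tables have polynomially many entries):

```python
from itertools import combinations

def max_independent_set(bags, children, edges, root):
    """Maximum independent set via DP over a rooted tree decomposition:
    one bit per bag vertex (in / out of the solution), as in the
    membership argument.  `bags[t]` is the vertex set of bag t,
    `children[t]` its child bags, `edges` a set of frozenset edges."""
    def independent(S):
        return all(frozenset((u, v)) not in edges
                   for u, v in combinations(S, 2))

    def subsets(X):
        X = list(X)
        for r in range(len(X) + 1):
            for c in combinations(X, r):
                yield frozenset(c)

    def solve(t):
        # table[S] = best count of chosen vertices in the subtree of t
        # among solutions agreeing with S on bag t
        table = {S: len(S) for S in subsets(bags[t]) if independent(S)}
        for c in children.get(t, []):
            ctab = solve(c)
            new = {}
            for S in table:
                best = None
                for Sc, val in ctab.items():
                    # the child's choice must agree on the shared vertices
                    if Sc & bags[t] == S & bags[c]:
                        # subtract vertices counted in both tables
                        v = table[S] + val - len(Sc & bags[t])
                        if best is None or v > best:
                            best = v
                if best is not None:
                    new[S] = best
            table = new
        return table

    return max(solve(root).values())
```

On a 4-cycle with the two-bag decomposition {0,1,3}, {1,2,3}, the DP returns 2, the size of a maximum independent set.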
+ },
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "All-or-Nothing Flow",
+ "text": "A flow network is a tuple with a\ndirected graph, a vertex called the source,\n a vertex called the sink, and a capacity function, assigning to each arc a\npositive capacity, given in unary.\nA flow in a flow network is a function\n that assigns to each arc a non-negative flow value, such that\nfor each : , and\nfor each : .\nThe value of a flow is .\nFor more background on flow, we refer to the various textbooks on algorithms or flow, e.g., [2 ###reference_2###].\nA flow is an all-or-nothing flow if for each\n: , i.e., when there is flow over an\narc, then the full capacity of the arc is used. Deciding whether\nthere is an all-or-nothing flow of a given value in a given flow network is NP-complete [4 ###reference_4###]. In [6 ###reference_6###], it was shown that this problem is XNLP-complete with pathwidth as parameter. As that proof uses a reduction\nfrom a problem that has no 'tree variant' yet, we use a different proof here.\nAll-or-Nothing Flow parameterized by treewidth\n\n\nInput: A flow network , with a given tree decomposition of width at most , and an integer .\nParameter: .\nQuestion: Is there an all-or-nothing flow from to in with value exactly ?\nWe remark that the proof below can also be used (without changes)\nto show XALP-completeness for the variant where we ask whether there\nis a flow of value at least .\nAll-or-Nothing Flow parameterized by treewidth is\nXALP-complete.\nMembership can be shown in the usual way. For each introduced arc , we guess whether it is used () or not (), and the status of a bag is a function that gives\nfor each vertex in the bag the difference of the total inflow so far and the total outflow so far (). Because the capacities are given in unary, storing these values requires only bits.\nFor the hardness proof, we reduce from Binary CSP with\ntreewidth plus degree as parameter.\nFirst, we build an\nequivalent instance where all sets of colours are\ndisjoint: . 
This\ncan easily be done by a simple adaptation of the instance.\nSo, we assume we are given a graph of treewidth at most and degree at most , a set of colours\n, for each a set with these sets disjoint, and for each ordered pair of vertices\n that forms an edge, a set of allowed colour pairs\n; and finally, we have\na tree decomposition of of width at most .\nIn the proof, we use the technique of representing colours by\nflow values in a Sidon set; a similar technique was used in\n[13 ###reference_13###].\nA Sidon set is a set of positive integers such that each pair of integers from the set has\na different sum, i.e., for , . Sidon sets are also known as Golomb rulers.\nErdős and Turán [24 ###reference_24###] gave a method to construct\nSidon sets; as discussed in [14 ###reference_14###], their construction\nimplies the following.\nA Sidon set with elements in can be found in time and logarithmic space.\nThe next step in the construction is to build a Sidon set\nwith elements, following the construction\nof Erdős and Turán [24 ###reference_24###]. Write .\nNote that if we take a Sidon set, and add the same number to\neach element of the set, we again obtain a Sidon set. Now, we\nadd to each element of the just created Sidon set. Each of\nthese numbers is between and ; we assign to each\ncolour a unique element from this latter set. I.e., for each , we have , and\ndifferent pairs of colours have a different sum of their values.\nIn the flow network we are constructing, each vertex from is represented\nby vertices, with the degree of in .\nCall these vertices .\nThe construction relies on two gadgets: one that models assigning a colour to a vertex, and one that models checking for an edge that the\nassigned colours are an allowed pair.\nIn the description, we first allow parallel arcs with different\ncapacities. 
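The Erdős–Turán construction mentioned above can be sketched and checked by brute force: for a prime p, it produces p numbers below 2p² whose pairwise sums are all distinct (the function names are ours):

```python
from itertools import combinations_with_replacement

def erdos_turan_sidon(p):
    """Erdős–Turán construction: for a prime p, the p numbers
    2*p*i + (i*i % p), for i = 0..p-1, form a Sidon set in [0, 2*p*p)."""
    return [2 * p * i + (i * i) % p for i in range(p)]

def is_sidon(xs):
    """Brute-force check that all pairwise sums (with repetition) differ."""
    sums = [a + b for a, b in combinations_with_replacement(xs, 2)]
    return len(sums) == len(set(sums))
```

Note that shifting every element by the same amount preserves the Sidon property, which is the observation used in the text to move the set into the desired range.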
As a final step, we will subdivide each arc once: if we subdivide an arc with some capacity , then both\nresulting arcs get capacity as well. Clearly, the network without subdivisions has an all-or-nothing flow with the required value if and only if the network with subdivisions has one. Also,\ngiven a tree decomposition of the network with parallel arcs, we\ncan build one of the same width\n(assuming the width is at least 2) for the network with subdivisions,\nas follows: if we subdivide an arc to and , then we\nadd a new bag containing , , and and make that bag incident to a bag that contains and ; the latter exists due to the definition of a tree decomposition.\nTo model the assignment of a colour to a vertex, we have a gadget\nwith one additional vertex . We have an arc from to \nwith capacity ; for each colour , we have\nan arc from to with capacity and an arc\nfrom to with capacity .\nThe intuition is as follows: setting the colour of to \ncorresponds to sending from to , from\n to and from to .\nSee Figure 2 ###reference_###.\n###figure_2### Next, we describe the gadget that models a check that the pair of\ncolours assigned to the endpoints of an edge is\nin .\nWe assume we have an ordering of the vertices; for each\nvertex, order its neighbours accordingly.\nSuppose is the th neighbour of ,\nand is the th neighbour of ; thus, ,\nand .\nThe gadget has two additional vertices: and .\n(We have one gadget per edge rather than per arc, and so misuse notation a little:\n and represent the same vertex, likewise for\n and .)\nFor each , we have an arc from to \nwith capacity , and an arc from to with\ncapacity .\nFor each , we have an arc from to \nwith capacity , and an arc from to with\ncapacity .\nFor each , we have an arc\nfrom to with capacity .\nSee Figure 3 ###reference_###.\n###figure_3### The intuition here is as follows. If has colour \nand colour , then we send from to\n, from to ,\n from to , from to\n and from to . 
The property of\nSidon sets ensures that we cannot reroute flow in another way, i.e.,\nthe amount of flow that departs from equals the amount of flow that arrives\nat .\nFinally, we have for each vertex , and ,\nan arc from to with capacity .\nLet be the resulting graph, with the source, the sink.\nWe will first show how to build a tree decomposition of width\n of . Take a tree decomposition of . For each , we take a bag that contains ; ; for each vertex , the vertices ,\n; and for each edge in ,\nif , the vertices and . It is easy\nto see that this indeed is a tree decomposition of (still with parallel arcs), and each\nbag clearly is of size . As discussed above, we can obtain\nan equivalent instance without parallel arcs by subdividing arcs\nand adding bags with three vertices.\nSuppose has vertices.\nThere is an all-or-nothing flow with value from to\n in if and only if has a colouring with for\neach , , and for each , .\nFirst, suppose that has a colouring that fulfils the demands.\nBuild a flow as follows. For each vertex coloured with , send from to , from\n to , from to ,\n from each to the corresponding , from\neach to the corresponding , and from\n to .\nFor each edge , if is coloured and is coloured\n, then send flow from to .\nOne can check that this is an all-or-nothing flow from to ; its value is , as sends flow to each vertex .\nNow, suppose we have a flow with value from to .\nAs the total capacity of all outgoing arcs from equals ,\neach of these arcs is used, so each receives inflow.\nOutgoing arcs from have capacities of the form or\n, but as for each colour , , the only possible way to have an outflow of exactly is to send\n flow over one outgoing arc, and flow over another outgoing arc,\nfor some colour .\nThus, each vertex receives flow over one of its\nincoming arcs, for some colour . By construction .\nLet be the colouring of obtained by colouring each \nwith the colour such that the flow from to equals\n. 
We claim that each vertex receives flow,\nand that we send for each edge in , flow from to .\nWe show this by induction. Note that the claim holds for for all .\nObserve that is acyclic.\nSuppose is the th neighbour of and is the th neighbour of .\nBy the induction hypothesis, receives flow, and \nreceives flow. So, receives \nflow which it sends to . Now, we use the Sidon property: the only\npossible way for to send out this flow is to\nsend flow to and flow to ; any\nother combination of flows would imply a second pair of values\nin the Sidon set with the same sum. This shows that the induction\nhypothesis holds.\nNow, we use the Sidon property for the second time. As we send flow from to , there must\nbe an arc between these vertices with this capacity. So,\nthere is a pair with . By the Sidon property, , and as the sets for the vertices are\ndisjoint, we have and , so . As this holds for each edge, we have a colouring that\nsatisfies the constraints.\nBy observing that the transformation can be done in logarithmic space, the result now follows."
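The two properties the correctness argument relies on, the all-or-nothing condition and flow conservation, can be checked directly; the dict-based representation below is our assumption:

```python
def is_all_or_nothing_flow(arcs, flow, source, sink, value):
    """Check that every arc carries either 0 or its full capacity, that
    flow is conserved at every vertex except source and sink, and that
    the net inflow at the sink equals `value`.
    `arcs[(u, v)]` is the capacity of arc (u, v); `flow[(u, v)]` its flow."""
    for a, cap in arcs.items():
        if flow.get(a, 0) not in (0, cap):   # all-or-nothing condition
            return False
    balance = {}
    for (u, v), f in flow.items():
        balance[u] = balance.get(u, 0) - f   # f leaves u
        balance[v] = balance.get(v, 0) + f   # f arrives at v
    for w, b in balance.items():
        if w not in (source, sink) and b != 0:
            return False                     # conservation violated
    return balance.get(sink, 0) == value
```

In the reduction, the Sidon-set capacities make this check force the intended routing: the only way to balance each internal vertex is to use the arcs corresponding to the chosen colours.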
+ },
+ {
+ "section_id": "5.5",
+ "parent_section_id": "5",
+ "section_name": "Other problems",
+ "text": "Several XALP-hardness proofs follow from known reductions. Membership is usually easy to prove, by observing that the known\nXP-algorithms can be turned into XALP-membership by guessing table entries, and using the stack to store the information for a\nleft child when processing a right subtree.\nThe following problems are XALP-complete:\nChosen Maximum Outdegree, Circulating Orientation, Minimum Maximum Outdegree, Outdegree Restricted Orientation, and\nUndirected Flow with Lower Bounds, with the treewidth as parameter.\nMax Cut and Maximum Regular Induced Subgraph with clique-width as parameter.\n(1): The reductions given in [6 ###reference_6###] and [32 ###reference_32###] can be used; one easily observes that these reductions keep the treewidth of the constructed instance bounded by a function of the treewidth of the original instance (often, a small additive constant is added.)\n(2): The reductions given in [9 ###reference_9###] can be reused with minimal changes, only the bound on linear clique-width becomes a bound on clique-width because of the \u2018tree-shape\u2019 of the instance to reduce.\nChosen Maximum Outdegree, Circulating Orientation, Minimum Maximum Outdegree,\nOutdegree Restricted Orientation, and Undirected Flow with Lower Bounds,\ntogether with All-or-Nothing Flow were shown to be XNLP-complete with pathwidth as\nparameter in [6 ###reference_6###]. Gima et al. [27 ###reference_27###] showed that Minimum Maximum\nOutdegree with vertex cover as parameter is -hard. For related results, see also [32 ###reference_32###]."
76
+ },
77
+ {
78
+ "section_id": "6",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusions",
81
+ "text": "We expect many (but not all) problems that are (W[1]-)hard and in XP for treewidth as parameter to be XALP-complete;\nour paper gives good starting points for such proofs. Let us give an explicit example. The Pebble Game Problem [22 ###reference_22###, 29 ###reference_29###]\nparameterized by the number of pebbles is complete for XP, which is equal to XAL=A[]. The problem corresponds to deciding whether there is a winning strategy in an adversarial two-player game with pebbles on a graph where the possible moves depend on the positions of all pebbles. We can expect variants with at most moves to be complete for XALP.\nCompleteness proofs give a relatively precise complexity classification of problems. In particular, XALP-hardness proofs indicate that we do not expect a deterministic algorithm to use less than XP space if it runs in XP time. Indeed the inclusion of XNLP in XALP is believed to be strict, and already for XNLP-hard problems we have the following conjecture.\nNo XNLP-hard problem has an algorithm that runs in time and space, with a computable function, the parameter, the input size.\nWhile XNLP and XALP give a relatively simple framework to classify problems in terms of simultaneous bound on space and time, the parameter is allowed to blow up along the reduction chain. One may want to mimic the fine grained time complexity results based on the (Strong) Exponential Time Hypothesis. 
In this direction, one could assume that Savitch\u2019s theorem is optimal, as was done in [15 ###reference_15###].\nSince XNLP is above the W-hierarchy, it could be interesting to study the relationship of XALP with some other hierarchies like the A-hierarchy and the AW-hierarchy.\nIt is also unclear where to place List-colouring parameterized by tree-partition-width. (A tree-partition of a graph is a partition of into (disjoint) bags , where is a tree, such that implies that the bags of and are the same or adjacent in .\nThe width is the size of the largest bag, and the tree-partition-width of is found by taking the minimum width over all tree-partitions of .)\nIt was shown to be in XL and W[1]-hard [7 ###reference_7###], but neither looks like a good candidate for completeness."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {},
86
+ "image_paths": {
87
+ "1": {
88
+ "figure_path": "2206.11828v5_figure_1.png",
89
+ "caption": "Figure 1: Local structure of a satisfying assignment in the constructed instance of Tree-Chained Multicolor Clique (k\u2032=4superscript\ud835\udc58\u20324k^{\\prime}=4italic_k start_POSTSUPERSCRIPT \u2032 end_POSTSUPERSCRIPT = 4). The blue edges enforce that the positions of the heads on tapes and the state of the TM are consistent. The black edges enforce that what is written on the work tape does not change in blocks where the head is not present. The red edges enforce that the state, head positions, and the bit at the position of the head on the work tape can be changed exactly by the transitions of the TM. We then add further edges to form cliques, but they do not enforce any constraints.",
90
+ "url": "http://arxiv.org/html/2206.11828v5/x1.png"
91
+ },
92
+ "2": {
93
+ "figure_path": "2206.11828v5_figure_2.png",
94
+ "caption": "Figure 2: Gadget that chooses a colour for a vertex. For each \u03b3\u2208C\u2062(v)\ud835\udefe\ud835\udc36\ud835\udc63\\gamma\\in C(v)italic_\u03b3 \u2208 italic_C ( italic_v ), there is an arc to v0subscript\ud835\udc630v_{0}italic_v start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT with capacity S\u2062(\u03b3)\ud835\udc46\ud835\udefeS(\\gamma)italic_S ( italic_\u03b3 ) and for each \u03b3\u2208C\u2062(v)\ud835\udefe\ud835\udc36\ud835\udc63\\gamma\\in C(v)italic_\u03b3 \u2208 italic_C ( italic_v ), there is an arc to t\ud835\udc61titalic_t with capacity 6\u2062L\u2212S\u2062(\u03b3)6\ud835\udc3f\ud835\udc46\ud835\udefe6L-S(\\gamma)6 italic_L - italic_S ( italic_\u03b3 ). (The shorthand notations S\u2062(C\u2062(v))\ud835\udc46\ud835\udc36\ud835\udc63S(C(v))italic_S ( italic_C ( italic_v ) ) and 6\u2062L\u2212S\u2062(C\u2062(v))6\ud835\udc3f\ud835\udc46\ud835\udc36\ud835\udc636L-S(C(v))6 italic_L - italic_S ( italic_C ( italic_v ) ) indicate these values respectively.)",
95
+ "url": "http://arxiv.org/html/2206.11828v5/x2.png"
96
+ },
97
+ "3": {
98
+ "figure_path": "2206.11828v5_figure_3.png",
99
+ "caption": "Figure 3: Gadget that checks if an edge is properly coloured. Shorthand notation: S\u2062(C\u2062(v))\ud835\udc46\ud835\udc36\ud835\udc63S(C(v))italic_S ( italic_C ( italic_v ) ): for each \u03b3\u2208C\u2062(v)\ud835\udefe\ud835\udc36\ud835\udc63\\gamma\\in C(v)italic_\u03b3 \u2208 italic_C ( italic_v ), there is an arc with capacity S\u2062(\u03b3)\ud835\udc46\ud835\udefeS(\\gamma)italic_S ( italic_\u03b3 ). Similar for S\u2062(C\u2062(w))\ud835\udc46\ud835\udc36\ud835\udc64S(C(w))italic_S ( italic_C ( italic_w ) ).",
100
+ "url": "http://arxiv.org/html/2206.11828v5/x3.png"
101
+ }
102
+ },
103
+ "validation": true,
104
+ "references": [
105
+ {
106
+ "1": {
107
+ "title": "Fixed-parameter tractability and completeness IV: On completeness\nfor and PSPACE analogues.",
108
+ "author": "Karl A. Abrahamson, Rodney G. Downey, and Michael R. Fellows.",
109
+ "venue": "Annals of Pure and Applied Logic, 73:235\u2013276, 1995.",
110
+ "url": null
111
+ }
112
+ },
113
+ {
114
+ "2": {
115
+ "title": "Network flows - theory, algorithms and applications.",
116
+ "author": "Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin.",
117
+ "venue": "Prentice Hall, 1993.",
118
+ "url": null
119
+ }
120
+ },
121
+ {
122
+ "3": {
123
+ "title": "Satisfiability, branch-width and Tseitin tautologies.",
124
+ "author": "Michael Alekhnovich and Alexander A. Razborov.",
125
+ "venue": "Computational Complexity, 20(4):649\u2013678, 2011.",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "4": {
131
+ "title": "NP-complete variants of some classical graph problems.",
132
+ "author": "Per Alexandersson.",
133
+ "venue": "arXiv, abs/2001.04120, 2020.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "5": {
139
+ "title": "Width-parametrized SAT: time\u2013space tradeoffs.",
140
+ "author": "Eric Allender, Shiteng Chen, Tiancheng Lou, Periklis A. Papakonstantinou, and\nBangsheng Tang.",
141
+ "venue": "Theory of Computing, 10:297\u2013339, 2014.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "6": {
147
+ "title": "Problems hard for treewidth but easy for stable gonality.",
148
+ "author": "Hans L. Bodlaender, Gunther Cornelissen, and Marieke van der Wegen.",
149
+ "venue": "In Michael A. Bekos and Michael Kaufmann, editors, 48th\nInternational Workshop on Graph-Theoretic Concepts in Computer Science, WG\n2022, volume 13453 of Lecture Notes in Computer Science, pages 84\u201397.\nSpringer, 2022.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "7": {
155
+ "title": "List colouring trees in logarithmic space.",
156
+ "author": "Hans L. Bodlaender, Carla Groenland, and Hugo Jacob.",
157
+ "venue": "In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz\nHerman, editors, 30th Annual European Symposium on Algorithms, ESA\n2022, volume 244 of LIPIcs, pages 24:1\u201324:15. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2022.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "8": {
163
+ "title": "On the parameterized complexity of computing tree-partitions.",
164
+ "author": "Hans L. Bodlaender, Carla Groenland, and Hugo Jacob.",
165
+ "venue": "In Holger Dell and Jesper Nederlof, editors, 17th International\nSymposium on Parameterized and Exact Computation, IPEC 2022, volume 249 of\nLIPIcs, pages 7:1\u20137:20. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr\nInformatik, 2022.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "9": {
171
+ "title": "XNLP-completeness for parameterized problems on graphs with a linear\nstructure.",
172
+ "author": "Hans L. Bodlaender, Carla Groenland, Hugo Jacob, Lars Jaffke, and Paloma T.\nLima.",
173
+ "venue": "In Holger Dell and Jesper Nederlof, editors, 17th International\nSymposium on Parameterized and Exact Computation, IPEC 2022, volume 249 of\nLIPIcs, pages 8:1\u20138:18. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr\nInformatik, 2022.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "10": {
179
+ "title": "On the complexity of problems on tree-structured graphs.",
180
+ "author": "Hans L. Bodlaender, Carla Groenland, Hugo Jacob, Marcin Pilipczuk, and Michal\nPilipczuk.",
181
+ "venue": "In Holger Dell and Jesper Nederlof, editors, 17th International\nSymposium on Parameterized and Exact Computation, IPEC 2022, volume 249 of\nLIPIcs, pages 6:1\u20136:17. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr\nInformatik, 2022.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "11": {
187
+ "title": "Parameterized problems complete for nondeterministic FPT time and\nlogarithmic space.",
188
+ "author": "Hans L. Bodlaender, Carla Groenland, Jesper Nederlof, and C\u00e9line M. F.\nSwennenhuis.",
189
+ "venue": "In Proceedings 62nd IEEE Annual Symposium on Foundations of\nComputer Science, FOCS 2021, pages 193\u2013204, 2021.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "12": {
195
+ "title": "Parameterized complexity of binary CSP: vertex cover, treedepth,\nand related parameters.",
196
+ "author": "Hans L. Bodlaender, Carla Groenland, and Michal Pilipczuk.",
197
+ "venue": "In Kousha Etessami, Uriel Feige, and Gabriele Puppis, editors, 50th International Colloquium on Automata, Languages, and Programming,\nICALP, volume 261 of LIPIcs, pages 27:1\u201327:20. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2023.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "13": {
203
+ "title": "The parameterised complexity of integer multicommodity flow.",
204
+ "author": "Hans L. Bodlaender, Isja Mannens, Jelle J. Oostveen, Sukanya Pandey, and\nErik Jan van Leeuwen.",
205
+ "venue": "In Neeldhara Misra and Magnus Wahlstr\u00f6m, editors, 18th\nInternational Symposium on Parameterized and Exact Computation, IPEC 2023,\nSeptember 6-8, 2023, Amsterdam, The Netherlands, volume 285 of LIPIcs,\npages 6:1\u20136:19. Schloss Dagstuhl - Leibniz-Zentrum f\u00fcr Informatik,\n2023.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "14": {
211
+ "title": "Parameterized complexity of scheduling chains of jobs with delays.",
212
+ "author": "Hans L. Bodlaender and Marieke van der Wegen.",
213
+ "venue": "In Proceedings 15th International Symposium on Parameterized and\nExact Computation, IPEC 2020, pages 4:1\u20134:15, 2020.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "15": {
219
+ "title": "The parameterized space complexity of model-checking bounded variable\nfirst-order logic.",
220
+ "author": "Yijia Chen, Michael Elberfeld, and Moritz M\u00fcller.",
221
+ "venue": "Logical Methods in Computer Science, 15(3), 2019.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "16": {
227
+ "title": "The complexity of theorem-proving procedures.",
228
+ "author": "Stephen A. Cook.",
229
+ "venue": "In Michael A. Harrison, Ranan B. Banerji, and Jeffrey D. Ullman,\neditors, Proceedings of the 3rd Annual ACM Symposium on Theory of\nComputing, STOC 1971, pages 151\u2013158. ACM, 1971.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "17": {
235
+ "title": "Parameterized Algorithms.",
236
+ "author": "Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, D\u00e1niel\nMarx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh.",
237
+ "venue": "Springer, 2015.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "18": {
243
+ "title": "On the parameterized complexity of the perfect phylogeny problem.",
244
+ "author": "Jorke M. de Vlas.",
245
+ "venue": "arXiv, abs/2305.02800, 2023.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "19": {
251
+ "title": "Fixed-parameter tractability and completeness I: Basic results.",
252
+ "author": "Rodney G. Downey and Michael R. Fellows.",
253
+ "venue": "SIAM Journal on Computing, 24(4):873\u2013921, 1995.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "20": {
259
+ "title": "Fixed-parameter tractability and completeness II: On completeness\nfor W[1].",
260
+ "author": "Rodney G. Downey and Michael R. Fellows.",
261
+ "venue": "Theoretical Computer Science, 141(1&2):109\u2013131, 1995.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "21": {
267
+ "title": "Parameterized Complexity.",
268
+ "author": "Rodney G. Downey and Michael R. Fellows.",
269
+ "venue": "Springer, 1999.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "22": {
275
+ "title": "Fundamentals of Parameterized Complexity.",
276
+ "author": "Rodney G. Downey and Michael R. Fellows.",
277
+ "venue": "Texts in Computer Science. Springer, 2013.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "23": {
283
+ "title": "On the space and circuit complexity of parameterized problems:\nClasses and completeness.",
284
+ "author": "Michael Elberfeld, Christoph Stockhusen, and Till Tantau.",
285
+ "venue": "Algorithmica, 71(3):661\u2013701, 2015.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "24": {
291
+ "title": "On a problem of Sidon in additive number theory, and on some\nrelated problems.",
292
+ "author": "P. Erd\u0151s and P. Tur\u00e1n.",
293
+ "venue": "Journal of the London Mathematical Society, s1-16(4):212\u2013215,\n1941.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "25": {
299
+ "title": "On the complexity of some colorful problems parameterized by\ntreewidth.",
300
+ "author": "Michael R. Fellows, Fedor V. Fomin, Daniel Lokshtanov, Frances A. Rosamond,\nSaket Saurabh, Stefan Szeider, and Carsten Thomassen.",
301
+ "venue": "Information and Compututation, 209(2):143\u2013153, 2011.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "26": {
307
+ "title": "Parameterized Complexity Theory.",
308
+ "author": "J\u00f6rg Flum and Martin Grohe.",
309
+ "venue": "Springer, 2006.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "27": {
315
+ "title": "Exploring the gap between treedepth and vertex cover through vertex\nintegrity.",
316
+ "author": "Tatsuya Gima, Tesshu Hanaka, Masashi Kiyomi, Yasuaki Kobayashi, and Yota\nOtachi.",
317
+ "venue": "Theoretical Computer Science, 918:60\u201376, 2022.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "28": {
323
+ "title": "Generalized coloring for tree-like graphs.",
324
+ "author": "Klaus Jansen and Petra Scheffler.",
325
+ "venue": "Discrete Applied Mathematics, 75(2):135\u2013155, 1997.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "29": {
331
+ "title": "Classes of pebble games and complete problems.",
332
+ "author": "Takumi Kasai, Akeo Adachi, and Shigeki Iwata.",
333
+ "venue": "SIAM Journal on Computing, 8(4):574\u2013586, 1979.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "30": {
339
+ "title": "On space efficiency of algorithms working on structural\ndecompositions of graphs.",
340
+ "author": "Michal Pilipczuk and Marcin Wrochna.",
341
+ "venue": "ACM Transactions on Computation Theory, 9(4):18:1\u201318:36,\n2018.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "31": {
347
+ "title": "Tree-size bounded alternation.",
348
+ "author": "Walter L. Ruzzo.",
349
+ "venue": "Journal of Computer and System Sciences, 21(2):218\u2013235, 1980.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "32": {
355
+ "title": "Not so easy problems for tree decomposable graphs.",
356
+ "author": "Stefan Szeider.",
357
+ "venue": "In Advances in Discrete Mathematics and Applications: Mysore,\n2008, volume 13 of Ramanujan Math. Soc. Lect. Notes Ser., pages\n179\u2013190. Ramanujan Math. Soc., Mysore, 2010.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "33": {
363
+ "title": "Properties that characterize LOGCFL.",
364
+ "author": "H. Venkateswaran.",
365
+ "venue": "Journal of Computer and System Sciences, 43(2):380\u2013404, 1991.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "34": {
371
+ "title": "On tree-partition-width.",
372
+ "author": "David R. Wood.",
373
+ "venue": "European Journal of Combinatorics, 30(5):1245\u20131253, 2009.",
374
+ "url": null
375
+ }
376
+ }
377
+ ],
378
+ "url": "http://arxiv.org/html/2206.11828v5"
379
+ }
20240119/2208.06551v4.json ADDED
20240119/2208.09424v3.json ADDED
20240119/2209.00315v3.json ADDED
20240119/2210.02428v3.json ADDED
20240119/2210.08302v2.json ADDED
@@ -0,0 +1,518 @@
1
+ {
2
+ "title": "Projective Integration Methods in the Runge-Kutta Framework and the Extension to Adaptivity in Time",
3
+ "abstract": "Projective Integration methods are explicit time integration schemes for stiff ODEs with large spectral gaps. In this paper, we show that all existing Projective Integration methods can be written as Runge-Kutta methods with an extended Butcher tableau including many stages. We prove consistency and order conditions of the Projective Integration methods using the Runge-Kutta framework. Spatially adaptive Projective Integration methods are included via partitioned Runge-Kutta methods. New time adaptive Projective Integration schemes are derived via embedded Runge-Kutta methods and step size variation while their accuracy, stability, convergence, and error estimators are investigated numerically.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Many problems in science and engineering include the solution of stiff ODEs [6 ###reference_6###] of the following form\nwhere is the time variable, is the unknown solution, and the right hand side function is sufficiently smooth and scales with , where the stiffness is governed by a parameter . Small values of thus lead to a stiff problem (1 ###reference_###) with fast modes requiring small time steps of explicit time integration schemes. While the solution of (1 ###reference_###) is a function of time, note that the right-hand side function might originate from the discretization of spatial derivatives, in which case the left-hand side would be a partial derivative. For the rest of this paper, we assume autonomous ODEs and write for conciseness.\nParticularly difficult test cases include a scale separation with fast and slow modes present in the solution , occurring in many physical applications [1 ###reference_1###, 6 ###reference_6###, 11 ###reference_11###]. This is the case if the eigenvalues of the right-hand side derivative are given by two clusters around and . Even though the slow modes (represented by the modes clustered around ) might be most interesting, e.g., in fluid dynamics [1 ###reference_1###, 11 ###reference_11###, 14 ###reference_14###], the existing fast modes (represented by the modes clustered around ) need to be resolved in a stable way. Explicit time stepping methods thus result in a severe time step constraint for stable integration of the fast modes. Implicit time stepping methods require a possibly expensive non-linear solver.\nRunge-Kutta (RK) methods are well-known methods for the solution of ODEs. Standard explicit RK methods are not suitable to solve stiff ODEs, due to the time step constraint. 
Modified RK methods, like Implicit-Explicit (IMEX) RK methods, are used if the right-hand side function of the ODE can be split into a stiff and a non-stiff term, which can be solved separately in an efficient way [23 ###reference_23###]. A similar splitting is necessary to use Multirate Runge-Kutta (MRK) methods [4 ###reference_4###]. Exponential Runge-Kutta (ERK) methods can be applied for stiff ODEs but have difficulties with scale separation [7 ###reference_7###].\nProjective Integration (PI) is an explicit numerical method for the time integration of stiff ODEs that include a scale separation [3 ###reference_3###, 16 ###reference_16###, 20 ###reference_20###]. Its main idea is to first damp the fast modes using a few small time steps of an inner integrator and then extrapolate using a large time step for the long time behavior. For stability, the small (inner) time step size is of the order of the stiffness parameter and the large (outer) time step size is typically chosen according to a standard CFL condition that corresponds to an underlying limiting equation for , which might not be explicitly given.\nPI schemes have been used successfully for many applications, including multiscale ODEs [19 ###reference_19###], power systems [26 ###reference_26###], stochastic DNA models [2 ###reference_2###], shallow water flows [1 ###reference_1###], and kinetic equations [16 ###reference_16###, 20 ###reference_20###, 11 ###reference_11###]. The method has been extended to higher order using RK methods for the outer integrator [18 ###reference_18###, 15 ###reference_15###], which resulted in Projective Runge-Kutta (PRK) methods. Using more projective levels results in the Telescopic Projective Integration (TPI) method for multiple eigenvalue clusters [22 ###reference_22###, 21 ###reference_21###]. 
A spatially adaptive version was first derived in [12 ###reference_12###].\nIn this paper we show that all of the aforementioned PI methods can be written as RK methods and derive the corresponding Butcher tableaus. The inner steps and the subsequent extrapolation step translate to many stages of the corresponding RK method and this generalizes to the higher-order, telescopic, and spatially adaptive versions of the PI method. Writing PI methods in the framework of standard RK schemes allows to make use of the vast number of mathematical tools developed for RK methods throughout the past decades. This includes a simple assessment of consistency and order conditions up to higher order as well as accuracy and numerical stability. Using the RK setup we derive time adaptive PI schemes with the help of embedded RK methods. We compare the new embedded PI methods with other time adaptive methods that are obtained via step size variation, which is a variant of Richardson extrapolation, and on-the-fly error estimation [17 ###reference_17###]. 
Interestingly, we show that the original on-the-fly error estimation from [17 ###reference_17###] leads to an unstable scheme and we derive new versions of the schemes using the RK framework for which we prove stability for stiff systems with spectral gaps.\nLastly, we apply the rewritten and newly derived methods to a two-scale model problem and investigate convergence and error estimators to confirm the analytical results obtained before.\nWhile this paper focuses mostly on the method derivation and analysis, it is the necessary preparation for further extensions, more detailed stability analysis, and additional numerical applications, for example using the tools for stability analysis of RK schemes from [9 ###reference_9###] or test cases from fluid dynamics.\nThe rest of this paper is organized as follows: In Section 2 ###reference_###, we briefly recall the general form of an RK method before deriving the corresponding RK formulas for standard PI methods: the first order Projective Forward Euler (PFE) method, the higher order Projective Runge-Kutta (PRK) method, and the telescopic Projective Forward Euler (TPFE) method. Spatially Adaptive Projective Integration (SAPI) methods are written in the framework of partitioned RK methods in Section 3 ###reference_###. Time adaptive Projective Integration (TAPI) methods are derived based on embedded RK methods and step size variation using error estimators in Section 4 ###reference_###. We use the techniques developed in the first section to rewrite the on-the-fly error estimation schemes as an RK scheme and develop a new version of the scheme in Section 5 ###reference_###. We perform accuracy and stability analysis in Section 6 ###reference_### and numerically assess the convergence properties as well as the error estimators in Section 7 ###reference_###. The paper ends with a short conclusion and future work."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Projective Integration as Runge-Kutta",
15
+ "text": "In this section, we recall standard Runge-Kutta (RK) schemes and write various Projective Integration (PI) methods as RK schemes, including the corresponding Butcher tableaus: the first order Projective Forward Euler (PFE) method, the higher order Projective Runge-Kutta (PRK) method, and the telescopic Projective Forward Euler (TPFE) method."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Runge-Kutta schemes",
21
+ "text": "A standard Runge-Kutta scheme with stages computes the intermediate function evaluations , for at every stage according to\nand combines those to a new time iterate\nIn this setting, the are the intermediate nodes, the are the weights, and the are the coefficients for the computation of the intermediate function evaluations .\nRunge-Kutta schemes are typically represented in a Butcher tableau\nwith nodes vector , weight vector and coefficient matrix . In this paper, we will exclusively consider explicit schemes for which is a lower triangular matrix. Additionally, we note the following conditions for standard properties of explicit Runge-Kutta schemes:\nconsistency:\nfirst-order accuracy:\nsecond-order accuracy:"
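To make the tableau notation concrete, here is a minimal Python sketch (not part of the paper) of one explicit RK step for an autonomous scalar ODE; the function name `rk_step` and the test problem y' = -y are illustrative choices, and the classical RK4 tableau is used as an example:

```python
def rk_step(f, y, h, A, b):
    """One explicit Runge-Kutta step for the autonomous ODE y' = f(y),
    given Butcher coefficients A (strictly lower triangular) and weights b."""
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))  # intermediate state
        k.append(f(yi))                                     # stage slope
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical RK4 tableau as an example
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c = [0, 0.5, 0.5, 1]

assert abs(sum(b) - 1) < 1e-14                                   # first-order condition
assert all(abs(sum(row) - ci) < 1e-14 for row, ci in zip(A, c))  # consistency c_i = sum_j a_ij

y1 = rk_step(lambda y: -y, 1.0, 0.1, A, b)  # one step of y' = -y, y(0) = 1
```

The two assertions check exactly the consistency and first-order conditions listed above for this tableau.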
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Projective Forward Euler as Runge-Kutta",
27
+ "text": "The simplest form of a PI method is the Projective Forward Euler method (PFE) [3 ###reference_3###, 20 ###reference_20###]. It consists of small forward Euler time steps of size and one subsequent extrapolation step of the remaining time step . The method is written using and\nTo write the PFE in the standard form of a RK scheme, we use the following notation\nso that the stages , for are the slopes or the right-hand side evaluations at the intermediate points, i.e.\nWe note that the extrapolation step in fact reads\nWhile this looks significantly simpler than the previous formula for the extrapolation step, still function evaluations for with are necessary.\nIn a similar way, we can write the extrapolation step as follows\nWe use the notation in the following. Note that stiff problems typically require , such that .\nThe Butcher tableau of the corresponding RK scheme then reads\nFor a concise notation of the Butcher tableau in the next sections, we use the following definitions\nsuch that the Butcher tableau can be written in concise form as\nFor consistency and first order accuracy of the PFE schemes, we refer to the later theorem 1 ###reference_rem1###, which proves this for PRK schemes as a superset of PFE methods."
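As a hedged illustration of the PFE idea (our own sketch with assumed names `pfe_step`, `dt_inner`, `dt_outer`, not code from the paper): take K+1 inner forward Euler steps of the small step size, then extrapolate the chord slope of the last inner step over the remaining time:

```python
import math

def pfe_step(f, y, dt_inner, K, dt_outer):
    """One Projective Forward Euler step for y' = f(y): K+1 inner forward
    Euler steps of size dt_inner damp the fast modes, then the chord slope
    of the last inner step is extrapolated over the remaining time."""
    ys = [y]
    for _ in range(K + 1):
        ys.append(ys[-1] + dt_inner * f(ys[-1]))
    slope = (ys[-1] - ys[-2]) / dt_inner
    return ys[-1] + (dt_outer - (K + 1) * dt_inner) * slope

# non-stiff test problem y' = -y (exact solution exp(-t)): first order in dt_outer
y1 = pfe_step(lambda y: -y, 1.0, 0.01, 1, 0.1)
# fast mode with rate 1000: dt_inner of order 1/1000 damps it before extrapolating
y2 = pfe_step(lambda y: -1000.0 * y, 1.0, 0.001, 2, 0.1)
```

The second call illustrates the stability requirement mentioned above: the inner step size is chosen of the order of the stiffness parameter so the fast mode is damped before the extrapolation step.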
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Projective Runge-Kutta as Runge-Kutta",
33
+ "text": "Originating from the first-order accurate PFE, higher-order schemes can be constructed using a standard stage RK method as outer integrator with usual parameters , , and . The result is a Projective Runge-Kutta scheme (PRK) [18 ###reference_18###, 14 ###reference_14###], consisting of outer stages, which each include small inner time steps of size and one subsequent extrapolation step of the remaining time step .\nThe first stage\u2019s slopes are computed as\nand the other stages are subsequently given by\nThe new time step is then extrapolated to\nTo write the PRK in the standard RK form, we use the following notation\nfor inner iterations . The new time step is given by\nAdapting the notation of the PFE method 2.2 ###reference_###, the PRK method can then be written as an RK method in a Butcher tableau in the following way\nwith\nwhere the last column contains the entry\nand the weights are given by\nConsistency and order conditions of a PRK method can be easily derived using the representation as an RK method. This is one example of the advantage of writing the existing PI schemes as RK schemes, since the rewriting allows us to use the consistency conditions of RK schemes for PI schemes. This is shown in the next theorem.\n(Consistency and order conditions)\nA PRK method is consistent and at least first order accurate.\nThe method recovers second order accuracy of the outer RK method in the case of vanishing .\nWe prove Theorem 1 ###reference_rem1### up to second order by showing that the RK version of the method fulfills the consistency and order conditions.\nConsistency demands . We check the condition for each outer stage. For the first outer stages , i.e. 
, we obtain for each inner stage\nFor the later outer stages , we obtain for each inner stage\nwhere we have used that the outer RK scheme is consistent and fulfills .\nFirst order accuracy demands and this can be checked by\nwhere we used that the outer RK scheme is first order accurate and fulfills .\nSecond order accuracy demands and we compute\nWhere we have used that the outer RK scheme is second order accurate and therefore fulfills as well as .\nThe scheme is formally only second order if . This is due to the first order inner integrator. However, since by potentially several orders of magnitude, the error of the inner integrator can often be neglected. In the limit we obtain full second order.\n\u220e\nWe note that due to the first order accuracy of the inner integrator, no higher order can formally be obtained in the PRK method. However, the method can yield higher order in the limit of vanishing if higher order inner integrators are used.\nFrom the Butcher tableau representation, it can be shown that the original outer RK scheme is recovered (with some redundant zero rows) in the limiting case of vanishing .\n(PRK limit)\nIn the limit of vanishing and constant inner steps , the PRK scheme reduces to its outer RK scheme.\nWe show that in the limit of , the PRK Butcher tableau collapses to the underlying outer RK Butcher tableau\nFirst we show that all submatrices in have rank one, because of\nFurthermore, we obtain\nand\nTherefore, the inner stages can be eliminated and the Butcher tableau of the PRK scheme collapses to\n\u220e\n(Multirate Runge-Kutta methods)\nIf allows for a splitting into a fast/stiff part and a slow/non-stiff part, e.g. , tailored RK methods can be applied that can be written as a multirate generalized additive Runge-Kutta (mGARK) method with an extended Butcher tableau [4 ###reference_4###]. 
For an autonomous system, the Butcher tableau reads\nThe work of this section can readily be extended to mGARK methods in which the stiff integrator uses a PI scheme in its RK form. We do not pursue this further as the main advantage of PI schemes is to be applicable without explicit splitting into a stiff and a non-stiff part. However, it might be an interesting option to investigate systems with multiple relaxation rates for which some can be resolved by a splitting approach and others are dealt with non-intrusively using PI.\nIn the appendix A ###reference_###, we give two explicit examples of fourth order PRK schemes that can easily be derived using the notation in this section, for two different numbers of inner time steps. Both methods will be analyzed with respect to stability and numerical convergence in sections 6.2 ###reference_### and 7 ###reference_###."
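As an illustration of the PRK construction, here is a simplified sketch (our own, not the exact scheme from the paper: each outer stage is preceded by K+1 inner forward Euler steps and the chord slope of the last inner step serves as the stage slope). For dt_inner tending to zero it reduces to the outer RK scheme, consistent with the limit result discussed above:

```python
import math

def prk_step(f, y, dt_inner, K, dt_outer, A, b):
    """Sketch of a Projective Runge-Kutta step: each outer stage damps with
    K+1 inner forward Euler steps and uses the chord slope of the last inner
    step as the stage slope (simplified relative to the full PRK scheme)."""
    slopes = []
    for i in range(len(b)):
        # outer stage state from the previously computed stage slopes
        z = y + dt_outer * sum(A[i][j] * slopes[j] for j in range(i))
        for _ in range(K):               # K damping inner steps
            z = z + dt_inner * f(z)
        z_next = z + dt_inner * f(z)     # one more inner step for the chord
        slopes.append((z_next - z) / dt_inner)
    return y + dt_outer * sum(bi * s for bi, s in zip(b, slopes))

# outer midpoint (RK2) tableau; for small dt_inner the step is second order
A2, b2 = [[0, 0], [0.5, 0]], [0.0, 1.0]
y1 = prk_step(lambda y: -y, 1.0, 1e-5, 1, 0.1, A2, b2)
```

With dt_inner several orders of magnitude below dt_outer, the perturbation of the first-order inner integrator is negligible and the second order of the outer midpoint rule is recovered, mirroring Theorem 1.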
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Telescopic Projective Forward Euler as Runge-Kutta",
+ "text": "If the eigenvalue spectrum of the right-hand side function in (1 ###reference_###) features at least three distinct clusters, Telescopic Projective Integration (TPI) can be used for an acceleration of the method. TPI is a consistent extension of PI in the sense that it uses a nested PI approach [22 ###reference_22###]. It consists of several inner integration layers with different time step sizes that need to be chosen to achieve stability.\nFor simplicity, we focus here only on the case of two inner integrators and on the Forward Euler scheme (FE) as inner and outer integrator. The extension towards a Telescopic Projective Runge-Kutta (TPRK) method is then straightforward. In the TPRK scheme, a RK scheme is used as the outermost integrator and PFE is used on all the inner levels, where not accuracy but stability is the limiting factor.\nBased on [22 ###reference_22###], we write the TPFE scheme as follows\nwhere the time step size on each layer is denoted by and the remaining extrapolation (or projective) time step size is . Once the time step size of the innermost layer is chosen, the others satisfy\nIn the Butcher tableau, the TPFE is written in the following way for the two inner layers and outer layer using\nwith the new notation\nfor weights\n(Consistency and order conditions)\nThe TPFE method is consistent and first order accurate.\nConsistency demands .\nFor inner stages , and outer stages , we thus compute\nwhere we have used that the outer time step ratio is given by .\nFirst order accuracy demands and we analogously compute\nwhere we have used that due to Equation (52 ###reference_###).\n\u220e\nNote that the TPFE scheme cannot be expected to be more than first order, because the outer scheme is a Forward Euler scheme. The extension to Telescopic Projective Runge-Kutta (TPRK) methods is straightforward but left out for conciseness."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Space Adaptive Projective Integration as partitioned Runge-Kutta",
+ "text": "In this section, we consider the extension to spatial adaptivity, outlined in [12 ###reference_12###]. To that end, we consider the case that (1 ###reference_###) is the result of a spatial discretization on a grid in physical space. The vector of unknowns is then a composition of variables at distinct grid points in physical space. Spatial adaptivity is beneficial if several scales are involved. This means that some variables, e.g., on a subdomain of the physical grid, relax faster than others. We assume that the model is autonomous and consists of a stiff part for and a non-stiff part for\nA partitioned Runge-Kutta scheme for this model is given by the stages\nThe solution at the next step is then assembled from the stage values as\nA partitioned Runge-Kutta scheme is typically written using two Butcher tableaus, one for the stiff and one for the non-stiff region, respectively."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Space Adaptive Forward Euler as partitioned Runge-Kutta",
+ "text": "As a preparation for the space adaptive PI methods, we first consider the Space Adaptive Forward Euler (SAFE) method from [12 ###reference_12###] as illustrated by Figure 1 ###reference_###. The method uses a Forward Euler scheme with small step size for the variables in the stiff domain, and a Forward Euler scheme with large time step size for the variables in the non-stiff domain. The boundary cells are computed by linear interpolation [12 ###reference_12###].\n###figure_1### This scheme can be written as a partitioned Runge-Kutta method with stages using the two Butcher tableaus\nwhere the left Butcher tableau in (65 ###reference_###) is for the Forward Euler scheme with small step size and the right Butcher tableau in (65 ###reference_###) is for the Forward Euler scheme with large step size including the linear interpolation for the boundary cells.\nIt is evident from the Butcher tableaus that the method is first order accurate."
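A minimal sketch of one SAFE step on a component-partitioned system may help to fix the idea: the stiff components take m small FE steps while the non-stiff components take one large FE step, and the non-stiff values seen by the stiff update are linearly interpolated in time, as described above. All names and the diagonal demo system are our own illustrative choices.

```python
def safe_step(f, u, dt, stiff, m):
    """One Space Adaptive Forward Euler (SAFE) step -- a sketch.
    f: right-hand side mapping a state list to derivatives; u: state;
    stiff: boolean mask of stiff components; m: number of inner FE steps."""
    fu = f(u)
    u_big = [ui + dt * fi for ui, fi in zip(u, fu)]    # one large FE step
    w = list(u)
    h = dt / m
    for k in range(m):
        theta = k / m                # inner time as fraction of the big step
        for i in range(len(u)):
            if not stiff[i]:
                # non-stiff neighbours: linear interpolation in time
                w[i] = (1 - theta) * u[i] + theta * u_big[i]
        fw = f(w)
        # small FE step; only the stiff components of this update are kept
        w = [wi + h * fi for wi, fi in zip(w, fw)]
    return [w[i] if stiff[i] else u_big[i] for i in range(len(u))]

# demo: a stiff component (rate 100) next to a slow one (rate 1); a single
# FE step with dt = 0.04 would be unstable for the stiff component
f = lambda u: [-100.0 * u[0], -1.0 * u[1]]
u1 = safe_step(f, [1.0, 1.0], dt=0.04, stiff=[True, False], m=10)
```

The stiff component is damped by the inner steps, while the slow component receives exactly the single large FE update.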
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Space Adaptive Projective Forward Euler as partitioned Runge-Kutta",
+ "text": "The Space Adaptive Projective Forward Euler (SAPFE) method from [12 ###reference_12###] is illustrated by Figure 2 ###reference_###. The method uses a PFE method for the variables in the stiff domain, and a Forward Euler method with large time step size for the variables in the non-stiff domain. Boundary values for the first inner steps of the PFE method are again obtained by interpolation [12 ###reference_12###].\n###figure_2### Combining the results of Section 2.2 ###reference_### and Section 3.1 ###reference_###, this scheme can be written as a partitioned RK method with stages using the two Butcher tableaus\nwhere the left Butcher tableau in (66 ###reference_###) is for the PFE scheme with inner step size and outer step size and the right Butcher tableau in (66 ###reference_###) is for the Forward Euler scheme with large step size including the linear interpolation for the boundary cells.\n(Space Adaptive Projective Runge-Kutta as partitioned RK)\nIn the same fashion as the first order PFE method in Section 2.2 ###reference_### is extended to higher-order PRK methods in Section 2.3 ###reference_###, higher-order space adaptive Projective Runge-Kutta schemes can be derived and written in the form of a RK scheme as an extension of the work in this section. However, we omit this here for conciseness and leave specific applications for future work.\nNote also that the consistency and accuracy results can readily be carried over from Section 2.3 ###reference_###."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Space Adaptive Projective Projective Forward Euler as partitioned Runge-Kutta",
+ "text": "The Space Adaptive Projective Projective Forward Euler (SAPPFE) method from [12 ###reference_12###] is illustrated by Figure 3 ###reference_###. It assumes multiple stiff regions with different severity of time step constraints. The method uses a PFE method in the stiff part of the domain and another PFE method in the semi-stiff part of the domain [12 ###reference_12###].\n###figure_3### This scheme can be written as a partitioned RK method with stages using two Butcher tableaus. The methods first require an ordering of the respective time steps. We use and and consider the case as one example. In this case, we obtain:\nThis fixes the order of the respective stages. The Butcher tableaus then read:\nwhere the left Butcher tableau in (67 ###reference_###) is for the PFE scheme with inner step size and outer step size including the linear interpolation for the boundary cells and the right Butcher tableau in (67 ###reference_###) is for the PFE scheme with inner step size and outer step size including the linear interpolation for the boundary cells. Note that the right Butcher tableau represents a reducible RK method, due to a full zero row in the matrix and corresponding zero entry. However, we note that the values of the right part are needed for the update of the left part and vice versa, see the red dots in Figure 3 ###reference_###. The method as a whole can therefore not be reduced to fewer stages.\nConsistency and accuracy of this method again follow domain-wise from the results in section 2.3 ###reference_###. Note that the extension to more stages or higher-order schemes is straightforward and omitted here for conciseness.\n(On the equivalence of additive and component partitioning)\nWe note that the case of component partitioning that is considered in this section is also included in the case of additive partitioning via a reformulation of the ODE to be solved, see [4 ###reference_4###]. 
In that sense it is straightforward to extend the results of this section to additively partitioned systems."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Time Adaptive Projective Integration",
+ "text": "In this section, we derive time adaptive PI methods in three different ways, which are all facilitated by the representation as a RK method derived in section 2 ###reference_###. The novelty of these time adaptive PI methods is that the error estimation of an adaptive time stepping method is based on internal stages of a Projective Integration scheme, facilitated by the formulation as a RK scheme. On the one hand, this allows for a lot of flexibility, e.g., by comparing inner or outer stages with different time step sizes. On the other hand, it allows us to redefine existing methods such as embedded RK methods or Richardson extrapolation, as will be discussed in the subsections below."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Time Adaptive Projective Integration from embedded Runge-Kutta",
+ "text": "Embedded RK schemes use a combination of two schemes, one low order and one higher order [5 ###reference_5###]. This leads to an extended Butcher tableau of the following form\nwhere the first row containing denotes the higher order RK scheme and the second row containing denotes the low order RK scheme. Note that PI schemes will only formally be second or higher order for vanishing .\nAs the extended Butcher tableau is nothing else than a combination of two standard RK tableaus with the same and , but two different and , the extension to a PRK scheme written in this form is straightforward and reads\nwhere the entries , , and are defined in Section 2.3 ###reference_### and is the corresponding weight vector of the higher order RK method.\nEmbedded RK schemes allow for a cheap error estimator, as the low order and the high order method use the same nodes with the same coefficients and the difference between the high order solution and the low order solution is given by\nwhere contains the function evaluation at time .\nFor an embedded PRK method with outer stages and inner steps, this leads to\nwhich is a consistent extension of the standard embedded RK case combined with the extrapolation step of PI schemes.\nAs the simplest example of an embedded RK method with only 2 stages, we consider the combination of the Heun method with the Forward Euler method:\nIn the example above, the standard error estimator is\nThis embedded method can make use of projective integration by substituting the respective projective versions of the schemes. In the simple case above this means substituting the Projective Heun method for the Heun method and the Projective Forward Euler method for the Forward Euler method. 
Following the explanation for general PRK schemes in Section 2.3 ###reference_### this leads to the extended Butcher tableau:\nwhere the respective entries are defined in Section 2.3 ###reference_###.\nUsing , the parameters for the Projective Heun method and the Projective Forward Euler method yield the following embedded Runge-Kutta scheme, which we call the Embedded Projective Heun Projective Forward Euler scheme (EPHPFE):\nThis is an embedded scheme, which formally results in second order for vanishing .\nThe error estimator is then simply obtained using\nand can be seen as an example of Equation (73 ###reference_###).\nThe corrected scheme then has the following Butcher tableau\nThe error estimator can also be used to subsequently adjust the next time step based on some control strategy, but this is out of the scope of this work.\n(on nodes )\nIn a PRK scheme with higher-order outer integrator nodes with can occur, see for example (77 ###reference_###). This is unusual in view of standard RK schemes. However, the additional stages with originate from the inner integrator iterating after a usual outer iteration to damp the fast modes in the solution. This yields the desired stability properties also discussed in section 6.2 ###reference_###. For more details on PRK schemes, we refer to the literature [14 ###reference_14###, 15 ###reference_15###]."
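The non-projective starting point, the Heun/Forward Euler pair with its cheap embedded error estimate, can be sketched as follows; in the EPHPFE scheme each of the two schemes is replaced by its projective counterpart as described above. The scalar interface is an illustrative simplification of ours.

```python
def embedded_heun_fe(f, y, dt):
    """One step of the Heun / Forward Euler embedded pair (scalar sketch).
    Returns the second order solution and the embedded error estimate."""
    k1 = f(y)
    k2 = f(y + dt * k1)
    y_high = y + 0.5 * dt * (k1 + k2)   # Heun, order 2
    y_low = y + dt * k1                 # Forward Euler, order 1
    return y_high, y_high - y_low       # estimate: dt/2 * (k2 - k1)

# demo on the Dahlquist problem y' = -y
y_high, err = embedded_heun_fe(lambda y: -y, 1.0, 0.1)
```

Because both schemes share the stages k1 and k2, the error estimate costs no extra function evaluations.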
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Time Adaptive Projective Integration from outer step size variation",
+ "text": "Another way of deriving an error estimator for adaptive time stepping is via computing the solution again with a smaller step size , for , also called Richardson extrapolation [24 ###reference_24###]. Assuming a time stepping scheme of order , the solution at time using steps of size is deviating from the exact solution according to\nUsing two different solutions for and , we can eliminate the error constant and obtain the corrected solution\nIn the same way, the error estimator can be computed as\nAs one known example, we consider the case of a Forward Euler scheme with to obtain and with to obtain . Writing both methods as a two-stage method, this leads to the following embedded scheme:\nfor which the error estimate reads\nThe corrected solution is precisely the solution obtained by the explicit midpoint scheme:\nThis scheme can be cast as a projective scheme using the approach for general RK methods from Section 2.3 ###reference_### or for embedded methods from Section 4.1 ###reference_###. This includes substituting a Projective Forward Euler method for both Forward Euler methods, once with outer time step and once with outer time step . The approach is illustrated in Figure 4 ###reference_### for .\n###figure_4### The corresponding projective embedded scheme reads\nThis is an embedded scheme, which formally results in second order for vanishing . 
The error estimate is\nNote the consistent similarity with the original error estimator in Equation (86 ###reference_###) and the similarity to the embedded scheme in Section 4.1 ###reference_###, which uses the same weights and .\nThe Butcher tableau for the new corrected method, which we call Projective Outer Step Size Variation (POSV), reads\nFor the stability analysis and numerical convergence of this POSV scheme, see sections 6.2 ###reference_### and 7 ###reference_###.\nWhile a generalization to other PI methods based on different outer RK methods and to arbitrary number of inner steps is possible, we omit this here for conciseness."
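The outer step size variation for a plain Forward Euler step can be sketched as below: one step of size dt is compared with two steps of size dt/2, the difference serves as the error estimate, and the Richardson-corrected value coincides with the explicit midpoint step, as stated above. Function and variable names are ours.

```python
def richardson_fe(f, y, dt):
    """Error estimation by outer step size variation for Forward Euler
    (sketch): one step of size dt versus two steps of size dt/2."""
    y_coarse = y + dt * f(y)                 # one FE step
    y_half = y + 0.5 * dt * f(y)
    y_fine = y_half + 0.5 * dt * f(y_half)   # two FE half steps
    err = y_fine - y_coarse                  # estimate; 2^p - 1 = 1 for p = 1
    y_corr = y_fine + err                    # = 2*y_fine - y_coarse
    return y_corr, err

# demo on y' = -y: the corrected value equals the explicit midpoint step
y_corr, err = richardson_fe(lambda y: -y, 1.0, 0.2)
```

For the demo, the corrected value 2*y_fine - y_coarse reproduces y + dt*f(y + dt/2*f(y)), the explicit midpoint step.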
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Time Adaptive Projective Integration from inner step size variation",
+ "text": "Another possibility to estimate the error of a PI scheme is to change the step size of the inner time step. In principle, all inner time steps could be performed with a smaller inner time step size. However, it is also sufficient to only apply this to the last inner time step. Figure 5 ###reference_### illustrates the idea.\n###figure_5### In order to derive an embedded scheme and the corresponding error estimator in the framework of RK methods, we start from the RK version of the single schemes, see Section 2.2 ###reference_###. The coarse scheme is a standard PFE method. The fine scheme is an adapted PFE method that uses two smaller inner time steps before the extrapolation step, see Figure 5 ###reference_###. For simplicity, we use , but the results can readily be extended to arbitrary . This leads to the following Butcher tableaus for both schemes\nObviously, the methods do not compute the same stages. This problem can be mitigated by adding a (redundant) additional stage in the coarse method. This leads to the new embedded scheme:\nThe error estimator at is obtained using\nUsing Richardson extrapolation, the error at can be estimated, too. This error estimator can then be used to correct the coarse solution at . In analogy to the Richardson extrapolation in Section 4.2 ###reference_###, we obtain\nPerforming the extrapolation step thereafter as follows\nyields the following corrected RK scheme, which we call Projective Inner Step Size Variation (PISV),\nNote how the solution at time is used as a corrector to improve the solution.\nFor the stability analysis and numerical convergence of this PISV scheme, we again refer to sections 6.2 ###reference_### and 7 ###reference_### below.\nWhile presenting an example using the first order PFE scheme here, the idea can be extended to higher-order of the outer integrator using the technique explained in Section 2.3 ###reference_###. 
We leave the application of higher-order schemes for future work.\n(Space and time adaptive PI)\nUsing the tools presented in the preceding sections, it is possible to construct space and time adaptive PI schemes from embedded, partitioned RK methods. While the time adaptivity can be realized using embedded RK schemes, the spatial adaptivity is dealt with using partitioned RK schemes. An additional necessary ingredient is an efficient error control strategy for the spatial and temporal adaptivity. The combination of spatial and temporal adaptivity is a powerful way to construct efficient numerical schemes for problems that exhibit nonlinear or local phenomena that need to be resolved both in space and in time. We leave this for future work."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "On-the-fly error estimation with Projective Integration",
+ "text": "In [17 ###reference_17###], an alternative method for reducing the error of projective integration schemes is proposed. The method uses a so-called on-the-fly error estimation to measure the leading term of the global error after one full projective integration time step, i.e., the accumulated error of the inner steps and the extrapolation step. The method can then be corrected for a more accurate solution. The error of one projective integration step starting from a correct solution is defined as\nThe derivation from [17 ###reference_17###] has so far not gained a lot of attention in the literature, probably in part due to the complicated formulation with a number of additional computations. However, writing the underlying PI scheme in its Runge-Kutta version derived in this paper allows us to obtain a concise formulation of the scheme that can readily be applied in any Runge-Kutta solver framework.\nHere, we consider a first order PFE scheme to outline the idea. The extension to higher order PRK schemes can be performed using the derivations in [17 ###reference_17###]. The aim of the on-the-fly error estimation method is to measure the leading error term at the newly computed solution time given by\nwhere is the unknown leading error coefficient to be estimated and is an approximation to the second derivative of the solution.\nThe on-the-fly error estimation now considers the two ingredients:\nestimate the leading error coefficient for a PFE using inner steps\napproximate the second derivative at current time with different methods\nAs for the first ingredient (1), the authors consider the error propagation of the inner steps , based on the error coefficient of a simple Forward Euler method, for which the separate error coefficient would be . The accumulated error coefficient after each inner step is then given by\nThe next step is to derive the error propagation during the extrapolation step. 
For this, the authors of [17 ###reference_17###] use the notation\nwhich can be written in the form of\nusing\nas extrapolation step size.\nAccording to [17 ###reference_17###], the propagated error coefficient after the extrapolation step at time is then given by\nwhich can be evaluated using (99 ###reference_###) and (100 ###reference_###) to\nLastly, the error coefficient needs to be corrected for the fact that it was computed in terms of and not by a simple scaling\nThis means that the leading error coefficient can be estimated without any further function evaluations, as soon as the inner and outer time step sizes and the number of inner time steps are known. Note that the leading error coefficient converges to for vanishing , consistently reproducing the value of the leading error coefficient for a full Forward Euler step.\nAs for the second ingredient (2), different choices are possible leading to different schemes, which we will describe separately."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Outer derivative approximation On-the-fly Projective Forward Euler (OPFE)",
+ "text": "In [17 ###reference_17###], the authors suggest approximating the derivative in (98 ###reference_###) using the outer stages and (1 ###reference_###) as\nThis approximation can be obtained at the cost of one additional function evaluation of at time , which will be reflected by one additional Runge-Kutta stage with .\nThe corrected solution is then obtained by simple subtraction of the error (97 ###reference_###) as\nUsing a first order PFE with inner stages and one additional stage for the derivative estimation, this leads to the following Butcher tableau for the outer derivative estimation on-the-fly Projective Forward Euler (OPFE) scheme\nNote that in the limit , we obtain , the inner stages vanish and the scheme given by (107 ###reference_###) will degenerate to the second-order Heun scheme, given by the Butcher tableau\nTo analyse the order of the scheme, we check the order conditions from section 2 ###reference_### and can easily prove second-order accuracy, independent of the outer time step size, inner time step size, and number of inner time steps. This makes the method a powerful tool to achieve second order at the expense of only one function evaluation.\nFor illustration, we consider the three examples .\nFor , we obtain from (101 ###reference_###) that and\nthe Butcher tableau of the OPFE1 reads\nFor , we obtain from (101 ###reference_###) that and\nthe Butcher tableau of the OPFE2 reads\nFor , we obtain from (101 ###reference_###) that and\nthe Butcher tableau of the OPFE3 reads\nBy checking the consistency and order conditions in 2.1 ###reference_###, it can be shown again that all schemes are second order accurate.\nWe emphasize that the OPFE method is estimating the derivative in (98 ###reference_###) using function evaluations at the outer step values and . 
However, the usage of the outer steps does not take into account the dynamics of the fast scales in the system, which can negatively impact the stability region, as will be investigated during the stability analysis in section 6.2 ###reference_###. It is thus desirable to consider alternatives to the suggested procedure from [17 ###reference_17###]."
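The OPFE correction itself is a one-liner once the error coefficient is known. The sketch below uses the outer-stage derivative approximation from this subsection; the coefficient C is assumed to be precomputed from the recursion of the previous section and is not rederived here. In the sign convention of this sketch, C = -1/2 reproduces the stated limit in which OPFE degenerates to the Heun scheme.

```python
def opfe_correct(f, y0, y1, dt, C):
    """On-the-fly correction using the outer stages y0 = y(t_n) and the
    predictor y1 at t_n + dt (sketch; C is the precomputed leading error
    coefficient, with the sign convention that the estimated error
    C*dt^2*y'' is subtracted from the predictor)."""
    ypp = (f(y1) - f(y0)) / dt        # outer approximation of y''
    return y1 - C * dt ** 2 * ypp

# demo: with C = -1/2 (the plain-FE limit in this sign convention) the
# correction turns a Forward Euler predictor into a Heun step
f = lambda y: -y
y0, dt = 1.0, 0.1
y1 = y0 + dt * f(y0)                  # FE predictor
y_corr = opfe_correct(f, y0, y1, dt, C=-0.5)
```

The correction costs only the one extra evaluation f(y1), matching the single additional RK stage noted above.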
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Inner derivative approximation On-the-fly Projective Forward Euler (IPFE)",
+ "text": "We will now present a new alternative to the estimation of the derivative term in (98 ###reference_###), different from using the outer step values and as suggested by [17 ###reference_17###]. The motivation is to perform the derivative estimation not over the longer time scale , but over the shorter time scale , to make sure that the corrected scheme does not suffer from instability for fast modes.\nOne example for this is to estimate the derivative using an additional micro step of size , according to\nThis approximation can be obtained at the cost of two additional function evaluations of at time and , which will be reflected by two additional Runge-Kutta stages with and .\nUsing the estimated derivative and the error, the corrected solution is again obtained by simple subtraction of the error (97 ###reference_###) as\nFor a standard PFE with inner stages and the two additional stages for the derivative estimation, this leads to the following Butcher tableau for the inner derivative estimation on-the-fly Projective Forward Euler (IPFE) scheme\nChecking the order conditions from section 2 ###reference_### indeed reveals second-order accuracy, independent of the outer time step size, inner time step size, and number of inner time steps. In comparison to the OPFE method, second order is achieved here using two additional function evaluations.\nHowever, in the limit using , the inner stages as well as the error estimation collapse and the IPFE scheme (107 ###reference_###) will degenerate to a simple Forward Euler scheme, given by the Butcher tableau"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Analysis",
+ "text": "For analytical comparison of the schemes derived and discussed in this paper, we consider two important properties: accuracy and stability."
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "Accuracy",
+ "text": "The accuracy of a scheme is mainly determined by the order of the scheme and its leading error coefficient. A typical first order scheme has a leading error term at time of the form (98 ###reference_###)\nThe second order leading error terms of Runge-Kutta methods vanish if the second order accuracy condition is fulfilled, i.e., . This means that the mismatch quantifies the leading error coefficient. This yields a simple criterion to check and compare the accuracy of the previously derived schemes. Table 1 ###reference_### shows the results for the schemes of this paper.\nThe first order methods in table 1 ###reference_### clearly have a remaining term of in the limit of vanishing inner step size , which means that the schemes are only first order. However, the PISV scheme results in a significantly reduced error coefficient in the relevant domain , as can also be seen in Figure 6 ###reference_###. In comparison with the FE scheme, all projective schemes lead to a reduction in the error coefficient for small and medium , indicating that they yield a more accurate solution.\n###figure_6### The methods in the right part of table 1 ###reference_### are those for which the second order leading error coefficient asymptotically vanishes for . This means that the scheme formally recovers second order accuracy in the case of vanishing inner step size. Notably, the PRK4 scheme and EPHPFE using yield the same error coefficient. In the case of OPFE and IPFE, the method is designed to yield second order regardless of the inner step size under the assumptions made in [17 ###reference_17###]. The decay of the leading coefficient to zero is depicted in Figure 7 ###reference_###. For very small , the error coefficient of the PRK4 scheme is larger when taking more inner steps . This is consistent with the first order PFE scheme above.\n###figure_7###"
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "Stability",
+ "text": "In this section, we will see that writing projective integration schemes as RK schemes allows us to use simple tools established for Runge-Kutta schemes to determine the stability properties of the schemes. Note that we are interested in stable schemes for problems including a scale separation with fast and slow modes. Therefore, we aim at stability regions that include one fast eigenvalue cluster and one (typically disconnected) slow eigenvalue cluster. We remark that schemes with connected stability regions are commonly constructed by using Telescopic Projective Integration (TPI), as covered in section 2.4 ###reference_###. However, we focus on clearly separated fast and slow modes here.\nStability of RK schemes is commonly assessed via the scalar Dahlquist equation [5 ###reference_5###]\nwhich has the exact solution . For , the solution decays to zero in time. This decay should be mirrored by a stable numerical solution.\nWe apply the RK scheme to Equation (117 ###reference_###) and obtain , with stability function .\nIn concise notation, the stability function of a standard RK scheme with stages is given by\nwith [5 ###reference_5###].\nThe behavior of the stability function determines the stability properties of the RK scheme.\nA RK scheme is called stable if its solution of (117 ###reference_###) does not grow in time for , which implies . If the scheme is stable for all , then it is called A-stable, which can only be fulfilled by implicit RK schemes. Explicit schemes are typically only stable in a small part of the negative half plane, i.e., for small time steps or slow modes around [5 ###reference_5###]. Projective Integration schemes aim to extend the stability region such that the scheme is also stable for fast modes around , with [20 ###reference_20###]. In the following figures, we mark both points and with a black dot to indicate the desired stability region. 
Note that the appearance of a domain around inside the stability region is a typical property of PI schemes, due to the use of an inner time stepping scheme with small time step size . More details on the stability properties of PI schemes can be found in [14 ###reference_14###, 15 ###reference_15###, 20 ###reference_20###].\nIn Figure 8 ###reference_###, the stability regions for Forward Euler methods (FE) (left column) and Runge-Kutta 4 methods (RK4) (right column) are shown. The standard methods FE (a) and RK4 (b) are only stable for slow modes as indicated by the stability region around the slow cluster near the origin. The projective variants PFE (c) and PRK4 (d) using inner iteration, however, clearly show that the stability region is augmented by a stable region around the fast cluster, while the PRK4 has a slightly increased stability region. Using inner iterations in (e) and (f), the stability regions increase and even yield a connected stability region in case of the PRK4 scheme (f).\n###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### In Figure 9 ###reference_###, the stability regions for the new time adaptive embedded schemes derived in the previous sections are plotted. The standard explicit midpoint rule (EMR) in (a), in contrast, only results in stable integration of the slow cluster whereas the Embedded Projective Heun Projective Forward Euler scheme (EPHPFE) from Equation 81 ###reference_### in (b) yields stable integration of the fast cluster, too. The same holds true for the scheme with projective outer step size variation (POSV) from Section 4.2 ###reference_### in (d) as well as for the scheme with projective inner step size variation (PISV) from Section 4.3 ###reference_### in (c). Notably, the stability region of the POSV scheme seems to be larger than that of the PISV scheme. 
The POSV scheme results in a stability region close to the PFE scheme with from Figure 8 ###reference_###.\nThe two methods using on-the-fly error estimation in Figure 9 ###reference_###(e) and (f) require some extra explanation. It can be seen that the OPFE scheme does not yield a stable integration of the fast cluster. This is due to the derivative estimation using the outer integration points, which are unable to capture the fast dynamics of the inner iterations. The IPFE method in Figure 9 ###reference_### (f) removes that disadvantage by using an additional inner iteration to estimate the derivative and therefore captures the fast dynamics. This leads to a stable integration of the fast eigenvalue cluster. In other tests (not shown for conciseness), we observed that the stability region of the IPFE increases with increasing , especially around the fast eigenvalue cluster.\n###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### Summarizing, we see that the new projective schemes indeed allow for stable integration of fast clusters due to the extended stability region of the schemes. This can be analyzed straightforwardly in the framework of RK schemes."
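The stability functions behind these plots are cheap to evaluate. For an explicit tableau, R(z) = 1 + z b^T (I - z A)^{-1} 1 can be computed by forward substitution; scanning |R(z)| <= 1 over a grid of complex z then reproduces stability regions like those in Figures 8 and 9. The helper below is a sketch with our own naming; the RK4 tableau is the classical one.

```python
def stability_function(A, b, z):
    """R(z) = 1 + z * b^T (I - z*A)^{-1} * 1 for an explicit RK tableau
    (A strictly lower triangular), evaluated by forward substitution."""
    s = len(b)
    g = [0j] * s                       # g = (I - z*A)^{-1} * ones
    for i in range(s):
        g[i] = 1 + z * sum(A[i][j] * g[j] for j in range(i))
    return 1 + z * sum(b[i] * g[i] for i in range(s))

# classical RK4 tableau; |R(z)| <= 1 marks the stability region
A_rk4 = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b_rk4 = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
```

For Forward Euler this reduces to R(z) = 1 + z, and for RK4 to the quartic Taylor polynomial of exp(z); the PFE and PRK tableaus from the earlier sections can be passed in unchanged.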
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Numerical tests",
+ "text": "While the focus of this work is the derivation of Projective Integration schemes in the framework of RK schemes, it is important to assess the numerical properties of the new schemes. We therefore perform a simple convergence test to investigate the speed of convergence for the following two-scale model problem consisting of two coupled equations with separate scales\nwhere we use and . Note that the Jacobian of the model problem (119 ###reference_###)-(120 ###reference_###) reads\nwith eigenvalues , , leading to and developing on different scales. The exact solution is given by\nso that slowly relaxes to zero in time while approaches with potentially fast relaxation time . The exact solution is plotted for and in Figure 10 ###reference_###. This test case is suitable for assessing the accuracy and stability properties of the time integration methods proposed above as it includes a scale separation and requires efficient techniques to speed up standard methods.\n###figure_20### ###figure_21### Standard FE or RK4 schemes will not be stable for the integration of the model problem (119 ###reference_###)-(120 ###reference_###) for time steps , due to the fast evolution of . We will subsequently assess the numerical convergence of the new projective schemes and the error estimators of the embedded projective schemes. For a concise presentation of the results, we show error plots for . 
However, for large time , the solution will yield due to the relaxation in (120 ###reference_###), so that the errors in our numerical simulations where practically the same for and .\n(Other test cases)\nWhile this paper concerns the derivation and analysis of PI schemes in the framework of RK methods and a simple but useful test case is employed, other models from applications in science and engineering can readily be applied to the methods derived in this paper.\nThis includes models from non-equilibrium rarefied gas dynamics [25 ###reference_25###], where the macroscopic variables density , bulk velocity and temperature evolve on a macroscopic fluid scale, while non-equilibrium variables potentially evolve much faster. The spectrum can contain clear spectral gaps as investigated in [11 ###reference_11###, 10 ###reference_10###].\nAnother application can be models for extended shallow water equations [13 ###reference_13###], which are potentially stiff with multiple scales as investigated in [1 ###reference_1###, 8 ###reference_8###]. Investigation of spatially adaptive methods based on [12 ###reference_12###] might be beneficial for additional computational speedup."
+ },
+ {
+ "section_id": "7.1",
+ "parent_section_id": "7",
+ "section_name": "Numerical error convergence",
+ "text": "As the first of two numerical tests, we investigate the numerical error convergence of the method by monitoring the numerical error with respect to the analytical solution for vanishing and constant value of , where as usual the inner time step size is taken as .\n###figure_22### ###figure_23### We choose and see in Figure 11 ###reference_###(a) that the PFE (), PISV (), POSV (), and EPHPFE () schemes indeed converge with first order. Both PRK4 schemes for and , however, do not seem to converge for smaller values of . This is due to the increasing value of , which can no longer be neglected for small . We therefore design a second test where both and . This is done by decreasing the value of simultaneously. In this situation, we see in Figure 11 ###reference_###(b) that all errors decrease and the schemes achieve the expected convergence. Note that the schemes are formally only of first order as shown in Theorem 1 ###reference_rem1###. However, we clearly see that the PRK4 schemes are the most accurate, while the simple PFE and PISV schemes yield the largest errors.\nThe numerical convergence test therefore shows that the projective integration schemes derived in this paper achieve the proven convergence rates from Theorem 1 ###reference_rem1###, while the more complex PRK schemes are still beneficial for obtaining more accurate solutions."
+ },
+ {
+ "section_id": "7.2",
+ "parent_section_id": "7",
+ "section_name": "Embedded schemes and error estimators",
+ "text": "A numerical study was conducted to assess the performance of the error estimators and the resulting error behavior of the newly derived embedded projective schemes. We consider the test problem (119 ###reference_###)-(120 ###reference_###), for stable settings , and vary to investigate the error behavior depending on the inner time step size. We only consider the error of the first equation (119 ###reference_###) modeling exponential decay after one standard time step and are interested in the following four quantities per embedded scheme:\nThe error of the corrected embedded scheme: (in blue, if available)\nThe error of the lower-order solution: (in yellow)\nThe error of the higher-order solution: (in green)\nThe error estimate of the embedded method: (in red)\nThe numerical errors of the new embedded projective schemes will be compared with the respective standard non-projective embedded schemes (if available).\n###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### In Figure 12 ###reference_### we see the respective data for the embedded schemes also investigated in Figure 9 ###reference_###. On the one hand, the standard Explicit Midpoint Rule (EMR) in 12 ###reference_###(a) does not contain inner time steps and therefore does not depend on the inner time step size . Note how the error estimator (in red) measures the error of the higher-order solution (in green). On the other hand, the new POSV scheme in 12 ###reference_###(b) consistently shows the same error values as the EMR scheme for vanishing , but it has a decreased error for intermediate values of . For values , the scheme is less accurate again, but a projective scheme would not be used in that case.\nSimilarly, the embedded Heun Forward Euler (EHFE) method in Figure 12 ###reference_###(c) does not contain inner time steps and therefore does not depend on the inner time step size either. Note how the error estimator (in red) here measures the error of the lower-order solution (in yellow). Correspondingly, the new projective version EPHPFE in 12 ###reference_###(d) shows the same error values as the EHFE scheme for vanishing . However, for non-vanishing the new scheme accurately predicts the error using its built-in error estimator. For unusually large , the error is slightly increased again, while the error estimator shows good performance in measuring the lower-order error.\nLastly, we see that the new PISV method (for which there is no corresponding standard scheme, since it is based on varying the inner step size) also shows comparable behavior with decreasing error for intermediate values of . The error of the corrected embedded scheme (in blue) is clearly smaller than both the lower-order and higher-order solutions for all values . However, since the method is based on inner time step variation, the error estimate does not give an accurate assessment of the actual error. This is expected, as the inner time step size is typically not chosen for accuracy, but because of stability constraints. The PISV method is therefore less advantageous than the POSV method with respect to accurate error control.\nSummarizing, the new embedded projective schemes POSV, EPHPFE, and PISV show a favorable error behavior in comparison to their corresponding standard schemes for small and intermediate inner time step sizes, which is exactly the application case for projective schemes. This indicates a good performance of the respective error estimators of these new embedded projective schemes."
+ },
+ {
+ "section_id": "8",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this paper, we use the definition of Projective Integration (PI) methods as explicit time stepping schemes to write them as Runge-Kutta (RK) methods. This remarkably simple rewriting allows us to easily check their consistency and accuracy properties by means of the order conditions of the RK method, without performing tedious Taylor expansions, which is especially troublesome for the many steps of a Projective Runge-Kutta (PRK) method or Telescopic Projective Integration (TPI) method. Spatially adaptive Projective Integration methods can be included as partitioned RK methods. Using the framework of RK methods, we derived new time adaptive PI schemes that are based on the corresponding embedded RK method, step size variation, or an on-the-fly error estimation.\nThe accuracy, stability, and numerical convergence properties of the rewritten methods and newly derived methods were easily analyzed in the RK framework, and the projective methods clearly show the desired enlarged stability region while converging with the theoretically derived order of accuracy. We prove instability of an on-the-fly error estimation method and show stability of a new improved version using the rewriting as an RK method.\nThe work in this paper allows more tools from the stability analysis of RK schemes to be applied to PI.\nThe number and size of inner time steps of PI methods or the different parameters of a TPI method could then be optimized to answer the question whether similarly stable RK schemes with fewer stages can be derived. The development of space and time adaptive PI methods is another possible future research direction."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Examples: PRK4 schemes",
+ "text": "The Butcher tableaus for the projective versions of the standard RK scheme of fourth order using different inner step numbers are given below.\nFor the PRK4K1 version with , i.e., inner steps, the Butcher tableau reads:\nThe PRK4K2 version with , i.e., inner steps, is given by the following Butcher tableau:"
+ }
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S6.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S6.T1.15\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S6.T1.15.16.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_rr ltx_border_t\" colspan=\"2\" id=\"S6.T1.15.16.1.1\">First order methods</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S6.T1.15.16.1.2\">Asymptotically second order methods</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S6.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S6.T1.3.3.4\">FE</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr ltx_border_t\" id=\"S6.T1.1.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S6.T1.2.2.2\">PRK4 ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S6.T1.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S6.T1.6.6.4\">PFE</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T1.4.4.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.5.5.2\">PRK4 ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S6.T1.7.7.1\">PFE ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T1.8.8.2\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.9.9.3\">EPHPFE ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.10.10.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.14.14\">\n<th 
class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S6.T1.11.11.1\">PISV ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_rr\" id=\"S6.T1.12.12.2\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S6.T1.13.13.3\">POSV ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S6.T1.14.14.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S6.T1.15.15\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S6.T1.15.15.2\"></th>\n<td class=\"ltx_td ltx_border_b ltx_border_rr\" id=\"S6.T1.15.15.3\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_r\" id=\"S6.T1.15.15.4\">OPFE, IPFE</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S6.T1.15.15.1\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Second order leading error coefficients.</figcaption>\n</figure>",
+ "capture": "Table 1: Second order leading error coefficients."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2210.08302v2_figure_1.png",
+ "caption": "Figure 1: Space Adaptive Forward Euler scheme (SAFE) with small time step \\delta t in stiff region (left) and large time step \\Delta t in non-stiff region (right). Values of red cells at the boundary of the two domains need to be reconstructed, see [12].",
+ "url": "http://arxiv.org/html/2210.08302v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2210.08302v2_figure_2.png",
+ "caption": "Figure 2: Space Adaptive Projective Forward Euler scheme (SAPFE) with K+1 inner small time steps \\delta t in stiff region (left) and large time step \\Delta t in non-stiff region (right). Values of red cells at the boundary of the two domains need to be reconstructed, see [12].",
+ "url": "http://arxiv.org/html/2210.08302v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2210.08302v2_figure_3.png",
+ "caption": "Figure 3: Space Adaptive Projective Projective Forward Euler scheme (SAPPFE) with K+1=3 inner small time steps \\delta t_{L} in stiff region (left) and K+1=3 inner small time steps \\delta t_{R}>\\delta t_{L} in semi stiff region (right). Values of red cells at both sides of the boundary of the two domains need to be reconstructed, see [12].",
+ "url": "http://arxiv.org/html/2210.08302v2/x3.png"
+ },
+ "4": {
+ "figure_path": "2210.08302v2_figure_4.png",
+ "caption": "Figure 4: Projective outer step size variation (POSV) for Projective Forward Euler scheme with K=2. Top: standard PFE with outer step size \\Delta t. Bottom: PFE with outer step size \\Delta t/2.",
+ "url": "http://arxiv.org/html/2210.08302v2/x4.png"
+ },
+ "5": {
+ "figure_path": "2210.08302v2_figure_5.png",
+ "caption": "Figure 5: Last projective inner step size variation (PISV) for Projective Forward Euler scheme. Top: standard PFE with inner step size \\delta t. Bottom: PFE with last inner step using two steps of size \\delta t/2.",
+ "url": "http://arxiv.org/html/2210.08302v2/x5.png"
+ },
+ "6": {
+ "figure_path": "2210.08302v2_figure_6.png",
+ "caption": "Figure 6: Error coefficient of first order schemes for FE, PFE using K=1,2,3 and PISV using K=1. Also compare table 1 (left).",
+ "url": "http://arxiv.org/html/2210.08302v2/x6.png"
+ },
+ "7": {
+ "figure_path": "2210.08302v2_figure_7.png",
+ "caption": "Figure 7: Error coefficient of second order schemes PRK using K=1,2, EPHPFE using K=2, POSV using K=2, OPFE and IPFE. Also compare table 1 (right).",
+ "url": "http://arxiv.org/html/2210.08302v2/x7.png"
+ },
+ "8(a)": {
+ "figure_path": "2210.08302v2_figure_8(a).png",
+ "caption": "Figure 8: Stability regions of FE scheme (a), RK4 scheme (b), PFE scheme using K+1=1 (c), PRK4 scheme using K+1=1 (d), PFE scheme using K+1=2 (e) and PRK4 scheme using K+1=2 (f). Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective schemes are also stable for fast cluster and larger K increases the stability region.",
+ "url": "http://arxiv.org/html/2210.08302v2/x8.png"
+ },
+ "8(b)": {
+ "figure_path": "2210.08302v2_figure_8(b).png",
+ "caption": "Figure 8: Stability regions of FE scheme (a), RK4 scheme (b), PFE scheme using K+1=1 (c), PRK4 scheme using K+1=1 (d), PFE scheme using K+1=2 (e) and PRK4 scheme using K+1=2 (f). Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective schemes are also stable for fast cluster and larger K increases the stability region.",
+ "url": "http://arxiv.org/html/2210.08302v2/x9.png"
+ },
+ "8(c)": {
+ "figure_path": "2210.08302v2_figure_8(c).png",
+ "caption": "Figure 8: Stability regions of FE scheme (a), RK4 scheme (b), PFE scheme using K+1=1 (c), PRK4 scheme using K+1=1 (d), PFE scheme using K+1=2 (e) and PRK4 scheme using K+1=2 (f). Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective schemes are also stable for fast cluster and larger K increases the stability region.",
+ "url": "http://arxiv.org/html/2210.08302v2/x10.png"
+ },
+ "8(d)": {
+ "figure_path": "2210.08302v2_figure_8(d).png",
+ "caption": "Figure 8: Stability regions of FE scheme (a), RK4 scheme (b), PFE scheme using K+1=1 (c), PRK4 scheme using K+1=1 (d), PFE scheme using K+1=2 (e) and PRK4 scheme using K+1=2 (f). Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective schemes are also stable for fast cluster and larger K increases the stability region.",
+ "url": "http://arxiv.org/html/2210.08302v2/x11.png"
+ },
+ "8(e)": {
+ "figure_path": "2210.08302v2_figure_8(e).png",
+ "caption": "Figure 8: Stability regions of FE scheme (a), RK4 scheme (b), PFE scheme using K+1=1 (c), PRK4 scheme using K+1=1 (d), PFE scheme using K+1=2 (e) and PRK4 scheme using K+1=2 (f). Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective schemes are also stable for fast cluster and larger K increases the stability region.",
+ "url": "http://arxiv.org/html/2210.08302v2/x12.png"
+ },
+ "8(f)": {
+ "figure_path": "2210.08302v2_figure_8(f).png",
+ "caption": "Figure 8: Stability regions of FE scheme (a), RK4 scheme (b), PFE scheme using K+1=1 (c), PRK4 scheme using K+1=1 (d), PFE scheme using K+1=2 (e) and PRK4 scheme using K+1=2 (f). Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective schemes are also stable for fast cluster and larger K increases the stability region.",
+ "url": "http://arxiv.org/html/2210.08302v2/x13.png"
+ },
+ "9(a)": {
+ "figure_path": "2210.08302v2_figure_9(a).png",
+ "caption": "Figure 9: Stability region of EMR scheme (a), EPHPFE scheme using K+1=2 (b), PISV scheme using K+1=1 (c), POSV scheme using K+1=2 (d), OPFE scheme using K+1=2, and IPFE scheme using K=3. Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective embedded schemes and IPFE are stable for fast cluster and show larger stability region for larger K.",
+ "url": "http://arxiv.org/html/2210.08302v2/x14.png"
+ },
+ "9(b)": {
+ "figure_path": "2210.08302v2_figure_9(b).png",
+ "caption": "Figure 9: Stability region of EMR scheme (a), EPHPFE scheme using K+1=2 (b), PISV scheme using K+1=1 (c), POSV scheme using K+1=2 (d), OPFE scheme using K+1=2, and IPFE scheme using K=3. Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective embedded schemes and IPFE are stable for fast cluster and show larger stability region for larger K.",
+ "url": "http://arxiv.org/html/2210.08302v2/x15.png"
+ },
+ "9(c)": {
+ "figure_path": "2210.08302v2_figure_9(c).png",
+ "caption": "Figure 9: Stability region of EMR scheme (a), EPHPFE scheme using K+1=2 (b), PISV scheme using K+1=1 (c), POSV scheme using K+1=2 (d), OPFE scheme using K+1=2, and IPFE scheme using K=3. Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective embedded schemes and IPFE are stable for fast cluster and show larger stability region for larger K.",
+ "url": "http://arxiv.org/html/2210.08302v2/x16.png"
+ },
+ "9(d)": {
+ "figure_path": "2210.08302v2_figure_9(d).png",
+ "caption": "Figure 9: Stability region of EMR scheme (a), EPHPFE scheme using K+1=2 (b), PISV scheme using K+1=1 (c), POSV scheme using K+1=2 (d), OPFE scheme using K+1=2, and IPFE scheme using K=3. Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective embedded schemes and IPFE are stable for fast cluster and show larger stability region for larger K.",
+ "url": "http://arxiv.org/html/2210.08302v2/x17.png"
+ },
+ "9(e)": {
+ "figure_path": "2210.08302v2_figure_9(e).png",
+ "caption": "Figure 9: Stability region of EMR scheme (a), EPHPFE scheme using K+1=2 (b), PISV scheme using K+1=1 (c), POSV scheme using K+1=2 (d), OPFE scheme using K+1=2, and IPFE scheme using K=3. Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective embedded schemes and IPFE are stable for fast cluster and show larger stability region for larger K.",
+ "url": "http://arxiv.org/html/2210.08302v2/x18.png"
+ },
+ "9(f)": {
+ "figure_path": "2210.08302v2_figure_9(f).png",
+ "caption": "Figure 9: Stability region of EMR scheme (a), EPHPFE scheme using K+1=2 (b), PISV scheme using K+1=1 (c), POSV scheme using K+1=2 (d), OPFE scheme using K+1=2, and IPFE scheme using K=3. Black dots at (-1,0) and (-\\frac{1}{\\epsilon},0) indicate eigenvalue clusters for desired stability region. Projective embedded schemes and IPFE are stable for fast cluster and show larger stability region for larger K.",
+ "url": "http://arxiv.org/html/2210.08302v2/x19.png"
+ },
+ "10(a)": {
+ "figure_path": "2210.08302v2_figure_10(a).png",
+ "caption": "Figure 10: Exact solution of test case (119)-(120) with varying \\epsilon=0.05 (left) and \\epsilon=0.005 (right). Note that u_{2} relaxes towards u_{1} much faster with decreasing \\epsilon, leading to a stiff model.",
+ "url": "http://arxiv.org/html/2210.08302v2/x20.png"
+ },
+ "10(b)": {
+ "figure_path": "2210.08302v2_figure_10(b).png",
+ "caption": "Figure 10: Exact solution of test case (119)-(120) with varying \\epsilon=0.05 (left) and \\epsilon=0.005 (right). Note that u_{2} relaxes towards u_{1} much faster with decreasing \\epsilon, leading to a stiff model.",
+ "url": "http://arxiv.org/html/2210.08302v2/x21.png"
+ },
+ "11(a)": {
+ "figure_path": "2210.08302v2_figure_11(a).png",
+ "caption": "Figure 11: Numerical error convergence of schemes for decreasing time step size \\Delta t. If only \\Delta t goes to zero (a), convergence is not obtained because at some point the \\lambda=\\delta t/\\Delta t error becomes dominant (a). If also \\lambda=\\delta t/\\Delta t goes to zero (b), the expected convergence is obtained.",
+ "url": "http://arxiv.org/html/2210.08302v2/x22.png"
+ },
+ "11(b)": {
+ "figure_path": "2210.08302v2_figure_11(b).png",
+ "caption": "Figure 11: Numerical error convergence of schemes for decreasing time step size \\Delta t. If only \\Delta t goes to zero (a), convergence is not obtained because at some point the \\lambda=\\delta t/\\Delta t error becomes dominant (a). If also \\lambda=\\delta t/\\Delta t goes to zero (b), the expected convergence is obtained.",
+ "url": "http://arxiv.org/html/2210.08302v2/x23.png"
+ },
+ "12(a)": {
+ "figure_path": "2210.08302v2_figure_12(a).png",
+ "caption": "Figure 12: Errors and error estimators of embedded projective schemes (b), (d), (e) in comparison to standard embedded schemes (a), (c) for changing inner time step size \\delta t. Full embedded solution error (blue), lower-order solution error (yellow), higher-order solution (green), error estimate (red). The projective versions (b),(d) show smaller errors than the standard schemes (a), (c), resulting in smaller error estimates.",
+ "url": "http://arxiv.org/html/2210.08302v2/x24.png"
+ },
+ "12(b)": {
+ "figure_path": "2210.08302v2_figure_12(b).png",
+ "caption": "Figure 12: Errors and error estimators of embedded projective schemes (b), (d), (e) in comparison to standard embedded schemes (a), (c) for changing inner time step size \\delta t. Full embedded solution error (blue), lower-order solution error (yellow), higher-order solution (green), error estimate (red). The projective versions (b),(d) show smaller errors than the standard schemes (a), (c), resulting in smaller error estimates.",
+ "url": "http://arxiv.org/html/2210.08302v2/x25.png"
+ },
+ "12(c)": {
+ "figure_path": "2210.08302v2_figure_12(c).png",
+ "caption": "Figure 12: Errors and error estimators of embedded projective schemes (b), (d), (e) in comparison to standard embedded schemes (a), (c) for changing inner time step size \\delta t. Full embedded solution error (blue), lower-order solution error (yellow), higher-order solution (green), error estimate (red). The projective versions (b),(d) show smaller errors than the standard schemes (a), (c), resulting in smaller error estimates.",
+ "url": "http://arxiv.org/html/2210.08302v2/x26.png"
+ },
+ "12(d)": {
+ "figure_path": "2210.08302v2_figure_12(d).png",
+ "caption": "Figure 12: Errors and error estimators of embedded projective schemes (b), (d), (e) in comparison to standard embedded schemes (a), (c) for changing inner time step size \\delta t. Full embedded solution error (blue), lower-order solution error (yellow), higher-order solution (green), error estimate (red). The projective versions (b),(d) show smaller errors than the standard schemes (a), (c), resulting in smaller error estimates.",
+ "url": "http://arxiv.org/html/2210.08302v2/x27.png"
+ },
+ "12(e)": {
+ "figure_path": "2210.08302v2_figure_12(e).png",
+ "caption": "Figure 12: Errors and error estimators of embedded projective schemes (b), (d), (e) in comparison to standard embedded schemes (a), (c) for changing inner time step size \\delta t. Full embedded solution error (blue), lower-order solution error (yellow), higher-order solution (green), error estimate (red). The projective versions (b),(d) show smaller errors than the standard schemes (a), (c), resulting in smaller error estimates.",
303
+ "url": "http://arxiv.org/html/2210.08302v2/x28.png"
304
+ }
305
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Projective Integration for Hyperbolic Shallow Water Moment\nEquations.",
+ "author": "Amrita Amrita and Julian Koellermeier.",
+ "venue": "Axioms, 11(5):235, May 2022.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Projective integration with an adaptive projection horizon.",
+ "author": "Max A. Fahrenkopf, James W. Schneider, and B. Erik Ydstie.",
+ "venue": "IFAC Proceedings Volumes, 46(32):721\u2013725, December 2013.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Projective methods for stiff differential equations: Problems with\ngaps in their eigenvalue spectrum.",
+ "author": "Charles William Gear and Ioannis George Kevrekidis.",
+ "venue": "SIAM Journal on Scientific Computing, 24(4):1091\u20131106, 2003.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Multirate generalized additive Runge Kutta methods.",
+ "author": "Michael G\u00fcnther and Adrian Sandu.",
+ "venue": "Numerische Mathematik, 133(3):497\u2013524, July 2016.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Solving Ordinary Differential Equations I, Nonstiff Problems.",
+ "author": "Ernst Hairer, Syvert Norsett, and Gerhard Wanner.",
+ "venue": "Springer, 2000.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Solving Ordinary Differential Equations II: Stiff and\nDifferential-Algebraic Problems.",
+ "author": "Ernst Hairer and Gerhard Wanner.",
+ "venue": "Springer Series in Computational Mathematics. Springer Berlin\nHeidelberg, 2013.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "On the convergence of Lawson methods for semilinear stiff problems.",
+ "author": "Marlis Hochbruck, Jan Leibold, and Alexander Ostermann.",
+ "venue": "Numerische Mathematik, 145(3):553\u2013580, July 2020.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Equilibrium stability analysis of hyperbolic shallow water moment\nequations.",
+ "author": "Qian Huang, Julian Koellermeier, and Wen-An Yong.",
+ "venue": "Mathematical Methods in the Applied Sciences,\n45(10):6459\u20136480, July 2022.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Optimal stability polynomials for numerical integration of initial\nvalue problems.",
+ "author": "David I. Ketcheson and Aron J. Ahmadia.",
+ "venue": "pages 247\u2013271, 2012.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Projective integration for moment models of the BGK equation.",
+ "author": "Julian Koellermeier and Giovanni Samaey.",
+ "venue": "In Lecture Notes in Computer Science (including subseries\nLecture Notes in Artificial Intelligence and Lecture Notes in\nBioinformatics), volume 12142 LNCS, pages 321\u2013333. Springer, June 2020.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Projective integration schemes for hyperbolic moment equations.",
+ "author": "Julian Koellermeier and Giovanni Samaey.",
+ "venue": "Kinetic & Related Models, 14(2):353, May 2021.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Spatially Adaptive Projective Integration Schemes For Stiff\nHyperbolic Balance Laws With Spectral Gaps.",
+ "author": "Julian Koellermeier and Giovanni Samaey.",
+ "venue": "The SMAI Journal of computational mathematics, 8:295\u2013325, December\n2022.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Moment approximations and model cascades for shallow flow.",
+ "author": "J. Kowalski and M. Torrilhon.",
+ "venue": "Commun. Comput. Phys., 25(3):669\u2013702, 2019.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "A high-order asymptotic-preserving scheme for kinetic equations using\nprojective integration.",
+ "author": "Pauline Lafitte, Annelies Lejon, and Giovanni Samaey.",
+ "venue": "SIAM Journal on Numerical Analysis, 54(1):1\u201333, January 2016.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "A high-order relaxation method with projective integration for\nsolving nonlinear systems of hyperbolic conservation laws.",
+ "author": "Pauline Lafitte, Ward Melis, and Giovanni Samaey.",
+ "venue": "Journal of Computational Physics, 340:1\u201325, 2017.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Asymptotic-preserving projective integration schemes for kinetic\nequations in the diffusion limit.",
+ "author": "Pauline Lafitte and Giovanni Samaey.",
+ "venue": "SIAM Journal on Scientific Computing, 34(2):A579\u2013A602, January\n2012.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "On-the-fly local error estimation for projective integrators.",
+ "author": "S. Lee and C. Gear.",
+ "venue": "Lawrence Livermore National Laboratory Technical Report\nUCRL-TR-224892, 2006.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Second-order accurate projective integrators for multiscale problems.",
+ "author": "Steven L. Lee and Charles William Gear.",
+ "venue": "Journal of Computational and Applied Mathematics,\n201(1):258\u2013274, April 2007.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "On convergence of higher order schemes for the projective integration\nmethod for stiff ordinary differential equations.",
+ "author": "John Maclean and Georg A. Gottwald.",
+ "venue": "Journal of Computational and Applied Mathematics, 288:44\u201369,\nNovember 2015.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Projective integration for nonlinear BGK kinetic equations.",
+ "author": "Ward Melis, Thomas Rey, and Giovanni Samaey.",
+ "venue": "In C. Canc\u00e8s and P. Omnes, editors, Finite Volumes for\nComplex Applications VIII - Hyperbolic, Elliptic and Parabolic Problems,\npages 145\u2013153, 2017.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Projective and telescopic projective integration for the nonlinear\nBGK and Boltzmann equations.",
+ "author": "Ward Melis, Thomas Rey, and Giovanni Samaey.",
+ "venue": "The SMAI journal of computational mathematics, 5:53\u201388, 2019.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Telescopic projective integration for kinetic equations with multiple\nrelaxation times.",
+ "author": "Ward Melis and Giovanni Samaey.",
+ "venue": "Journal of Scientific Computing, 76:697\u2013726, 2018.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Implicit\u2013explicit Runge\u2013Kutta schemes and applications to\nhyperbolic systems with relaxation.",
+ "author": "Lorenzo Pareschi and Giovanni Russo.",
+ "venue": "Journal of Scientific Computing, 25:129\u2013155, 2005.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "The approximate arithmetical solution by finite differences of\nphysical problems involving differential equations, with an application to\nthe stresses in a masonry dam.",
+ "author": "Lewis Fry Richardson.",
+ "venue": "Philosophical Transactions of the Royal Society of London.\nSeries A, 210(459-470):307\u2013357, January 1911.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "Modeling nonequilibrium gas flow based on moment equations.",
+ "author": "Manuel Torrilhon.",
+ "venue": "Annual Review of Fluid Mechanics, 48(1):429\u2013458, 2016.",
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "A projective integration method for transient stability assessment of\npower systems with a high penetration of distributed generation.",
+ "author": "Chengshan Wang, Kai Yuan, Peng Li, Bingqi Jiao, and Guanyu Song.",
+ "venue": "IEEE Transactions on Smart Grid, 9(1):386\u2013395, January 2018.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2210.08302v2"
+ }
20240119/2210.09745v2.json ADDED
@@ -0,0 +1,77 @@
+ {
+ "title": "Transfer Learning with Affine Model Transformation",
+ "abstract": "Supervised transfer learning has received considerable attention due to its potential to boost the predictive power of machine learning in scenarios where data are scarce.\nGenerally, a given set of source models and a dataset from a target domain are used to adapt the pre-trained models to a target domain by statistically learning domain shift and domain-specific factors.\nWhile such procedurally and intuitively plausible methods have achieved great success in a wide range of real-world applications, the lack of a theoretical basis hinders further methodological development.\nThis paper presents a general class of transfer learning regression called affine model transfer, following the principle of expected-square loss minimization.\nIt is shown that the affine model transfer broadly encompasses various existing methods, including the most common procedure based on neural feature extractors.\nFurthermore, the current paper clarifies theoretical properties of the affine model transfer such as generalization error and excess risk.\nThrough several case studies, we demonstrate the practical benefits of modeling and estimating inter-domain commonality and domain-specific factors separately with the affine-type transfer models.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Transfer learning (TL) is a methodology\nto improve the predictive performance of machine learning in a target domain with limited data by reusing knowledge gained from training in related source domains.\nIts great potential has been demonstrated in various real-world problems, including computer vision (Krizhevsky2012ImageNetCW, ###reference_1###; Csurka2017DomainAF, ###reference_2###), natural language processing (Ruder2019TransferLI, ###reference_3###; Devlin2019BERTPO, ###reference_4###), biology (Sevakula2019TransferLF, ###reference_5###), and materials science (Yamada2019PredictingMP, ###reference_6###; wu2019machine, ###reference_7###; Ju2019ExploringDL, ###reference_8###).\nNotably, most of the outstanding successes of TL to date have relied on the feature extraction ability of deep neural networks.\nFor example, a conventional method reuses feature representations encoded in an intermediate layer of a pre-trained model as an input for the target task or uses samples from the target domain to fine-tune the parameters of the pre-trained source model (yosinski2014transferable, ###reference_9###).\nWhile such methods are operationally plausible and intuitive, they lack methodological principles and remain theoretically unexplored in terms of their learning capability for limited data.\nThis study develops a principled methodology generally applicable to various kinds of TL.\nIn this study, we focus on supervised TL settings.\nIn particular, we deal with settings where, given feature representations obtained from training in the source domain, we use samples from the target domain to model and estimate the domain shift to the target.\nThis procedure is called hypothesis transfer learning (HTL); several methods have been proposed, such as using a linear transformation function (kuzborskij2013stability, ###reference_10###; kuzborskij2017fast, ###reference_11###) and considering a general class of continuous transformation functions (du2017hypothesis, 
###reference_12###).\nIf the transformation function appropriately captures the functional relationship between the source and target domains, only the domain-specific factors need to be additionally learned, which can be done efficiently even with a limited sample size.\nIn other words, the performance of HTL depends strongly on whether the transformation function appropriately represents the cross-domain shift.\nHowever, the general methodology for modeling and estimating such domain shifts has been less studied.\nThis study derives a theoretically optimal class of supervised TL that minimizes the expected loss function of the HTL. The resulting function class takes the form of an affine coupling of three functions and , where the shift from a given source feature to the target domain is represented by the functions and , and the domain-specific factors are represented by for any given input .\nThese functions can be estimated simultaneously using conventional supervised learning algorithms such as kernel methods or deep neural networks.\nHereafter, we refer to this framework as the affine model transfer.\nAs described later, we can formulate a wide variety of TL algorithms within the affine model transfer, including the widely used neural feature extractors, offset and scale HTLs (kuzborskij2013stability, ###reference_10###; kuzborskij2017fast, ###reference_11###; du2017hypothesis, ###reference_12###), and Bayesian TL (Minami2021AGC, ###reference_13###). 
We clarify theoretical properties of the affine model transfer such as generalization error and excess risk.\nTo summarize, the contributions of our study are as follows:\nThe affine model transfer is proposed to adapt source features to the target domain by separately estimating cross-domain shift and domain-specific factors.\nThe affine form is derived theoretically as an optimal class based on the squared loss for the target task.\nThe affine model transfer encompasses several existing TL methods, including neural feature extraction. It can work with any type of source model, including non-machine learning models such as physical models as well as multiple source models.\nFor each of the three functions , , and , we provide an efficient and stable estimation algorithm when modeled using the kernel method.\nTwo theoretical properties of the affine transfer model are shown: the generalization and the excess risk bound.\nWith several applications, we compare the affine model transfer with other TL algorithms, discuss its strengths, and demonstrate the advantage of being able to estimate cross-domain shifts and domain-specific factors separately."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Transfer Learning via Transformation Function",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Affine Model Transfer",
+ "text": "This study considers regression problems with squared loss.\nWe assume that the output of the target domain follows , where is the true model on the target domain, and the observation noise has mean zero and variance .\nWe are given samples from the target domain and the feature representation from one or more source domains.\nTypically, is given as a vector, including the output of the source models, observed data in the source domains, or learned features in a pre-trained model, but it can also be a non-vector feature such as a tensor, graph, or text.\nHereafter, is referred to as the source features.\nIn this paper, we focus on transfer learning with variable transformations as proposed in (du2017hypothesis, ###reference_12###).\nFor an illustration of the concept, consider the case where there exists a relationship between the true functions and such that with an unknown parameter .\nIf is non-smooth, a large number of training samples is needed to learn directly.\nHowever, since the difference is a linear function with respect to the unknown , it can be learned with fewer samples if prior information about is available.\nFor example, a target model can be obtained by adding to the model trained for the intermediate variable .\nThe following is a slight generalization of TL procedure provided in (du2017hypothesis, ###reference_12###):\nWith the source features, perform a variable transformation of the observed outputs as , using the data transformation function .\nTrain an intermediate model using the transformed sample set to predict the transformed output for any given .\nObtain a target model using the model transformation function that combines and to define a predictor.\nIn particular, (du2017hypothesis, ###reference_12###) considers the case where the model transformation function is equal to the inverse of the data transformation function. 
We consider a more general case that eliminates this constraint.\nThe objective of step 1 is to identify a transformation that maps the output variable to the intermediate variable , making the variable suitable for learning.\nIn step 2, a predictive model for is constructed.\nSince the data is limited in many TL setups, a simple model, such as a linear model, should be used as .\nStep 3 is to transform the intermediate model into a predictive model for the original output .\nThis class of TL includes several approaches proposed in previous studies.\nFor example, (kuzborskij2013stability, ###reference_10###; kuzborskij2017fast, ###reference_11###) proposed a learning algorithm consisting of linear data transformation and linear model transformation: and with pre-defined weights .\nIn this case, factors unexplained by the linear combination of source features are learned with , and the target output is predicted additively with the common factor and the additionally learned .\nIn (Minami2021AGC, ###reference_13###), it is shown that a type of Bayesian TL is equivalent to using the following transformation functions; for , and with two varying hyperparameters and .\nThis includes TL using density ratio estimation (DBLP:conf/sdm/LiuF16, ###reference_14###) and neural network-based fine-tuning as special cases when the two hyperparameters belong to specific regions.\nThe performance of this TL strongly depends on the design of the two transformation functions and .\nIn the sequel, we theoretically derive the optimal form of transformation functions under the squared loss scenario.\nFor simplicity, we denote the transformation functions as on and on .\nTo derive the optimal class of and , note first that the TL procedure described above can be formulated in population as solving two successive least square problems;\nSince the regression function that minimizes the mean squared error is the conditional mean, the first problem is solved by , which depends on . 
We can thus consider the optimal transformation functions and by the following minimization:\nIt is easy to see that Eq. (1 ###reference_###) is equivalent to the following consistency condition:\nFrom the above observation, we make three assumptions to derive the optimal form of and :\nThe data transformation function is differentiable with respect to the first argument.\nThe model transformation function is invertible with respect to the first argument, i.e., its inverse exists.\nFor any distribution on the target domain , and for all ,\nwhere .\nAssumption 2.2 ###reference_heorem2### is commonly used in most existing HTL settings, such as (kuzborskij2013stability, ###reference_10###) and (du2017hypothesis, ###reference_12###).\nIt assumes a one-to-one correspondence between the predictive value and the output of the intermediate model .\nIf this assumption does not hold, then multiple values of correspond to the same predicted value , which is unnatural.\n\nNote that Assumption 2.3 ###reference_heorem3### corresponds to the unbiased condition of (du2017hypothesis, ###reference_12###).\nWe now derive the properties that the optimal transformation functions must satisfy.\nUnder Assumptions 2.1 ###reference_heorem1###-2.3 ###reference_heorem3###, the transformation functions and satisfy the following two properties:\nwhere and are some functions.\nThe proof is given in Section D.1 ###reference_### in Supplementary Material.\nDespite not initially assuming that the two transformation functions are inverses, Theorem 2.4 ###reference_heorem4### implies they must indeed be inverses.\nFurthermore, the mean squared error is minimized when the data and model transformation functions are given by an affine transformation and its inverse, respectively.\nIn summary, under the expected squared loss minimization with the HTL procedure, the optimal class for HTL model is expressed as follows:\nwhere and are the arbitrarily function classes.\nHere, each of and is modeled as a 
function of that represents common factors across the source and target domains.\n is modeled as a function of , in order to capture the domain-specific factors unexplainable by the source features.\nWe have derived the optimal form of the transformation functions when the squared loss is employed.\nEven for general convex loss functions, (i) of Theorem 2.4 ###reference_heorem4### still holds.\nHowever, (ii) of Theorem 2.4 ###reference_heorem4### does not generally hold because the optimal transformation function depends on the loss function.\nExtensions to other losses are briefly discussed in Section A.1 ###reference_###, but the establishment of a complete theory is a future work.\nHere, the affine transformation is found to be optimal in terms of minimizing the mean squared error.\nWe can also derive the same optimal function by minimizing the upper bound of the estimation error in the HTL procedure, as discussed in Section A.2 ###reference_###.\nOne of key principles for the design of , , and is interpretability.\nIn our model, and primarily facilitate knowledge transfer, while the estimated is used to gain insight on domain-specific factors.\nFor instance, in order to infer cross-domain differences, we could design and by the conventional neural feature extraction, and a simple, highly interpretable model such as a linear model could be used for .\nThus, observing the estimated regression coefficients in , one can statistically infer which features of are related to inter-domain differences.\nThis advantage of the proposed method is demonstrated in Section 5.2 ###reference_### and Section B.3 ###reference_###."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Relation to Existing Methods",
+ "text": "The affine model transfer encompasses some existing TL procedures.\nFor example, by setting and , the prediction model is estimated without using the source features, which corresponds to an ordinary direct learning, i.e., a learning scheme without transfer.\nFurthermore, various kinds of HTLs can be formulated by imposing constraints on and . In prior work, (kuzborskij2013stability, ###reference_10###) employs a two-step procedure where the source features are combined with pre-defined weights, and then the auxiliary model is additionally learned for the residuals unexplainable by the source features.\nThe affine model transfer can represent this HTL as a special case by setting .\n(du2017hypothesis, ###reference_12###) uses the transformed output with the output value of a source model, and this cross-domain shift is then regressed onto using a target dataset. This HTL corresponds to and .\nWhen a pre-trained source model is provided as a neural network, TL is usually performed with the intermediate layer as input to the model in the target domain.\nThis is called a feature extractor or frozen featurizer and has been experimentally and theoretically proven to have strong transfer capability as the de facto standard for TL (yosinski2014transferable, ###reference_9###; tripuraneni2020theory, ###reference_15###).\nThe affine model transfer encompasses the neural feature extraction as a special subclass, which is equivalent to setting .\nA performance comparison of the affine model transfer with the neural feature extraction is presented in Section 5 ###reference_### and Section B.2 ###reference_###.\nThe relationships between these existing methods and the affine model transfer are illustrated in Figure 1 ###reference_### and Figure S.1 ###reference_###\nThe affine model transfer can also be interpreted as generalizing the feature extraction by adding a product term .\nThis additional term allows for the inclusion of unknown factors in the transferred 
model that are unexplainable by source features alone.\nFurthermore, this encourages the avoidance of a negative transfer, a phenomenon where prior learning experiences interfere with training in a new task.\nThe usual TL based only on attempts to explain and predict the data generation process in the target domain using only the source features.\nHowever, in the presence of domain-specific factors, a negative transfer can occur owing to a lack of descriptive power.\nThe additional term compensates for this shortcoming.\nThe comparison of behavior for the case with the non-relative source features is described in Section 5.1 ###reference_###.\nThe affine model transfer can be naturally expressed as an architecture of neural networks. This architecture, called affine coupling layers, is widely used for invertible neural networks in flow-based generative modeling (dinh2014nice, ###reference_16###; Dinh2017DensityEU, ###reference_17###).\nNeural networks based on affine coupling layers have been proven to have universal approximation ability (teshima2020coupling, ###reference_18###).\nThis implies that the affine transfer model has the potential to represent a wide range of function classes, despite its simple architecture based on the affine coupling of three functions.\n###figure_1### ###figure_2### ###figure_3###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Modeling and Estimation",
+ "text": "In this section, we focus on using kernel methods for the affine transfer model and provide the estimation algorithm.\nLet and be reproducing kernel Hilbert spaces (RKHSs) with positive-definite kernels and , which define the feature mappings and , respectively.\nDenote .\nFor the proposed model class, the -regularized empirical risk with the squared loss is given as follows:\nwhere are hyperparameters for the regularization.\nAccording to the representer theorem, the minimizer of with respect to the parameters , , and reduces to\n\nwith the -dimensional unknown parameter vectors .\nSubstituting this expression into Eq. (2 ###reference_###), we obtain the objective function as\nHere, the symbol denotes the Hadamard product.\n is the Gram matrix associated with the kernel for . denotes the -th column of the Gram matrix. The matrix is given by the tensor product of and .\nBecause the model is linear with respect to parameter and bilinear for and , the optimization of Eq. (3 ###reference_###) can be solved using well-established techniques for the low-rank tensor regression.\nIn this study, we use the block relaxation algorithm (zhou2013tensor, ###reference_19###) as described in Algorithm 1 ###reference_###.\nIt updates , and by repeatedly fixing two of the three parameters and minimizing the objective function for the remaining one.\nFixing two parameters, the resulting subproblem can be solved analytically because the objective function is expressed in a quadratic form for the remaining parameter.\nAlgorithm 1 ###reference_### can be regarded as repeating the HTL procedure introduced in Section 2.1 ###reference_###; alternately estimates the parameters of the transformation function and the parameters of the model for the given transformed data .\n\nThe function in Algorithm 1 ###reference_### is not jointly convex in general. 
However, when employing methods like kernel methods or generalized linear models, and fixing two parameters, exhibits convexity with respect to the remaining parameter. According to (zhou2013tensor, ###reference_19###), when each sub-minimization problem is convex, Algorithm 1 ###reference_### is guaranteed to converge to a stationary point. Furthermore, (zhou2013tensor, ###reference_19###) showed that consistency and asymptotic normality hold for the alternating minimization algorithm."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Theoretical Results",
+ "text": "In this section, we present two theoretical properties, the generalization bound and excess risk bound.\nLet be an arbitrary probability space, and be independent random variables\nwith distribution .\nFor a function , let the expectation of with respect to and its empirical counterpart denote respectively by\n\nWe use a non-negative loss such that it is bounded from above by and for any fixed , is -Lipschitz for some .\nRecall that the function class proposed in this work is\nIn particular, the following discussion in this section assumes that , and are represented by linear functions on the RKHSs."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Generalization Bound",
+ "text": "The optimization problem is expressed as follows:\nwhere and denote the feature maps.\nWithout loss of generality, it is assumed that () and .\nHereafter, we will omit the suffixes in the norms if there is no confusion.\nLet be a solution of Eq. (4 ###reference_###), and denote the corresponding function in as .\nFor any , we have\nwhere we use the fact that and are non-negative, and is the minimizer of Eq. (4 ###reference_###).\nDenoting\n\nwe obtain\n\nBecause the same inequality\nholds for and , we have\n\nand\n\nMoreover, we have\n\nTherefore, it is sufficient to consider the following hypothesis class and loss class :\nHere, we show the generalization bound of the proposed model class.\nThe following theorem is based on (kuzborskij2017fast, ###reference_11###), showing that the difference between the generalization error and empirical error can be bounded using the magnitude of the relevance of the domains.\nThere exists a constant depending only on and such that, for any and , with probability at least ,\nwhere .\nBecause is the feature map from the source feature space into the RKHS , corresponds to the true risk of training in the target domain using only the source features .\nIf this is sufficiently small, e.g., , the convergence rate indicated by Theorem 4.1 ###reference_heorem1### becomes , which is an improvement over the naive convergence rate .\nThis means that if the source task yields feature representations strongly related to the target domain, training in the target domain is accelerated.\nTheorem 4.1 ###reference_heorem1### measures this cross-domain relation using the metric .\nTheorem 4.1 ###reference_heorem1### is based on Theorem 11 of (kuzborskij2017fast, ###reference_11###) in which the function class is considered.\nOur work differs in the following two points: the source features are modeled not only additively but also multiplicatively, i.e., we consider the function class , and we also consider the estimation of the 
parameters for the source feature combination, i.e., the parameters of the functions and .\nIn particular, the latter affects the resulting rate.\nWith fixed the source combination parameters, the resulting rate improves only up to .\nThe details are discussed in Section D.2 ###reference_###."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Excess Risk Bound",
+ "text": "Here, we analyze the excess risk, which is the difference between the risk of the estimated function and the smallest possible risk within the function class.\nRecall that we consider the functions and to be elements of the RKHSs and with kernels and , respectively.\nDefine the kernel , and .\nLet and be the RKHSs with and , respectively.\nFor , consider the normalized Gram matrix and its eigenvalues , arranged in nonincreasing order.\nWe make the following additional assumptions:\nThere exist and () such that and .\nFor , there exist and such that .\nAssumption 4.2 ###reference_heorem2### is used in (Bartlett2005LocalRC, ###reference_20###) and is not overly restrictive, as it holds for many regularization algorithms and convex, uniformly bounded function classes.\nIn the analysis of kernel methods, Assumption 4.3 ###reference_heorem3### is standard (Steinwart2008SupportVM, ###reference_21###) and is known to be equivalent to the classical covering or entropy number assumption (Steinwart2009OptimalRF, ###reference_22###).\nThe inverse decay rate measures the complexity of the RKHS, with larger values corresponding to more complex function spaces.\nLet be any element of satisfying .\nUnder Assumptions 4.2 ###reference_heorem2### and 4.3 ###reference_heorem3###, for any , with probability at least ,\nTheorem 4.4 ###reference_heorem4### suggests that the convergence rate of the excess risk depends on the decay rates of the eigenvalues of the two Gram matrices and .\nThe inverse decay rate of the eigenvalues of represents the learning efficiency when using only the source features, while is the inverse decay rate of the eigenvalues of the Hadamard product of and , which captures the effect of combining the source features with the original input.\nAlthough a rigorous characterization of the relationship between the spectra of two Gram matrices and those of their Hadamard product is difficult, intuitively, the smaller the overlap between the spaces spanned by the source features and by the original input, the smaller the overlap between and .\nIn other words, when the source features and the original input carry different information, the tensor product space will be more complex, and the decay rate is expected to be larger.\nIn Section B.1 ###reference_###, we experimentally confirm this speculation."
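The speculation above can be checked numerically by comparing the eigenvalue decay of a Gram matrix built on source features against that of its Hadamard product with a Gram matrix on the original input. The sketch below is illustrative only: the Gaussian kernels, the dimensions, the synthetic features, and the helper names (`rbf_gram`, `decay`, `Fs`) are assumptions for this toy check, not the paper's actual setup.

```python
import numpy as np

def rbf_gram(X, scale):
    # Gaussian (RBF) Gram matrix with bandwidth `scale`
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * scale**2))

def decay(K):
    # eigenvalues of the normalized Gram matrix K/n, nonincreasing
    vals = np.linalg.eigvalsh(K / K.shape[0])
    return np.sort(vals)[::-1]

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))    # stand-in for the original input
Fs = rng.normal(size=(n, 3))   # stand-in source features, independent of X

K1 = rbf_gram(Fs, np.sqrt(3))  # kernel on source features
K2 = rbf_gram(X, np.sqrt(5))   # kernel on original input
K_prod = K1 * K2               # Hadamard product: kernel of the product space

for name, K in [("K1", K1), ("K1*K2", K_prod)]:
    print(name, "top-5 normalized eigenvalues:", np.round(decay(K)[:5], 4))
```

Plotting the full spectra on a log scale (rather than just the top eigenvalues) makes the difference in decay rates between the two matrices visible.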
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experimental Results",
+ "text": "We demonstrate the potential of the affine model transfer through two case studies: (i) the prediction of feed-forward torque at seven joints of the robot arm (williams2006gaussian, ###reference_23###), and (ii) the prediction of review scores and decisions of scientific papers (Singh2022SciRepEvalAM, ###reference_24###).\nThe experimental details are presented in Section C ###reference_###. Additionally, two case studies in materials science are presented in Section B ###reference_###.\nThe Python code is available at https://github.com/mshunya/AffineTL ###reference_###."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Kinematics of the Robot Arm",
+ "text": "We experimentally investigated the learning performance of the affine model transfer in comparison with several existing methods. The objective of the task is to predict the feed-forward torques required to follow a desired trajectory at seven different joints of the SARCOS robot arm (williams2006gaussian, ###reference_23###).\nTwenty-one features representing the joint positions, velocities, and accelerations were used as the input .\nThe target task is to predict the torque value at one joint.\nThe representations encoded in the intermediate layer of the source neural network trained to predict the other six joints were used as the source features .\nThe experiments were conducted with seven different tasks (denoted as Torque 1\u20137) corresponding to the seven joints.\nFor each target task, a training set of size was randomly constructed 20 times, and performance was evaluated on the test data.\nThe following seven methods were compared, including two existing HTL procedures:\nKernel ridge regression with the Gaussian kernel was used for each procedure. The scale parameter was fixed to the square root of the dimension of the input.\nThe regularization parameter in the kernel ridge regression and , and in the affine model transfer were selected through 5-fold cross-validation.\nIn addition to the seven feature-based methods, four weight-based TL methods were evaluated: fine-tuning, MAML (Finn2017ModelAgnosticMF, ###reference_25###), -SP (xuhong2018explicit, ###reference_26###), and PAC-Net (myung2022pac, ###reference_27###).\nTable 5.1 ###reference_### summarizes the prediction performance of the seven procedures for varying numbers of training samples in two representative tasks: Torque 1 and Torque 7.\nThe joint of Torque 1 is located closest to the root of the arm.\nConsequently, the learning task for Torque 1 is less related to those for the other joints, and transfer from Torque 2\u20136 to Torque 1 would not be expected to work.\nIndeed, as shown in Table 5.1 ###reference_###, no method showed a statistically significant improvement over Direct.\nIn particular, Only source failed to acquire predictive ability, and HTL-offset and HTL-scale likewise showed poor prediction performance owing to the failure of the variable transformation.\nIn contrast, the two affine transfer models showed almost the same predictive performance as Direct, which they include as a submodel, and successfully avoided negative transfer.\nBecause Torque 7 was measured at the joint closest to the end of the arm, its value depends strongly on those at the other six joints, and the procedures using the source features were more effective here than in the other tasks.\nIn particular, AffineTL achieved the best performance among the feature-based methods.\nThis is consistent with the theoretical result that the transfer capability of the affine model transfer improves when the risk of learning using only the source features is sufficiently small.\nIn Table C.1.2 ###reference_SSS2.Px7### in Section C.1 ###reference_###, we present the results for all tasks.\nIn most cases, AffineTL achieved the best performance among the feature-based methods.\nIn several other cases, Direct produced the best results; in almost all cases, Only source and the two HTLs showed no advantage over AffineTL.\nComparing the weight-based and feature-based methods, we observed that the weight-based methods performed better with large sample sizes.\nNevertheless, in scenarios with extremely small sample sizes (e.g., or ), AffineTL exhibited comparable or even superior performance.\nThe strength of our method over weight-based TL methods, including fine-tuning, is that its performance does not degrade when cross-domain relationships are weak.\nWhile fine-tuning outperformed our method in the case of Torque 7, its performance degraded significantly as the source-target relationship weakened, as seen in the Torque 1 case.\nIn contrast, our method avoided negative transfer even in such cases.\nThis characteristic is particularly beneficial because, in many cases, the degree of relatedness between the domains is not known in advance.\nFurthermore, weight-based methods can be unsuitable when transferring knowledge from large models such as LLMs: fine-tuning all parameters is infeasible, and feature-based TL is preferred. Our approach often outperforms other feature-based methods in such settings."
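As a rough illustration of the feature-based baselines described above (Direct on the input only, Only source on the source features only, and Augmented on their concatenation), the sketch below fits kernel ridge regression with a Gaussian kernel whose scale is fixed to the square root of the input dimension, as in the experiments. All data here are synthetic; the helper names (`rbf_gram`, `krr_fit_predict`) and the fixed regularization `lam` are assumptions — the paper selects the regularization parameter by 5-fold cross-validation.

```python
import numpy as np

def rbf_gram(A, B, scale):
    # Gaussian kernel matrix between row sets A and B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * scale**2))

def krr_fit_predict(X_tr, y_tr, X_te, lam=1e-2):
    # kernel ridge regression; scale fixed to sqrt(input dimension)
    scale = np.sqrt(X_tr.shape[1])
    K = rbf_gram(X_tr, X_tr, scale)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
    return rbf_gram(X_te, X_tr, scale) @ alpha

rng = np.random.default_rng(1)
n, d, ds = 30, 21, 6                                  # small training set, 21 inputs
X, Xte = rng.normal(size=(n, d)), rng.normal(size=(10, d))
fs, fste = np.tanh(X[:, :ds]), np.tanh(Xte[:, :ds])   # toy stand-in source features
y = X[:, 0] + fs[:, 0]

pred_direct = krr_fit_predict(X, y, Xte)              # Direct: input only
pred_source = krr_fit_predict(fs, y, fste)            # Only source: source features only
pred_aug = krr_fit_predict(np.hstack([X, fs]), y,
                           np.hstack([Xte, fste]))    # Augmented: concatenation
```

Wrapping `krr_fit_predict` in a cross-validation loop over `lam` would reproduce the hyperparameter selection described in the text.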
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.SS1.177\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.SS1.177.179.5.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S5.SS1.8.8.4\" style=\"font-size:90%;\">\nPerformance on predicting the torque values at the first and seventh joints of the SARCOS robot arm.\nThe mean and standard deviation of the RMSE are reported for varying numbers of training samples.\nFor each task and , the case with the smallest mean RMSE is indicated in bold type. An asterisk indicates a case where the RMSEs of 20 independent experiments were significantly improved over <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.8.8.4.1\">Direct</span> at the 1% significance level, according to Welch\u2019s t-test.\n represents the dimension of the original input (i.e., ). \n</span></figcaption><div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S5.SS1.167.167\" style=\"width:433.6pt;height:358.3pt;vertical-align:-0.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-58.2pt,48.0pt) scale(0.788477734648797,0.788477734648797) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.SS1.167.167.159\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.SS1.167.167.159.160.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S5.SS1.167.167.159.160.1.1\" rowspan=\"3\">\n<span class=\"ltx_text\" id=\"S5.SS1.167.167.159.160.1.1.1\">Target</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_tt\" id=\"S5.SS1.167.167.159.160.1.2\" rowspan=\"3\">\n<span class=\"ltx_text\" id=\"S5.SS1.167.167.159.160.1.2.1\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"7\" id=\"S5.SS1.167.167.159.160.1.3\">Number of training samples</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S5.SS1.11.11.3.3\">\n<td class=\"ltx_td ltx_align_center\" colspan=\"3\" id=\"S5.SS1.9.9.1.1.1\">\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.SS1.10.10.2.2.2\">\n</td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"3\" id=\"S5.SS1.11.11.3.3.3\">\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.167.167.159.161.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.1\">5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.2\">10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.4\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.5\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.6\">40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.SS1.167.167.159.161.2.7\">50</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.18.18.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.SS1.18.18.10.10.8\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S5.SS1.18.18.10.10.8.1\">Torque 1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.SS1.18.18.10.10.9\">Direct</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.12.12.4.4.1\">21.3 2.04</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.13.13.5.5.2\">18.9 2.11</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.14.14.6.6.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.14.14.6.6.3.1\">17.4 1.79</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.15.15.7.7.4\">15.8 1.70</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.16.16.8.8.5\">13.7 1.26</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.17.17.9.9.6\">12.2 1.61</td>\n<td 
class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.18.18.10.10.7\">10.8 1.23</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.25.25.17.17\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.25.25.17.17.8\">Only source</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.19.19.11.11.1\">24.0 6.37</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.20.20.12.12.2\">22.3 3.10</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.21.21.13.13.3\">21.0 2.49</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.22.22.14.14.4\">19.7 1.34</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.23.23.15.15.5\">18.5 1.92</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.24.24.16.16.6\">17.6 1.59</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.25.25.17.17.7\">17.3 1.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.32.32.24.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.32.32.24.24.8\">Augmented</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.26.26.18.18.1\">21.8 2.88</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.27.27.19.19.2\">19.2 1.37</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.28.28.20.20.3\">17.8 2.30</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.29.29.21.21.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.29.29.21.21.4.1\">15.7 1.53</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.30.30.22.22.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.30.30.22.22.5.1\">13.3 1.19</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.31.31.23.23.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.31.31.23.23.6.1\">11.9 1.37</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.32.32.24.24.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.32.32.24.24.7.1\">10.7 0.954</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.39.39.31.31\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" 
id=\"S5.SS1.39.39.31.31.8\">HTL-offset</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.33.33.25.25.1\">23.7 6.50</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.34.34.26.26.2\">21.2 3.85</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.35.35.27.27.3\">19.8 3.23</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.36.36.28.28.4\">17.8 2.35</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.37.37.29.29.5\">16.2 3.31</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.38.38.30.30.6\">15.0 3.16</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.39.39.31.31.7\">15.1 2.76</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.46.46.38.38\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.46.46.38.38.8\">HTL-scale</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.40.40.32.32.1\">23.3 4.47</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.41.41.33.33.2\">22.1 5.31</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.42.42.34.34.3\">20.4 3.84</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.43.43.35.35.4\">18.5 2.72</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.44.44.36.36.5\">17.6 2.41</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.45.45.37.37.6\">16.9 2.10</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.46.46.38.38.7\">16.7 1.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.53.53.45.45\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.SS1.53.53.45.45.8\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.SS1.53.53.45.45.8.1\">\\cdashline</span>2-9</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.53.53.45.45.9\">AffineTL-full</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.47.47.39.39.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.47.47.39.39.1.1\">21.2 2.23</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.48.48.40.40.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.48.48.40.40.2.1\">18.8 1.31</span></td>\n<td 
class=\"ltx_td ltx_align_right\" id=\"S5.SS1.49.49.41.41.3\">18.6 2.83</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.50.50.42.42.4\">15.9 1.65</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.51.51.43.43.5\">13.7 1.53</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.52.52.44.44.6\">12.3 1.45</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.53.53.45.45.7\">11.1 1.12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.60.60.52.52\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.60.60.52.52.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.60.60.52.52.9\">AffineTL-const</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.54.54.46.46.1\">21.2 2.21</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.55.55.47.47.2\">18.8 1.44</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.56.56.48.48.3\">17.7 2.44</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.57.57.49.49.4\">15.9 1.58</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.58.58.50.50.5\">13.4 1.15</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.59.59.51.51.6\">12.2 1.54</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.60.60.52.52.7\">10.9 1.02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.67.67.59.59\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.67.67.59.59.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.SS1.67.67.59.59.9\">Fine-tune</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.61.61.53.53.1\">25.0 7.11</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.62.62.54.54.2\">20.5 3.33</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.63.63.55.55.3\">18.6 2.10</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.64.64.56.56.4\">17.6 2.55</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.65.65.57.57.5\">14.1 1.39</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.66.66.58.58.6\">12.6 
1.13</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.67.67.59.59.7\">11.1 1.03</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.74.74.66.66\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.74.74.66.66.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.74.74.66.66.9\">MAML</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.68.68.60.60.1\">29.8 12.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.69.69.61.61.2\">22.5 3.21</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.70.70.62.62.3\">20.8 2.12</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.71.71.63.63.4\">20.3 3.14</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.72.72.64.64.5\">16.7 3.00</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.73.73.65.65.6\">14.4 1.85</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.74.74.66.66.7\">13.4 1.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.82.82.74.74\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.82.82.74.74.9\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.75.75.67.67.1\">\n-SP</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.76.76.68.68.2\">24.9 7.09</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.77.77.69.69.3\">20.5 3.30</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.78.78.70.70.4\">18.8 2.04</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.79.79.71.71.5\">18.0 2.45</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.80.80.72.72.6\">14.5 1.36</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.81.81.73.73.7\">13.0 1.13</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.82.82.74.74.8\">11.6 0.983</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.89.89.81.81\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.89.89.81.81.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.89.89.81.81.9\">PAC-Net</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.83.83.75.75.1\">25.2 
8.68</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.84.84.76.76.2\">22.7 5.60</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.85.85.77.77.3\">20.7 2.65</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.86.86.78.78.4\">20.1 2.16</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.87.87.79.79.5\">18.5 2.77</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.88.88.80.80.6\">17.6 1.85</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.89.89.81.81.7\">17.1 1.38</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.96.96.88.88\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.SS1.96.96.88.88.8\" rowspan=\"5\"><span class=\"ltx_text\" id=\"S5.SS1.96.96.88.88.8.1\">Torque 7</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.SS1.96.96.88.88.9\">Direct</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.90.90.82.82.1\">2.66 0.307</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.91.91.83.83.2\">2.13 0.420</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.92.92.84.84.3\">1.85 0.418</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.93.93.85.85.4\">1.54 0.353</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.94.94.86.86.5\">1.32 0.200</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.95.95.87.87.6\">1.18 0.138</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.96.96.88.88.7\">1.05 0.111</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.103.103.95.95\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.103.103.95.95.8\">Only source</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.97.97.89.89.1\">2.31 0.618</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.98.98.90.90.2\">*1.73 0.560</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.99.99.91.91.3\">*1.49 0.513</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.100.100.92.92.4\">*1.22 
0.269</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.101.101.93.93.5\">*1.09 0.232</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.102.102.94.94.6\">*0.969 0.144</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.103.103.95.95.7\">*0.927 0.170</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.110.110.102.102\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.110.110.102.102.8\">Augmented</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.104.104.96.96.1\">2.47 0.406</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.105.105.97.97.2\">1.90 0.515</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.106.106.98.98.3\">1.67 0.552</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.107.107.99.99.4\">*1.31 0.214</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.108.108.100.100.5\">1.16 0.225</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.109.109.101.101.6\">*0.984 0.149</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.110.110.102.102.7\">*0.897 0.138</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.117.117.109.109\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.117.117.109.109.8\">HTL-offset</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.111.111.103.103.1\">2.29 0.621</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.112.112.104.104.2\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.112.112.104.104.2.1\">1.69 0.507</span>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.113.113.105.105.3\">*1.49 0.513</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.114.114.106.106.4\">*1.22 0.269</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.115.115.107.107.5\">*1.09 0.233</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.116.116.108.108.6\">*0.969 0.144</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.117.117.109.109.7\">*0.925 0.171</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.124.124.116.116\">\n<th class=\"ltx_td ltx_align_center ltx_th 
ltx_th_row\" id=\"S5.SS1.124.124.116.116.8\">HTL-scale</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.118.118.110.110.1\">2.32 0.599</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.119.119.111.111.2\">*1.71 0.516</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.120.120.112.112.3\">1.51 0.513</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.121.121.113.113.4\">*1.24 0.271</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.122.122.114.114.5\">*1.12 0.234</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.123.123.115.115.6\">*0.999 0.175</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.124.124.116.116.7\">0.948 0.172</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.131.131.123.123\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row\" id=\"S5.SS1.131.131.123.123.8\">\n<span class=\"ltx_ERROR undefined\" id=\"S5.SS1.131.131.123.123.8.1\">\\cdashline</span>2-9</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.131.131.123.123.9\">AffineTL-full</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.125.125.117.117.1\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.125.125.117.117.1.1\">2.23 0.554</span>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.126.126.118.118.2\">*1.71 0.501</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.127.127.119.119.3\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.127.127.119.119.3.1\">1.45 0.458</span>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.128.128.120.120.4\">*1.21 0.256</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.129.129.121.121.5\">*1.06 0.219</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.130.130.122.122.6\">*0.974 0.164</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.131.131.123.123.7\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.131.131.123.123.7.1\">0.870 0.121</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.138.138.130.130\">\n<th class=\"ltx_td ltx_th ltx_th_row\" 
id=\"S5.SS1.138.138.130.130.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.138.138.130.130.9\">AffineTL-const</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.132.132.124.124.1\">*2.30 0.565</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.133.133.125.125.2\">*1.73 0.420</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.134.134.126.126.3\">*1.48 0.527</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.135.135.127.127.4\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.135.135.127.127.4.1\">1.20 0.243</span>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.136.136.128.128.5\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.136.136.128.128.5.1\">1.04 0.217</span>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.137.137.129.129.6\">*<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.137.137.129.129.6.1\">0.963 0.161</span>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.138.138.130.130.7\">*0.884 0.136</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.145.145.137.137\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.145.145.137.137.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.SS1.145.145.137.137.9\">Fine-tune</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.139.139.131.131.1\">*2.33 0.511</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.140.140.132.132.2\">*1.62 0.347</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.141.141.133.133.3\">*1.35 0.340</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.142.142.134.134.4\">*1.12 0.165</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.143.143.135.135.5\">*0.959 0.12</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.144.144.136.136.6\">*0.848 0.0824</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.SS1.145.145.137.137.7\">*0.790 0.0547</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.SS1.152.152.144.144\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.152.152.144.144.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.152.152.144.144.9\">MAML</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.146.146.138.138.1\">2.54 1.29</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.147.147.139.139.2\">1.90 0.507</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.148.148.140.140.3\">1.67 0.313</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.149.149.141.141.4\">1.63 0.282</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.150.150.142.142.5\">1.28 0.272</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.151.151.143.143.6\">1.20 0.199</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.152.152.144.144.7\">1.06 0.111</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.160.160.152.152\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S5.SS1.160.160.152.152.9\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S5.SS1.153.153.145.145.1\">\n-SP</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.154.154.146.146.2\">*2.33 0.509</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.155.155.147.147.3\">*1.65 0.378</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.156.156.148.148.4\">*1.35 0.340</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.157.157.149.149.5\">*1.12 0.165</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.158.158.150.150.6\">*0.968 0.114</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.159.159.151.151.7\">*0.858 0.0818</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.SS1.160.160.152.152.8\">*0.802 0.0535</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.SS1.167.167.159.159\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S5.SS1.167.167.159.159.8\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb\" id=\"S5.SS1.167.167.159.159.9\">PAC-Net</th>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" 
id=\"S5.SS1.161.161.153.153.1\">2.24 0.706</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.SS1.162.162.154.154.2\">*1.61 0.394</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.SS1.163.163.155.155.3\">*1.43 0.389</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.SS1.164.164.156.156.4\">*1.24 0.177</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.SS1.165.165.157.157.5\">*1.18 0.100</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.SS1.166.166.158.158.6\">1.13 0.0726</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S5.SS1.167.167.159.159.7\">1.100 0.0589</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.170.170\">We experimentally investigated the learning performance of the affine model transfer, compared to several existing methods. The objective of the task is to predict the feed-forward torques, required to follow the desired trajectory, at seven different joints of the SARCOS robot arm <cite class=\"ltx_cite ltx_citemacro_citep\">(<a class=\"ltx_ref\" href=\"#bib.bib23\" title=\"\">williams2006gaussian, ###reference_23###</a>)</cite>.\nTwenty-one features representing the joint position, velocity, and acceleration were used as the input .\nThe target task is to predict the torque value at one joint.\nThe representations encoded in the intermediate layer of the source neural network for predicting the other six joints were used as the source features .\nThe experiments were conducted with seven different tasks (denoted as Torque 1-7) corresponding to the seven joints.\nFor each target task, a training set of size was randomly constructed 20 times, and the performances were evaluated using the test data.</p>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.177.180\">The following seven methods were compared, including two existing HTL procedures:</p>\n</div>\n<div 
class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.175.175\">Kernel ridge regression with the Gaussian kernel was used for each procedure. The scale parameter was fixed to the square root of the dimension of the input.\nThe regularization parameter in the kernel ridge regression and , and in the affine model transfer were selected through 5-fold cross-validation.\nIn addition to the seven feature-based methods, four weight-based TL methods were evaluated: fine-tuning, MAML <cite class=\"ltx_cite ltx_citemacro_citep\">(<a class=\"ltx_ref\" href=\"#bib.bib25\" title=\"\">Finn2017ModelAgnosticMF, ###reference_25###</a>)</cite>, -SP <cite class=\"ltx_cite ltx_citemacro_citep\">(<a class=\"ltx_ref\" href=\"#bib.bib26\" title=\"\">xuhong2018explicit, ###reference_26###</a>)</cite>, and PAC-Net <cite class=\"ltx_cite ltx_citemacro_citep\">(<a class=\"ltx_ref\" href=\"#bib.bib27\" title=\"\">myung2022pac, ###reference_27###</a>)</cite>.</p>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.177.181\">Table <a class=\"ltx_ref\" href=\"#S5.SS1\" title=\"5.1 Kinematics of the Robot Arm \u2023 5 Experimental Results \u2023 Transfer Learning with Affine Model Transformation\">5.1 ###reference_###</a> summarizes the prediction performance of the seven different procedures for varying numbers of training samples in two representative tasks: Torque 1 and Torque 7.\nThe joint of Torque 1 is located closest to the root of the arm.\nTherefore, the learning task for Torque 1 is less relevant to those for the other joints, and the transfer from Torque 2\u20136 to Torque 1 would not work.\nIn fact, as shown in Table <a class=\"ltx_ref\" href=\"#S5.SS1\" title=\"5.1 Kinematics of the Robot Arm \u2023 5 Experimental Results \u2023 Transfer Learning with Affine Model Transformation\">5.1 ###reference_###</a>, no method showed a statistically significant improvement to <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.181.1\">Direct</span>.\nIn particular, 
<span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.181.2\">Only source</span> failed to acquire predictive ability, and <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.181.3\">HTL-offset</span> and <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.181.4\">HTL-scale</span> likewise showed poor prediction performance owing to the negative effect of the failure in the variable transformation.\nIn contrast, the two affine transfer models showed almost the same predictive performance as <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.181.5\">Direct</span>, which is expressed as its submodel, and successfully suppressed the occurrence of negative transfer.</p>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.177.182\">Because Torque 7 was measured at the joint closest to the end of the arm, its value strongly depends on those at the other six joints, and the procedures with the source features were more effective than in the other tasks.\nIn particular, <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.182.1\">AffineTL</span> achieved the best performance among the other feature-based methods.\nThis is consistent with the theoretical result that the transfer capability of the affine model transfer can be further improved when the risk of learning using only the source features is sufficiently small.</p>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.177.177\">In Table <a class=\"ltx_ref\" href=\"#A3.SS1.SSS2.Px7\" title=\"PAC-Net \u2023 C.1.2 Model Definition and Hyperparameter Search \u2023 C.1 Kinematics of the Robot Arm \u2023 Appendix C Experimental Details \u2023 Acknowledgments and Disclosure of Funding \u2023 6 Conclusions \u2023 5.3 Case Studies in Materials Science \u2023 5.2 Evaluation of Scientific Documents \u2023 5.1 Kinematics of the Robot Arm \u2023 5 Experimental Results \u2023 Transfer Learning with Affine Model Transformation\">C.1.2 ###reference_SSS2.Px7###</a> in Section <a 
class=\"ltx_ref\" href=\"#A3.SS1\" title=\"C.1 Kinematics of the Robot Arm \u2023 Appendix C Experimental Details \u2023 Acknowledgments and Disclosure of Funding \u2023 6 Conclusions \u2023 5.3 Case Studies in Materials Science \u2023 5.2 Evaluation of Scientific Documents \u2023 5.1 Kinematics of the Robot Arm \u2023 5 Experimental Results \u2023 Transfer Learning with Affine Model Transformation\">C.1 ###reference_###</a>, we present the results for all tasks.\nIn most cases, <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.177.1\">AffineTL</span> achieved the best performance among the feature-based methods.\nIn several other cases, <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.177.2\">Direct</span> produced the best results; in almost all cases, <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.177.3\">Only source</span> and the two HTLs showed no advantage over <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.177.4\">AffineTL</span>.\nComparing the weight-based and feature-based methods, we noticed that the weight-based methods showed higher performance with large sample sizes.\nNevertheless, in scenarios with extremely small sample sizes (e.g., or ), <span class=\"ltx_text ltx_font_bold\" id=\"S5.SS1.177.177.5\">AffineTL</span> exhibited comparable or even superior performance.\n</p>\n</div>\n<div class=\"ltx_flex_cell\">\n<p class=\"ltx_p\" id=\"S5.SS1.177.183\">A strength of our method compared with weight-based TL methods, including fine-tuning, is that its performance does not degrade when cross-domain relationships are weak.\nWhile fine-tuning outperformed our method in the case of Torque 7, the performance of fine-tuning degraded significantly as the source-target relationship became weaker, as seen in the Torque 1 case.\nIn contrast, our method was able to avoid negative transfer even in such cases.\nThis characteristic is particularly beneficial because, in many cases, the degree of relatedness between the domains is not 
known in advance.\nFurthermore, weight-based methods can sometimes be unsuitable, especially when transferring knowledge from large models such as LLMs. In these scenarios, fine-tuning all parameters is infeasible, and feature-based TL is preferred. Our approach often outperforms other feature-based methods.</p>\n</div>\n</div>\n</figure>",
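The evaluation protocol described above (kernel ridge regression with a Gaussian kernel, the scale fixed to the square root of the input dimension, and the regularization weight chosen by 5-fold cross-validation) can be sketched in plain Python. The helper names, candidate grid, and toy data below are illustrative stand-ins, not the paper's actual configuration.

```python
import math

def gauss_kernel(x, z, scale):
    # Gaussian kernel; `scale` plays the role of the bandwidth, fixed in the
    # text to the square root of the input dimension
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-d2 / (2.0 * scale ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting for the small dense systems below
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(X, y, lam, scale):
    # kernel ridge regression: solve (K + lam * I) alpha = y
    n = len(X)
    K = [[gauss_kernel(X[i], X[j], scale) for j in range(n)] for i in range(n)]
    for i in range(n):
        K[i][i] += lam
    return solve(K, y)

def krr_predict(X, alpha, scale, x_new):
    # kernel expansion of the fitted regressor at a new input
    return sum(a * gauss_kernel(x, x_new, scale) for a, x in zip(alpha, X))

def cv_select_lambda(X, y, grid, scale, k=5):
    # pick the regularization weight by k-fold cross-validation, as in the text
    folds = [list(range(i, len(X), k)) for i in range(k)]
    best, best_err = grid[0], float("inf")
    for lam in grid:
        err = 0.0
        for fold in folds:
            tr = [i for i in range(len(X)) if i not in fold]
            Xtr = [X[i] for i in tr]
            alpha = krr_fit(Xtr, [y[i] for i in tr], lam, scale)
            err += sum((krr_predict(Xtr, alpha, scale, X[j]) - y[j]) ** 2 for j in fold)
        if err < best_err:
            best, best_err = lam, err
    return best
```

In the setting of the table, the same fitting routine would serve as the Direct baseline on target data only, and as the building block of the other feature-based procedures.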
+ "capture": "Table 1: \nPerformance on predicting the torque values at the first and seventh joints of the SARCOS robot arm.\nThe mean and standard deviation of the RMSE are reported for varying numbers of training samples.\nFor each task and , the case with the smallest mean RMSE is indicated by the bold type. An asterisk indicates a case where the RMSEs of 20 independent experiments were significantly improved over Direct at the 1% significance level, according to the Welch\u2019s t-test.\n represent the dimension of the original input (i.e., ). \n"
+ }
+ },
+ "image_paths": {},
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2210.09745v2"
+ }
20240119/2211.12121v3.json ADDED
@@ -0,0 +1,499 @@
+ {
+ "title": "Least squares approximations in linear statistical inverse learning problems",
+ "abstract": "Statistical inverse learning aims at recovering an unknown function from randomly scattered and possibly noisy point evaluations of another function , connected to via an ill-posed mathematical model.\nIn this paper we blend statistical inverse learning theory with the classical regularization strategy of applying finite-dimensional projections.\nOur key finding is that coupling the number of random point evaluations with the choice of projection dimension, one can derive probabilistic convergence rates for the reconstruction error of the maximum likelihood (ML) estimator. Convergence rates in expectation are derived with a ML estimator complemented with a norm-based cut-off operation. Moreover, we prove that the obtained rates are minimax optimal.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Statistical inverse learning aims at recovering an unknown function from randomly scattered and possibly noisy point evaluations of another function , connected to via an ill-posed mathematical model.\nStatistical learning has a long tradition in inverse problems going back to the works [40 ###reference_40###, 4 ###reference_4###] and has recently gained increasing attention in literature.\nA crucial component in addressing statistical inverse learning is the choice of regularization scheme [17 ###reference_17###] needed to stabilize the inverse problem.\nThe success of a given inverse learning method is often described in (probabilistic) terms of the reconstruction error and, in particular, its convergence speed with respect to increasing number of point evaluations. To this effect, we highlight the recent work by Blanchard and M\u00fccke [5 ###reference_5###], where minimax optimal convergence rates are derived for the general spectral regularization approach in Hilbert spaces under certain classes of sampling measure. The spectral approach has since been extended to non-linear inverse problems [43 ###reference_43###], adaptive parameter choice rules [30 ###reference_30###] and convex regularization penalties [7 ###reference_7###].\nIn this paper we blend statistical inverse learning theory with the approach of regularization by projection. This approach is based on the idea that projecting either the domain or the range of the forward operator to a finite-dimensional subspace stabilizes the inverse problem [17 ###reference_17###]. 
As one key motivation for studying projection-based strategies, common iterative algorithms such as the Krylov space methods can be interpreted as projection-based regularization methods for inverse problems.\nIt is well-known that convergence rates for general inverse problems cannot be shown without further assumptions or applying projection in both the domain and the range of the forward operator [46 ###reference_46###].\nOur key finding is that the statistical learning framework with a finite number of random data point evaluations, coupled with a general finite-dimensional projection in the operator domain, provides a similar remedy for the convergence study. When coupling the choice of projection dimension\nwith the number of random point evaluations, we are able to derive convergence rates for the probabilistic reconstruction error of the maximum likelihood (ML) estimator in the range of the projection. Moreover, convergence rates for the expected reconstruction error are derived for the ML estimator complemented with a norm-based cut-off. To complete the picture, we prove that the attained rates are minimax optimal.\nIn terms of the frequentist approach to statistical inverse problems, our result is related to work by Math\u00e9 and Pereverzev [34 ###reference_34###], who provide convergence rates for optimal discretization schemes in linear models. In the spirit of statistical learning, the projection applied in the range in [34 ###reference_34###] could be interpreted as taking dual pairings in a reproducing kernel Hilbert space with the corresponding kernel function, thus giving rise to point evaluation data. However, the design in [34 ###reference_34###] is fixed, while in the learning context (including this paper) it is considered to be randomly generated by an unknown distribution.\nLet us describe this phenomenon more rigorously. Suppose is a Borel set and is a probability measure on , which we will occasionally refer to as the design measure. 
Let be a separable real infinite-dimensional Hilbert space. We investigate the measurement model\nwhere is the datum, is a compact, one-to-one linear operator, and is the unknown function.\nThe range is assumed to be a reproducing kernel Hilbert space (RKHS) induced by the positive semidefinite kernel . Moreover, without loss of generality we assume .\nLet be i.i.d. with respect to the probability measure . We are interested in finding an approximation for the ground truth based on an ensemble of noisy observations such that\nwhere stands for the noise-free data corresponding to the ground-truth value , describes noise level and are independent and normally distributed. By denoting , we reformulate (2 ###reference_###) into a vectorized form\nwhere is the evaluation operator at point set and .\nTo study the limit of increasing , we denote\nwhere is the canonical injection map, and introduce the corresponding normal operator\nFor more properties of the operator , see [15 ###reference_15###, Prop. 19].\nBelow, a key structure is the underlying discretization scheme of , which will stay fixed throughout the paper.\nLet , , be finite-dimensional subspaces of such that and let be an orthogonal projection.\nWe call a sequence admissible subspaces if for all and .\nThe main purpose of definition 1.1 ###reference_heorem1### is to make sure that any can be approximated for a given accuracy in some for large enough.\nWe note that the nested structure of the subspaces is only utilized in the proof of the minimax optimality result below.\nWe seek an approximate solution for the ground truth by defining the ML estimator as the minimum-norm least squares solution to . 
More precisely, we set\nwhere stands for the norm induced by the empirical inner product with .\nThe estimator is defined up to a zero-measurable set as it is always unique and can be represented by a linear mapping applied to as we will see later.\nIn statistical inverse problems, it is well-known that further restrictions regarding the ground truth , so-called source conditions, are needed in order to derive concentration rates for regularized estimators [9 ###reference_9###].\nClassical source conditions often imply certain smoothness of via the mapping properties of . Here, we can impose more explicit approximation conditions to by connecting the source set to the approximation rates obtained in subspaces by defining\nwhere and , , are admissible subspaces. Above, we use convention . For example, finite-element approximations in Sobolev space are typically bounded by some higher order Sobolev norm , , of the function and a power of the mesh size [44 ###reference_44###]. Here, this would correspond to \ncoinciding with an -Sobolev ball with parameter dependent on and the dimension of the domain.\nTo specify assumptions on the design measure let denote all probability measures on domain . We introduce the following parametrized subset of design measures\nwhere stands for the smallest non-zero eigenvalue, i.e., the smallest eigenvalue of the finite-dimensional restriction . Also, recall that is always positive as is strictly positive-definite on for any .\nRestricting to the subsets defined in (1 ###reference_###) quantifies the ill-posedness of the statistical inverse problems. Good intuition is perhaps obtained by recalling that corresponds to the limit of infinite observations. For example, the set specifies the worst instability of this limiting problem all across the subspaces . We assume first to derive an upper bound with given parameter choice rules. Then, a lower bound to concentration is given in , i.e., in a set restricting the stability. 
We only consider polynomial decay rates, i.e., mildly ill-posed problems, for convenience.\nNext the probabilistic reconstruction error is characterized by the following theorem.\nLet be a sequence of admissible subspaces, is a Hilbert\u2013Schmidt operator and\nsuppose and \nfor some constants . Moreover, we assume that is the ML estimator defined by identity (5 ###reference_###).\nLet satisfy\nThere exists a constant depending on , such that\nwith probability greater than .\nNotice that with the condition (8 ###reference_###) theorem 1.2 ###reference_heorem2### characterizes the error for tail probabilities. To derive convergence results in expectation, the authors in [5 ###reference_5###] utilize a priori bound for the Tikhonov regularized solution valid almost surely, which can be merged with a sharper result concerning the tail probabilities. Here, such a priori bound is not immediately available as the ML estimator can have arbitrary large norm with positive probability. To mitigate this effect, we consider a non-linear truncated estimator inspired by earlier work in [12 ###reference_12###].\nMore precisely, let satisfy\nwhere is fixed. We define a non-linear estimator by setting\nwhere is chosen depending on parameters specified below.\nIn the following result we characterize the upper rate of convergence in (see [5 ###reference_5###, Def. 3.1]) for . To that end, let be the set of regular conditional probability distributions such that is distributed according to the Gaussian distribution for some .\nMoreover, let us denote the admissible class of models by\nwhere and are a subset of probability measures and a source condition, respectively, specified by some fixed admissible subspaces .\nLet be a sequence of admissible subspaces and suppose satisfy . 
Moreover, we assume that is the ML estimator defined by identity (11 ###reference_###).\nFor the parameter choice\nwith given by\nwith suitably large constant depending on and ,\nit holds that\nwhere , and\n\nFollowing the techniques utilized in [5 ###reference_5###] it is now possible to prove minimax optimality of the upper convergence rate provided above. We formulate this result as the following theorem.\nLet , , is a Hilbert\u2013Schmidt operator and consider the design measures\nand given by (6 ###reference_###).\nThen the sequence of estimators with parameter choice rule (12 ###reference_###) and (13 ###reference_###) is strong minimax optimal in for all over the class , i.e., the upper rate of convergence given by theorem 1.3 ###reference_heorem3### is also strong minimax lower rate of convergence such that\nwhere the infimum is taken over all estimators (measurable mappings) .\nThere is partial overlap between results provided by Blanchard and M\u00fccke in [5 ###reference_5###] and those proved here. Namely,\ntruncated singular value decomposition is a classical method of spectral regularization,\nwhere the unknown is projected to a finite number of eigenvectors of .\nSuppose has a polynomially decaying spectrum and is spanned by the eigenvectors corresponding to the largest eigenvalues.\nFirst, it is straightforward to prove that the classical source condition satisfies\nSecond, we have , where are the eigenvalues of in decreasing order, giving rise to the interpretation of our parameter as the polynomial decay of spectrum of (denoted by in [5 ###reference_5###, p. 983]). Furthermore, we have\nfor any .\nThird, more general noise model is utilized in [5 ###reference_5###]. 
For normally distributed noise, this would correspond to .\nFinally, we notice that the qualification of the spectral projection is arbitrary [5 ###reference_5###, Example 2.16].\nAs a consequence, the rate demonstrated by Blanchard and M\u00fccke is given by\nwhere we identified , and .\nLet us also point out an interesting difference in the respective results: In our work, the upper convergence rate requires design measure to be limited to a set of type , where the ill-posedness of the inverse problem is limited. Similarly, the minimax optimality is given in a framework further limiting design measure to a set that restricts \u201dwell-posedness\u201d of the problem. Noticeably the setup is exactly the opposite in [5 ###reference_5###], where upper rate requires limiting design measure to .\nWe would argue that the former is more natural as intuitively worse ill-posedness should lead to deteriorated identifiability and, therefore, slower convergence rate (likewise, better conditioning should lead to improved convergence rate).\nHowever, the difference can be attributed to the source conditions: the classical source condition intertwines the ill-posedness of the problem with the assumption on the smoothness of the ground truth, while in our case the dependence is less direct. While our source condition is independent of the forward operator, some interplay is required due to same subspace structure being applied in the assumptions regarding the design measure."
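The projected minimum-norm least squares construction behind the ML estimator (5) can be caricatured in finite dimensions: point evaluations become inner products with feature vectors, the admissible subspaces are the spans of the first m coordinates, and the estimator solves the m-dimensional normal equations. Everything concrete below (the feature map, dimensions, and sample) is an illustrative stand-in, not the paper's setting.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def feature(x, dim):
    # stand-in for the evaluation functional at design point x: a smooth bump
    # centered at each basis index (hypothetical, chosen for illustration only)
    return [1.0 / (1.0 + (4.0 * (x - j / dim)) ** 2) for j in range(dim)]

def projected_ls(xs, ys, dim, m, jitter=1e-12):
    # least squares restricted to V_m = span of the first m coordinates:
    # assemble and solve the m x m normal equations G c = b
    G = [[jitter * (i == j) for j in range(m)] for i in range(m)]
    b = [0.0] * m
    for x, y in zip(xs, ys):
        a = feature(x, dim)[:m]
        for i in range(m):
            b[i] += a[i] * y
            for j in range(m):
                G[i][j] += a[i] * a[j]
    return solve(G, b)

def train_rss(xs, ys, dim, coeffs):
    # residual sum of squares of the projected estimator on the sample
    m = len(coeffs)
    rss = 0.0
    for x, y in zip(xs, ys):
        a = feature(x, dim)[:m]
        rss += (sum(c * ai for c, ai in zip(coeffs, a)) - y) ** 2
    return rss
```

Because the subspaces are nested, the training residual is nonincreasing in m, mirroring the role the projection dimension plays in the rates above.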
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "Literature overview",
+ "text": "The literature on regularization by projection or discretization is extensive; for a wide perspective on the deterministic problem in terms of projections both in domain and range of a (nonlinear) forward operator, see e.g.\n[38 ###reference_38###, 18 ###reference_18###, 50 ###reference_50###, 42 ###reference_42###, 19 ###reference_19###, 24 ###reference_24###, 6 ###reference_6###, 21 ###reference_21###, 14 ###reference_14###, 22 ###reference_22###, 35 ###reference_35###, 27 ###reference_27###, 45 ###reference_45###, 28 ###reference_28###].\nFor adaptive or multigrid approaches, see [25 ###reference_25###, 26 ###reference_26###, 32 ###reference_32###]. We also mention a recent data-driven approach, where the projections are designed based on data [1 ###reference_1###].\nProjection methods have also been studied in the framework of statistical inverse problems, where minimax optimality of the method is understood in various settings\n[53 ###reference_53###, 39 ###reference_39###, 23 ###reference_23###, 16 ###reference_16###, 33 ###reference_33###, 31 ###reference_31###, 10 ###reference_10###, 11 ###reference_11###].\nNotice that the framework provided in aforementioned papers for statistical inverse problems is connected to the learning setting in the sense that point evaluation of the data function can be considered as a projection to kernel functions on a reproducing kernel Hilbert space. However, point evaluations arising from a random measure are not covered by this theory.\nIn this regard, our work has close connections to the reproducing kernel methods in learning theory, which is a popular field with a vast body of literature. Let us note that connections of kernel regression methods to regularization theory were first studied in [15 ###reference_15###, 52 ###reference_52###, 29 ###reference_29###] and the line of research has since become widely popular. 
Early work on upper rates of convergence in a reproducing kernel Hilbert space was carried out by Smale and Cucker in [13 ###reference_13###], where they utilized a covering number technique. After the initial success, there has been a long line of subsequent work [52 ###reference_52###, 47 ###reference_47###, 48 ###reference_48###, 2 ###reference_2###, 54 ###reference_54###, 8 ###reference_8###] providing convergence rates comparable to [5 ###reference_5###].\nLet us also point out that there is an avenue of research [36 ###reference_36###, 49 ###reference_49###] considering penalties of type . Notice that the notions of convergence in the usual learning context and in the inverse problem setting are different and not directly comparable: in learning theory the convergence rates are derived in norm, where is the unknown sampling measure generating the data points. However, since the solution and data space, i.e. and , differ for the inverse problem, it is natural to consider modes of convergence in . For a related discussion and a brief overview of the relevant convergence rate literature, see [37 ###reference_37###].\nIn terms of inverse learning problems, we mention that Tikhonov regularization of non-linear inverse problems is considered in [43 ###reference_43###] and adaptive parameter choice rules are studied\nin [30 ###reference_30###]. Moreover, for distributed learning of inverse problems, see [20 ###reference_20###] and references therein.\nThis paper is organized as follows. In section 2 we provide mathematical preliminaries and record well-known concentration results that will be utilized later. Section 3 contains a proof for theorem 1.2 ###reference_heorem2###, while theorem 1.3 ###reference_heorem3### is derived in section 4. In section 5 we prove that the obtained rates are minimax optimal. Finally, in section 6 we provide brief conclusions.
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Mathematical preliminaries",
+ "text": "Recall the definition of a reproducing kernel Hilbert space.\nA Hilbert space of functions with inner product is called the\nreproducing kernel Hilbert space (RKHS) corresponding to a symmetric, positive definite kernel if\nfor all , , as a function of its second argument, belongs to ,\nfor all and , .\n\nWe assume below, without loss of generality, that the kernel elements are uniformly bounded and, consequently, for any . Moreover, recall that was assumed to be strictly positive definite. Here, we assign with the empirical inner product given by\nLet us now consider the adjoint operator that satisfies\nfor any and . Therefore, we have\nand, consequently,\nIn similar vein, we introduce a short-hand notation for the normal operator\nWe write , when the ensemble is identified with one point .\nClearly, we have , when .\nThe operator can be considered as an empirical estimator of defined by identity (4 ###reference_###). Its concentration rate is known and made precise by the corollary 2.3 ###reference_heorem3### below. To prove corollary 2.3 ###reference_heorem3###, the following general concentration result is used and will be utilized also elsewhere in this paper.\nLet be a probability space and a random variable to a real separable Hilbert space . Assume that there are two positive constants and such that for any\nIf the sample is drawn i.i.d from according to , then, for any , we have\nwith probability greater than , where\nIn particular, inequality (14 ###reference_###) holds if\n\nLet us briefly note that a normal random variable satisfies (14 ###reference_###) with .\nLet be a Hilbert\u2013Schmidt operator and .\nFor any sample size and it holds that\nwith probability greater than .\nIn what follows, stands for the Moore\u2013Penrose pseudoinverse of an operator [17 ###reference_17###].\nNotice that we frequently utilize the following two properties of pseudoinverse\nand\nwhere is a linear bounded operator."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Concentration result",
+ "text": "In this section we assume that is a Hilbert\u2013Schmidt operator and derive the proof for theorem 1.2 ###reference_heorem2###. To this end,\nwe will make gradual assumptions on the interplay between and subspaces \nto build towards the framework assumed in the theorem.\nFor convenience, let us abbreviate\nfor any .\nLater, we provide the precise connection of the final estimate to constants and and, therefore, keep track of them below.\nNext, consider the definition of . A least squares solution to in clearly solves the normal equation\nIt follows that is obtained with the pseudoinverse\nand, consequently,\nwhere we utilized identity (16 ###reference_###) for and abbreviated\nIn literature, the term is often called approximation error, while is referred to as the variance.\nLet us derive some concentration properties for utilizing corollary 2.3 ###reference_heorem3###. Carefully notice that below we vary the operator norm between the standard norm of linear bounded operators and the Hilbert\u2013Schmidt norm . Recall that and\nfor any linear bounded operators .\nLet satisfy\nWith probability greater than , it holds that\n\nReformulating inequality (20 ###reference_###) we observe that\nNow corollary 2.3 ###reference_heorem3### immediately yields the result.\nWe note that since for any satisfying and , we also have .\nSuppose and\ninequality (21 ###reference_###) holds.\nThen it follows that\n\nFirst observe that is a bijection from onto itself, since , and by inequality (21 ###reference_###) we have . 
Therefore, by Neumann series theorem also is bijection on and we have\nFor inequality (22 ###reference_###) we notice that\nby our assumptions, where we also applied (17 ###reference_###).\nFor remaining inequalities, we find by some algebraic manipulation that\nFirst, by triangle inequality we obtain\nand since , it follows that\nThis yields inequality (23 ###reference_###).\nSecond, we multiply the identity (27 ###reference_###) from right by and apply triangle inequality together with a norm bounds so that\nwhere we applied the inequality (22 ###reference_###). Now inequality (24 ###reference_###) follows by noting that\ndue to identity (16 ###reference_###) and our assumption .\nIn the same vein, multiplying identity (27 ###reference_###) from right by yields\nand, consequently,\nNow we have\nwhich concludes the proof.\nSuppose that and .\nFor any , let satisfy\nThen it holds that\nfor holds with probability greater than , where is given in equation (19 ###reference_###).\nLet us first observe that\nWhen the bound (21 ###reference_###) holds, we have that\nwhere we used identities (16 ###reference_###) and (26 ###reference_###).\nNow combining inequality (25 ###reference_###) in proposition 3.3 ###reference_heorem3### with the source condition implies\nwith the given probability.\nIn what follows, we abuse notation by denoting the square root of the pseudoinverse by for convenience.\nSuppose . 
Let satisfy\nThere exists a constant dependent on and such that\nwith probability greater than , where is the noise level.\nLet us decompose into three terms\nBelow we use the observation that for satisfying (29 ###reference_###) we have by lemma 3.1 ###reference_heorem1### that with probability greater than and, therefore, the inequalities of proposition 3.3 ###reference_heorem3### will be available with the same probability.\nFirst, under the assumption of inequality (21 ###reference_###), we obtain by inequality (23 ###reference_###) that\nwith probability greater than , where we used the assumption that .\nFor the second term we find that\nwith probability greater than , where we applied the Cordes inequality [3 ###reference_3###, theorem IX.2.1-2] and inequality (24 ###reference_###) from proposition 3.3 ###reference_heorem3###.\nFor the third term, let us write\nConsequently, we have identity\nNotice that , , are independent -valued random variables. Recall that and\nwhere we identified the linear operator with a real number and utilized a rough upper bound .\nDue to boundedness of we observe that\nwhere .\nMoreover, abbreviating we obtain\ndue to positivity of the operator .\nIn consequence, we obtain\nsince and . Moreover, we have\ndue to (17 ###reference_###) and the cyclic property of the trace.\nTo sum up, we obtain\nfor any .\nApplying theorem 2.2 ###reference_heorem2### yields that\nwith probability greater than . Combining the error estimates for , , we obtain the result.\nProof of theorem 1.2 ###reference_heorem2###.\nThe result follows by combining theorems 3.5 ###reference_heorem5### and 3.7 ###reference_heorem7###. Namely, if we have independent events and occurring with probability greater than and , respectively, then and occur simultaneously with probability\nThis concludes the proof.
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Expected reconstruction error",
+ "text": "Recall the definition of given in (10 ###reference_###)\nand by assumption it also holds that . We define our nonlinear estimator according to\nwhere is set below and will depend on , and .\nIn the following, consider as a complete probability space describing all possible events of our learning problem (1 ###reference_###). We factorize into smaller subsets that can be individually quantified.\nLet us denote\nMoreover, let\nand . Notice that by lemma 3.1 ###reference_heorem1### we have\nUtilizing inequality\nfor any and the trivial inequality , we can decompose the expected reconstruction error as follows\nwhere in both second and third term we utilized the truncation.\nA direct estimate to the third term in error decomposition is given by (30 ###reference_###).\nLet us now derive estimates also for the first and second error term in (4 ###reference_###).\nConsider the model for some constants , where , and suppose that is the ML estimator defined by identity (11 ###reference_###).\nWe have\nfor , where the constant depending on and , .\nDefine a positive random variable\nNext, recall from theorem 1.2 ###reference_heorem2### that for satisfying\nwe have\nwhere\nand\nfor some constant depending on , . Similarly, we observe that\ndue to the source condition and requirement .\nNow it follows directly from [5 ###reference_5###, Cor. C.2.] that for any positive we have\nwith a constant depending on . 
The result follows by estimates of type for .\nLet be a sequence of admissible subspaces and\nsuppose and \nfor some constants .\nMoreover, let be the ML estimator defined by identity (11 ###reference_###).\nFor the parameter choice\nit holds that\n\nWe observe first that\nwhere we made use of .\nFor , we notice by identity (18 ###reference_###) and proposition 3.5 ###reference_heorem5### that\nand, consequently,\nby the choice of in equation (32 ###reference_###).\nNote that in , we have that\nThen, it follows that\nwhere we applied identity .\nIndeed, random variables have subexponential distribution, more precisely, with , . It follows that\nsee e.g. [51 ###reference_51###].\nBy Bernstein\u2019s inequality, a random variable satisfies the one-sided tail bound\nBy considering , we observe that and taking , we conclude that\nwhich completes the proof.\n\nCombining propositions 4.1 ###reference_heorem1### and 4.3 ###reference_heorem3### with inequality (4 ###reference_###), we can state the following upper bound to the reconstruction error\nLet be a sequence of admissible subspaces and\nsuppose and \nfor some constants .\nMoreover, let be the ML estimator defined by identity (11 ###reference_###) with given by (32 ###reference_###).\nWe have that\nfor , where the inequality is up to a constant depending on and , .\nNow we are ready to prove our main result.\nProof of theorem 1.3 ###reference_heorem3###.\nLet us consider the upper bound obtained in corollary 4.5 ###reference_heorem5### and\nfocus on the sum\nAn ansatz with variables , minimizes with parameters\nand induces a bound\nWe will next confirm that applying the ansatz with values in (35 ###reference_###) to the other terms in the upper bound of (33 ###reference_###) yields slower rates of convergence.\nFirst, applying the ansatz to the second term, we observe that\nSecond, when satisfy , we have that and therefore grows polynomially. 
In consequence, the last terms on the right-hand side of inequality (33 ###reference_###) decay exponentially w.r.t. , and we find that dominates the expectation up to a constant with parameter choice rule indicated by (35 ###reference_###) asymptotically w.r.t .\nThis yields the result."
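The norm-based cut-off in (11) is a one-line operation once the ML estimate is represented by coefficients in an orthonormal basis. The cut-off level in the sketch below is an illustrative stand-in for the threshold appearing in (10), whose constants did not survive extraction.

```python
import math

def hilbert_norm(coeffs):
    # norm of an element represented by coefficients in an orthonormal basis
    return math.sqrt(sum(c * c for c in coeffs))

def truncated_estimator(ml_coeffs, cutoff):
    # norm-based cut-off in the spirit of (11): keep the ML estimate when its
    # norm is controlled, otherwise fall back to the zero element
    if hilbert_norm(ml_coeffs) <= cutoff:
        return list(ml_coeffs)
    return [0.0] * len(ml_coeffs)
```

The truncation supplies the a priori bound that the plain ML estimator lacks, which is what makes the expectation bounds in this section possible.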
+ },
+ {
+ "section_id": "5",
37
+ "parent_section_id": null,
38
+ "section_name": "Minimax optimality",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "5.1",
43
+ "parent_section_id": "5",
44
+ "section_name": "Preliminaries",
45
+ "text": "In this section we follow the main steps of the strategy devised in [5] for proving minimax optimality. We note that here the proof is in some parts simplified, since our source condition does not depend on the normal operator, therefore allowing more explicit arguments that directly yield strong minimax optimality.\nLet us construct the necessary set of tools for the proof. For the moment, consider a general model of probability measures on a measurable space . Further, let be a metric.\nNow suppose and recall the definition of the Kullback\u2013Leibler divergence between and given by\nif is absolutely continuous w.r.t. .\nFor -fold tensors, i.e., on , we have\nLet us briefly describe the procedure to prove (strong) minimax optimality below. For any large enough we aim to find with the following properties:\nwe can find parameters such that and , are -separated with respect to the associated distance, while the (data-generating) distributions have small mutual Kullback\u2013Leibler divergence.\nWe utilize the following fundamental lower bound\nTo prove that satisfies the claim in theorem 1.4, we show that can be chosen so that\nhold simultaneously for some positive constant independent of .\nThis result will provide the strong minimax lower bound described in theorem 1.4, while the upper bound is obtained in theorem 1.3.\nTo begin with, let us record the main auxiliary results that will be utilized.\nAssume that and suppose contains elements such that\nFor some and for any we have ,\nFor any , is absolutely continuous with respect to and\nfor some .\nThen it follows that\n\nIn addition, the following lemma is key in constructing the required elements .\nFor any there exists an integer and such that\nfor any with it holds\nand\nwhere ."
46
+ },
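The tensorization step referenced in the section above (its formula was lost in extraction) is the standard product rule for the Kullback–Leibler divergence; a sketch in generic notation, for probability measures $P \ll Q$:

```latex
% KL divergence of n-fold product measures tensorizes:
\mathrm{KL}\left( P^{\otimes n} \,\middle\|\, Q^{\otimes n} \right)
  = n \, \mathrm{KL}\left( P \,\|\, Q \right),
```

since the log-likelihood ratio of the product measure is the sum of $n$ i.i.d. copies of the single-sample log-likelihood ratio.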
47
+ {
48
+ "section_id": "5.2",
49
+ "parent_section_id": "5",
50
+ "section_name": "Proof of strong minimax optimality",
51
+ "text": "Let us now turn to the concrete problem at hand.\nWe consider as the metric induced by the norm, i.e., . For we associate the following joint measure\nwhere . We observe that if , then . Moreover, by [5, Prop. 6.2] for we have\nWe assume that form an orthonormal basis with the property that for any and forms an orthonormal basis for the subspace .\nSuch a basis can be constructed, e.g., by the Gram\u2013Schmidt orthonormalization procedure.\nAssume that and let . For any with there exists and functions satisfying the following three conditions:\nIt holds that and\nfor any with .\nLet be given by (39). Then it holds\nfor any with , where the constant depends on .\nIt holds that .\n\nWe define\nand observe that with we have satisfying condition of lemma 5.2. In consequence, let and be given by the same lemma\nand define\nfor .\nWe observe that since one can write\n\nand show that\nMoreover, we have\nwhich completes the proof of claim (i).\nConsider now claim (ii). We have that\nFor claim (iii), it remains to observe that\nwhich concludes the proof.\nNow we are ready to prove the minimax optimality.\nProof of theorem 1.4. Let us now fix parameters \nand consider the model , where\nand given by (6), parametrized by admissible subspaces , . By theorem 1.3 we know that\nyields an upper rate of convergence in . It remains to show that is also a strong minimax lower rate of convergence.\nTo this end, we construct a rule (which we will invert below to obtain ) for small enough that yields the lower bound in the spirit of (37) and (38). For let and be given by proposition 5.3. Let us then consider the conditions of proposition 5.1. Clearly, condition (i) is satisfied due to the first result in proposition 5.3. 
For the second condition we have\nSetting\nensures condition (ii).\nNow we obtain by proposition 5.1 that\nwhere the constant is independent of .\nFinally, inverting identity (42) translates into a bound\nwhich is aligned with the rate , implying that is bounded away from zero and, in conclusion, we have\nThis completes the proof."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusions",
57
+ "text": "In this work we have studied statistical inverse learning for a regularization strategy obtained by projecting the unknown onto finite-dimensional subspaces. We have demonstrated that our nonlinear estimator, which is constructed as a norm cut-off of a linear maximum likelihood estimator, achieves minimax optimal convergence rates. Indeed, in the particular example of truncated singular value decomposition, our rate coincides with the known minimax optimal rate.\nProjection methods are often motivated by iterative schemes, where the number of iteration steps identifies the dimension of a subspace to which the unknown is projected. Here, we require that the subspace structure is fixed. It remains future work to study interesting iterative methods for which conditions such as those imposed by the set are satisfied with high probability; e.g., if the structure depends on the observational data , is data-driven, or a spectral basis is approximated by a power method. Moreover, future work naturally includes extending our results to nonlinear inverse problems."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {},
63
+ "validation": true,
64
+ "references": [
65
+ {
66
+ "1": {
67
+ "title": "Data driven regularization by projection.",
68
+ "author": "Andrea Aspri, Yury Korolev, and Otmar Scherzer.",
69
+ "venue": "Inverse Problems, 36(12):125009, 2020.",
70
+ "url": null
71
+ }
72
+ },
73
+ {
74
+ "2": {
75
+ "title": "On regularization algorithms in learning theory.",
76
+ "author": "Frank Bauer, Sergei Pereverzev, and Lorenzo Rosasco.",
77
+ "venue": "Journal of complexity, 23(1):52\u201372, 2007.",
78
+ "url": null
79
+ }
80
+ },
81
+ {
82
+ "3": {
83
+ "title": "Matrix analysis, volume 169.",
84
+ "author": "Rajendra Bhatia.",
85
+ "venue": "Springer Science & Business Media, 2013.",
86
+ "url": null
87
+ }
88
+ },
89
+ {
90
+ "4": {
91
+ "title": "Consistency and rates of convergence of nonlinear Tikhonov\nregularization with random noise.",
92
+ "author": "Nicolai Bissantz, Thorsten Hohage, and Axel Munk.",
93
+ "venue": "Inverse Problems, 20(6):1773, 2004.",
94
+ "url": null
95
+ }
96
+ },
97
+ {
98
+ "5": {
99
+ "title": "Optimal rates for regularization of statistical inverse learning\nproblems.",
100
+ "author": "Gilles Blanchard and Nicole M\u00fccke.",
101
+ "venue": "Foundations of Computational Mathematics, 18(4):971\u20131013,\n2018.",
102
+ "url": null
103
+ }
104
+ },
105
+ {
106
+ "6": {
107
+ "title": "Self-regularization of projection methods with a posteriori\ndiscretization level choice for severely ill-posed problems.",
108
+ "author": "Gottfried Bruckner and Sergei V Pereverzev.",
109
+ "venue": "Inverse Problems, 19(1):147, 2002.",
110
+ "url": null
111
+ }
112
+ },
113
+ {
114
+ "7": {
115
+ "title": "Convex regularization in statistical inverse learning problems.",
116
+ "author": "Tatiana A Bubba, Martin Burger, Tapio Helin, and Luca Ratti.",
117
+ "venue": "arXiv preprint arXiv:2102.09526, 2021.",
118
+ "url": null
119
+ }
120
+ },
121
+ {
122
+ "8": {
123
+ "title": "Optimal rates for the regularized least-squares algorithm.",
124
+ "author": "Andrea Caponnetto and Ernesto De Vito.",
125
+ "venue": "Foundations of Computational Mathematics, 7(3):331\u2013368, 2007.",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "9": {
131
+ "title": "Nonparametric statistical inverse problems.",
132
+ "author": "Laurent Cavalier.",
133
+ "venue": "Inverse Problems, 24(3):034004, 2008.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "10": {
139
+ "title": "Sharp adaptation for inverse problems with random noise.",
140
+ "author": "Laurent Cavalier and Alexandre Tsybakov.",
141
+ "venue": "Probability Theory and Related Fields, 123(3):323\u2013354, 2002.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "11": {
147
+ "title": "Statistical approach to some ill-posed problems for linear partial\ndifferential equations.",
148
+ "author": "Pao-Liu Chow, Ildar A Ibragimov, and Rafail Z Khasminskii.",
149
+ "venue": "Probability Theory and Related Fields, 113(3):421\u2013441, 1999.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "12": {
155
+ "title": "On the stability and accuracy of least squares approximations.",
156
+ "author": "Albert Cohen, Mark A Davenport, and Dany Leviatan.",
157
+ "venue": "Foundations of Computational Mathematics, 13(5):819\u2013834, 2013.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "13": {
163
+ "title": "Best choices for regularization parameters in learning theory: on the\nbias-variance problem.",
164
+ "author": "Felipe Cucker and Steve Smale.",
165
+ "venue": "Foundations of Computational Mathematics, 2(4):413\u2013428, 2002.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "14": {
171
+ "title": "Error controlled regularization by projection.",
172
+ "author": "Wolfgang Dahmen and Markus J\u00fcrgens.",
173
+ "venue": "Electronic transactions on numerical analysis, 25:67\u2013100,\n2006.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "15": {
179
+ "title": "Discretization error analysis for Tikhonov regularization.",
180
+ "author": "Ernesto De Vito, Lorenzo Rosasco, and Andrea Caponnetto.",
181
+ "venue": "Analysis and Applications, 4(01):81\u201399, 2006.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "16": {
187
+ "title": "Nonlinear solution of linear inverse problems by wavelet-vaguelette\ndecomposition.",
188
+ "author": "David L Donoho.",
189
+ "venue": "Applied and Computational Harmonic Analysis, 2(2):101\u2013126,\n1995.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "17": {
195
+ "title": "Regularization of inverse problems, volume 375.",
196
+ "author": "Heinz Werner Engl, Martin Hanke, and Andreas Neubauer.",
197
+ "venue": "Springer Science & Business Media, 1996.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "18": {
203
+ "title": "Convergence of a general projection method for an operator equation\nof the first kind.",
204
+ "author": "CW Groetsch.",
205
+ "venue": "In Houston J. Mathem. Citeseer, 1988.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "19": {
211
+ "title": "Regularization by projection for unbounded operators arising in\ninverse.",
212
+ "author": "CW Groetsch.",
213
+ "venue": "In Proceedings International Workshop On lnverse Problems\nHoChiMinh City Jan, volume 17, pages 61\u201370, 1995.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "20": {
219
+ "title": "Learning theory of distributed spectral algorithms.",
220
+ "author": "Zheng-Chu Guo, Shao-Bo Lin, and Ding-Xuan Zhou.",
221
+ "venue": "Inverse Problems, 33(7):074009, 2017.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "21": {
227
+ "title": "On the solution of ill-posed problems by projection methods with a\nposteriori choice of the discretization level.",
228
+ "author": "U Hamarik, E Avi, and A Ganina.",
229
+ "venue": "Mathematical Modelling and Analysis, 7(2):241\u2013252, 2002.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "22": {
235
+ "title": "Regularization by projection: Approximation theoretic aspects and\ndistance functions.",
236
+ "author": "Bernd Hofmann, Peter Math\u00e9, and Sergej V Pereverzev.",
237
+ "venue": "2007.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "23": {
243
+ "title": "Discretization effects in statistical inverse problems.",
244
+ "author": "Iain M Johnstone and Bernard W Silverman.",
245
+ "venue": "Journal of complexity, 7(1):1\u201334, 1991.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "24": {
251
+ "title": "Regularization by projection with a posteriori discretization level\nchoice for linear and nonlinear ill-posed problems.",
252
+ "author": "Barbara Kaltenbacher.",
253
+ "venue": "Inverse Problems, 16(5):1523, 2000.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "25": {
259
+ "title": "On the regularizing properties of a full multigrid method for\nill-posed problems.",
260
+ "author": "Barbara Kaltenbacher.",
261
+ "venue": "Inverse Problems, 17(4):767, 2001.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "26": {
267
+ "title": "V-cycle convergence of some multigrid methods for ill-posed problems.",
268
+ "author": "Barbara Kaltenbacher.",
269
+ "venue": "Mathematics of Computation, 72(244):1711\u20131730, 2003.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "27": {
275
+ "title": "A convergence analysis of regularization by discretization in\npreimage space.",
276
+ "author": "Barbara Kaltenbacher and Jonas Offtermatt.",
277
+ "venue": "Mathematics of Computation, 81(280):2049\u20132069, 2012.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "28": {
283
+ "title": "Projection methods for ill-posed problems revisited.",
284
+ "author": "Stefan Kindermann.",
285
+ "venue": "Computational Methods in Applied Mathematics, 16(2):257\u2013276,\n2016.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "29": {
291
+ "title": "Spectral algorithms for supervised learning.",
292
+ "author": "L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, and Alessandro\nVerri.",
293
+ "venue": "Neural Computation, 20(7):1873\u20131897, 2008.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "30": {
299
+ "title": "Balancing principle in supervised learning for a general\nregularization scheme.",
300
+ "author": "Shuai Lu, Peter Math\u00e9, and Sergei V Pereverzev.",
301
+ "venue": "Applied and Computational Harmonic Analysis, 48(1):123\u2013148,\n2020.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "31": {
307
+ "title": "Comparisons of parameter choice methods for regularization with\ndiscrete noisy data.",
308
+ "author": "Mark A Lukas.",
309
+ "venue": "Inverse Problems, 14(1):161, 1998.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "32": {
315
+ "title": "An adaptive discretization for tikhonov-phillips regularization with\na posteriori parameter selection.",
316
+ "author": "Peter Maa\u00df, Sergei V Pereverzev, Ronny Ramlau, and Sergei G Solodky.",
317
+ "venue": "1998.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "33": {
323
+ "title": "Statistical inverse estimation in Hilbert scales.",
324
+ "author": "Bernard A Mair and Frits H Ruymgaart.",
325
+ "venue": "SIAM Journal on Applied Mathematics, 56(5):1424\u20131444, 1996.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "34": {
331
+ "title": "Optimal discretization of inverse problems in Hilbert scales.\nRegularization and self-regularization of projection methods.",
332
+ "author": "Peter Math\u00e9 and Sergei V Pereverzev.",
333
+ "venue": "SIAM Journal on Numerical Analysis, 38(6):1999\u20132021, 2001.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "35": {
339
+ "title": "Regularization by projection in variable Hilbert scales.",
340
+ "author": "Peter Math\u00e9 and Nadine Sch\u00f6ne.",
341
+ "venue": "Applicable Analysis, 87(2):201\u2013219, 2008.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "36": {
347
+ "title": "Regularization in kernel learning.",
348
+ "author": "Shahar Mendelson and Joseph Neeman.",
349
+ "venue": "The Annals of Statistics, 38(1):526\u2013565, 2010.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "37": {
355
+ "title": "Direct and inverse problems in machine learning.",
356
+ "author": "Nicole M\u00fccke.",
357
+ "venue": "Doctoral thesis, Universit\u00e4t Potsdam, 2017.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "38": {
363
+ "title": "Regularisierung schlecht gestellter probleme durch\nprojektionsverfahren.",
364
+ "author": "Frank Natterer.",
365
+ "venue": "Numerische Mathematik, 28(3):329\u2013341, 1977.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "39": {
371
+ "title": "Convergence rates for regularized solutions of integral equations\nfrom discrete noisy data.",
372
+ "author": "Douglas W Nychka and Dennis D Cox.",
373
+ "venue": "The Annals of Statistics, pages 556\u2013572, 1989.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "40": {
379
+ "title": "Convergence characteristics of methods of regularization estimators\nfor nonlinear operator equations.",
380
+ "author": "Finbarr O\u2019Sullivan.",
381
+ "venue": "SIAM Journal on Numerical Analysis, 27(6):1635\u20131649, 1990.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "41": {
387
+ "title": "Remarks on inequalities for large deviation probabilities.",
388
+ "author": "Iosif F Pinelis and Alexander I Sakhanenko.",
389
+ "venue": "Theory of Probability & Its Applications, 30(1):143\u2013148,\n1986.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "42": {
395
+ "title": "On the regularization of projection methods for solving ill-posed\nproblems.",
396
+ "author": "R Plato and G Vainikko.",
397
+ "venue": "Numerische Mathematik, 57(1):63\u201379, 1990.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "43": {
403
+ "title": "Convergence analysis of Tikhonov regularization for non-linear\nstatistical inverse learning problems.",
404
+ "author": "Abhishake Rastogi, Gilles Blanchard, and Peter Math\u00e9.",
405
+ "venue": "Electronic Journal of Statistics, 14(2):2798\u20132841, 2020.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "44": {
411
+ "title": "Introduction to the finite element method.",
412
+ "author": "Junuthula Narasimha Reddy.",
413
+ "venue": "McGraw-Hill Education, 2019.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "45": {
419
+ "title": "Two-parameter discrepancy principle for combined projection and\nTikhonov regularization of ill-posed problems.",
420
+ "author": "Teresa Regi\u0144ska.",
421
+ "venue": "Journal of Inverse and Ill-Posed Problems, 21(4):561\u2013577,\n2013.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "46": {
427
+ "title": "Nonconvergence results for the application of least-squares\nestimation to ill-posed problems.",
428
+ "author": "Thomas Seidman.",
429
+ "venue": "Journal of Optimization Theory and Applications,\n30(4):535\u2013547, 1980.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "47": {
435
+ "title": "Shannon sampling II: Connections to learning theory.",
436
+ "author": "Steve Smale and Ding-Xuan Zhou.",
437
+ "venue": "Applied and Computational Harmonic Analysis, 19(3):285\u2013302,\n2005.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "48": {
443
+ "title": "Learning theory estimates via integral operators and their\napproximations.",
444
+ "author": "Steve Smale and Ding-Xuan Zhou.",
445
+ "venue": "Constructive Approximation, 26(2):153\u2013172, 2007.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "49": {
451
+ "title": "Optimal rates for regularized least squares regression.",
452
+ "author": "Ingo Steinwart, Don R Hush, Clint Scovel, et al.",
453
+ "venue": "In COLT, pages 79\u201393, 2009.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "50": {
459
+ "title": "Self-regularization solving ill-posed problems by projection methods.",
460
+ "author": "G Vainikko and U H\u00e4marik.",
461
+ "venue": "Models and Methods in Operational Research, pages 157\u2013164,\n1988.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "51": {
467
+ "title": "High-dimensional probability: An introduction with applications\nin data science, volume 47.",
468
+ "author": "Roman Vershynin.",
469
+ "venue": "Cambridge university press, 2018.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "52": {
475
+ "title": "Learning from examples as an inverse problem.",
476
+ "author": "Ernesto De Vito, Lorenzo Rosasco, Andrea Caponnetto, Umberto De Giovannini, and\nFrancesca Odone.",
477
+ "venue": "Journal of Machine Learning Research, 6(May):883\u2013904, 2005.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "53": {
483
+ "title": "Practical approximate solutions to linear operator equations when the\ndata are noisy.",
484
+ "author": "Grace Wahba.",
485
+ "venue": "SIAM Journal on Numerical Analysis, 14(4):651\u2013667, 1977.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "54": {
491
+ "title": "On early stopping in gradient descent learning.",
492
+ "author": "Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto.",
493
+ "venue": "Constructive Approximation, 26(2):289\u2013315, 2007.",
494
+ "url": null
495
+ }
496
+ }
497
+ ],
498
+ "url": "http://arxiv.org/html/2211.12121v3"
499
+ }
20240119/2212.01521v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2212.08044v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2301.07300v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2301.10766v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2302.06120v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2302.09648v5.json ADDED
@@ -0,0 +1,281 @@
1
+ {
2
+ "title": "Wrapyfi: A Python Wrapper for Integrating Robots, Sensors, and Applications across Multiple Middleware",
3
+ "abstract": "Message oriented and robotics middleware play an important role in facilitating robot control, abstracting complex functionality, and unifying communication patterns between sensors and devices. However, using multiple middleware frameworks presents a challenge in integrating different robots within a single system. To address this challenge, we present Wrapyfi, a Python wrapper supporting multiple message oriented and robotics middleware, including ZeroMQ, YARP, ROS, and ROS 2. Wrapyfi also provides plugins for exchanging deep learning framework data, without additional encoding or preprocessing steps. Using Wrapyfi eases the development of scripts that run on multiple machines, thereby enabling cross-platform communication and workload distribution. We finally present the three communication schemes that form the cornerstone of Wrapyfi\u2019s communication model, along with examples that demonstrate their applicability. \nhttp://software.knowledge-technology.info#wrapyfi.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction",
9
+ "text": "Real-time robotic applications require exchanging multimodal data arriving from a variety of sensors. A framework that distributes sensory information across processes is necessary, especially for robot-robot and human-robot interaction (Mohamed and Lemaignan, 2021). Multiprocess and multithread instances are used to parallelize independent methods. However, such parallelization approaches are limited to single machines and may not be sufficient for applications with a large number of sensors or computationally expensive processing methods. Eventually, this leads to performance bottlenecks on consumer-grade computers. Message-oriented and robotics middleware, such as ZeroMQ (Hintjens, 2013), YARP (Metta et al., 2006), ROS (Quigley et al., 2009), and ROS 2 (Macenski et al., 2022), were developed to tackle such challenges. Middleware frameworks use communication protocols to exchange data and distribute operations across several machines and nodes (Elkady and Sobh, 2012).\nROS (Quigley et al., 2009) is a middleware commonly used in the robotics community. ROS provides hardware control interfaces, visualization tools, and communication models for many robotic platforms (ABi, 2019). Its widespread use is a direct result of its early adoption of open source and the vast amount of robotic tools provided by its developers and contributors. However, ROS is scheduled for deprecation in favor of ROS 2 (Macenski et al., 2022). Many robotic platforms and packages, nonetheless, have not been updated to support this transition yet. Although bridges were developed to enable communication between ROS, ROS 2, and WebSocket, integrating such bridges into working pipelines requires major modifications to the underlying code and its structure. 
This demands following certain naming conventions and limiting the message types supported, resulting in additional effort. Other middleware designed specifically for certain robotic platforms, such as YARP (Metta et al., 2006) used by the iCub robot (Metta et al., 2010), provide interfaces for communicating with ROS (Natale et al., 2016) as well. However, their usage dictates modifying scripts to accommodate specific message types. This poses a major hurdle for developers wanting to integrate different robots and middleware, as a result restricting the cross-compatibility of their applications with existing systems.\nTo improve interoperability between different robotic platforms and reduce reliance on a particular middleware, we have developed the open-source Wrapyfi (https://github.com/fabawi/wrapyfi) framework (illustrated in Figure 1), a Python wrapper supporting multiple middleware bindings. Wrapyfi is a simpler alternative to GenoM3 (Mallet et al., 2010). GenoM3 adopts a model-driven approach and uses templates to define the components and data exchanges across middleware. Since it is specifically developed for Python scripting, Wrapyfi eliminates the need to learn another language or to define templates, unlike GenoM3. REMS (Tanaka and Mehta, 2022) is a middleware built in Python with simplistic interfaces for educational purposes. Although REMS supports a large set of robots and simulation environments, it does not address interoperability between different middleware operating on them.\nWrapyfi\u2019s decorator-based design integrates easily with existing workflows, prioritizing minimal modifications for improved multi-robot communication. 
Beyond robotic applications, its adaptability is observed in supporting message-oriented middleware, facilitating communication with interfaces that do not necessarily require the additional packages and tools provided by robotics middleware. Deep learning frameworks like JAX (Bradbury et al., 2018) and PyTorch (Paszke et al., 2019) support multi-machine parallelization mainly through remote procedure calls. The approaches adopted in distributing models and data differ greatly, including the communication patterns used and the orchestration of communication, having either a single or several controllers. By offering a standard approach for multiple frameworks, and supporting two of the most common communication patterns, namely publish-subscribe and request-reply (also known as the request-response or client-server pattern), Wrapyfi offers greater control over communication dynamics in comparison to each framework\u2019s parallelization protocol.\nOpen Neural Network Exchange (ONNX) (Bai et al., 2019) is a framework designed to standardize machine learning model representations, offering compatibility with a wide range of deep learning frameworks. However, using ONNX with any framework requires converting the model formats. In contrast, Wrapyfi does not impose such a constraint or bind developers to a specific protocol. Wrapyfi not only allows for native Python object exchanges but also transports data structures such as arrays and tensors, which are relied upon in deep learning applications. This integration makes Wrapyfi a useful tool for developers, allowing them to take advantage of both robotics and deep learning ecosystems."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "2. Data types",
15
+ "text": "Wrapyfi employs a type-aware serialization method that automatically transforms the objects exchanged between script mirrors into a format compatible with the selected middleware. Wrapyfi supports the following data types:\nNative objects, arrays, and tensors.\nWrapyfi allows for the transmission of a variety of data types used in Python. Prior to transmission, these data types are converted into JSON strings to ensure compatibility across different middleware platforms. Wrapyfi supports using NumPy (Harris et al., 2020) arrays and enables their sharing across mirrored scripts. Moreover, Wrapyfi offers a plugin interface that developers may use to customize the transmission of other types of objects. This feature allows encoding objects as strings, which can eventually be decoded back into their original structure. Wrapyfi comes with built-in plugins for exchanging Arrow (Richardson et al., 2023) vectors, pandas data frames (version 1 with NumPy as a backend; pandas development team, 2020), and Pillow (https://github.com/python-pillow/Pillow) images. It also supports tensors from major deep learning frameworks such as TensorFlow (Abadi et al., 2015), PyTorch (Paszke et al., 2019), MXNet (Chen et al., 2015), JAX (Bradbury et al., 2018), PaddlePaddle (Ma et al., 2019), and Dask (Rocklin, 2015). These plugins make it possible to exchange data between different frameworks and to integrate deep learning models into robotic systems.\nWhen specified, the tensors transmitted using Wrapyfi can be mapped to GPUs or CPUs different from the ones specified on the publishing script\u2019s end, allowing for the distribution of computationally demanding deep learning models.\nImages. ROS, ROS 2, and YARP provide specialized message types for transmitting images. 
We use image messages to stream raw monochrome, RGB, and JPEG-encoded images. ZeroMQ does not provide such specialized message structures. Therefore, we make use of the multipart message structure to create an image interface, allowing us to standardize middleware behavior and transmit the image properties to a specified topic.\nAudio chunks. ROS and ROS 2 do not provide messages structured for audio transmission, so we create custom messages and services to transmit audio along with its properties. The number of audio channels transmitted can vary, as long as the audio chunk structure follows the python-sounddevice format (https://github.com/spatialaudio/python-sounddevice). For YARP, we use the existing sound port and transmit the audio as a sequence. For ZeroMQ, we transmit a string encoding the auditory signal along with its properties as a single multipart message."
16
+ },
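The plugin interface described in the section above can be pictured with a minimal, stdlib-only sketch. This is an illustrative mock-up, not Wrapyfi's actual plugin API (the registry, class, and function names below are invented for the example): a custom type is encoded to a JSON-safe string before transmission and decoded back into its original structure on receipt, while native objects pass through plain JSON.

```python
import base64
import json

# Hypothetical plugin registry mimicking Wrapyfi's type-aware serialization:
# each plugin encodes one Python type to a JSON-safe dict and decodes it back.
PLUGINS = {}

def register_plugin(cls):
    """Register a plugin class as the codec for objects of type `cls`."""
    def wrapper(plugin):
        PLUGINS[cls.__name__] = plugin
        return plugin
    return wrapper

class AudioChunk:
    """Toy stand-in for an audio buffer carried with its sampling rate."""
    def __init__(self, data: bytes, rate: int):
        self.data, self.rate = data, rate

@register_plugin(AudioChunk)
class AudioChunkPlugin:
    @staticmethod
    def encode(obj):
        # Raw bytes are base64-encoded so the payload stays JSON-safe.
        return {"__type__": "AudioChunk",
                "data": base64.b64encode(obj.data).decode("ascii"),
                "rate": obj.rate}

    @staticmethod
    def decode(payload):
        return AudioChunk(base64.b64decode(payload["data"]), payload["rate"])

def serialize(obj):
    """Encode native objects directly; route custom types through plugins."""
    name = type(obj).__name__
    if name in PLUGINS:
        return json.dumps(PLUGINS[name].encode(obj))
    return json.dumps(obj)

def deserialize(msg):
    """Invert `serialize`, dispatching on the embedded type tag if present."""
    payload = json.loads(msg)
    if isinstance(payload, dict) and payload.get("__type__") in PLUGINS:
        return PLUGINS[payload["__type__"]].decode(payload)
    return payload
```

With this sketch, `deserialize(serialize(AudioChunk(b"\x01\x02", 16000)))` recovers both the bytes and the rate, while plain dicts and lists round-trip through ordinary JSON.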
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "3. Communication schemes",
21
+ "text": "Wrapyfi manages script interactions using three communication schemes: Mirroring, Forwarding, and Channeling. Mirroring enables concurrent execution of multiple scripts with synchronized actions. Forwarding creates chains of methods to tunnel arguments and return values across different middleware configurations. Channeling allows for the broadcasting of multiple return values via one method, each using a potentially different middleware. Each scheme addresses different challenges in distributed systems.\nThe MiddlewareCommunicator is a Wrapyfi class for establishing communication methods. It implements the register decorator for setting the middleware types, topics, and various communication parameters. Each method set to publish, subscribe, request, or reply should be encapsulated with this decorator. LABEL:fig:intro_example illustrates the use of the register decorator to register a method for YARP middleware communication, specifying the object type, middleware, name of the class, YARP port (topic), communication protocol, and whether the method should await a response, which results in blocking the subscribing method until the publisher transmits a message. The read_msg method obtains user input from one process, allowing all other subscribing processes to acquire user input from a single process invocation.\nIn LABEL:fig:activate_communication, setting the mode to 'publish' triggers read_msg upon method call, whereas 'listen' returns the message received over the middleware. These modes enable the establishment of communication following the publish/subscribe pattern. Alternatively, setting the activate_communication mode to 'request' or 'reply' triggers the request/reply pattern.\nMirroring. Mirrors are multiple scripts running concurrently. The scripts share arguments and return values using a predefined communication pattern. The behaviors of all mirrored scripts are identical. 
However, their methods could either execute functionality in place or acquire their return values from another publisher. By calling read_msg in LABEL:fig:intro_example using a single publishing script, all subscribing mirrors receive the same return object when invoked as well. Regardless of the communication pattern or blocking behavior, all scripts follow the same pipeline with similar method returns.\nForwarding. The forwarding scheme in Wrapyfi enables passing arguments to multiple methods, each with a different middleware setting. This forms a chain of methods, transferring arguments and return values across middleware and topics. Forwarding employs multiple scripts with unique functions, connected by register decorators, making it suitable for creating multi-step processes with several scripts having partial component support. In LABEL:fig:forwarding_example, we demonstrate data transmission between a system without ZeroMQ support and another without YARP support, using an intermediary system that supports both. The first system dispatches the message using YARP by invoking send_yarp. The intermediary system then forwards it using ZeroMQ to send_zmq. The final system, with YARP disabled, receives the message via ZeroMQ by listening to send_zmq. This scheme is needed when strict specifications are required regarding the compatibility of software and middleware between systems, as in the case of robots.\nChanneling. In the channeling scheme, Wrapyfi enables broadcasting to multiple middleware by encapsulating a method with numerous decorators, each corresponding to a return value with its own data type and middleware. This is illustrated in LABEL:fig:channel_example, where a method transmits three different data types over varied middleware, such as a YARP native object message comprising a NumPy image and an audio chunk, a ROS image (OpenCV (Bradski, 2000) compatible), and a ZeroMQ audio chunk. 
This scheme supports the simultaneous reception of different data types. If the environment lacks support for a specified middleware, a None type object is returned. Channeling is especially useful for handling multiple sensory inputs from different sources, allowing selective acquisition and disregard of unnecessary sensory input. This provides a balanced approach between mirroring and forwarding, altering the pipeline based on the returns received from the supported middleware."
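The register/activate_communication workflow described above can be sketched with a middleware-free, in-process stand-in. The class and method names follow the paper's terminology, but the real Wrapyfi decorator takes additional arguments (object type, middleware name, class name, topic, carrier, should_wait) that are reduced here to a bare topic string, so this is a schematic mimic rather than the actual Wrapyfi API:

```python
# Minimal in-process mimic of the decorator-and-mode pattern (not real Wrapyfi):
# a shared dictionary stands in for a middleware topic.
class MiddlewareCommunicator:
    _registry = {}  # topic -> last published return value

    @classmethod
    def register(cls, topic):
        def decorator(func):
            def wrapper(self, *args, **kwargs):
                mode = getattr(self, "_modes", {}).get(func.__name__, "none")
                if mode == "publish":
                    ret = func(self, *args, **kwargs)
                    cls._registry[topic] = ret  # "transmit" over the topic
                    return ret
                elif mode == "listen":
                    # acquire the published return value; skip local execution
                    return cls._registry.get(topic)
                return func(self, *args, **kwargs)
            wrapper.__name__ = func.__name__
            return wrapper
        return decorator

    def activate_communication(self, method, mode):
        self._modes = getattr(self, "_modes", {})
        self._modes[method.__name__] = mode


class HelloWorld(MiddlewareCommunicator):
    @MiddlewareCommunicator.register("/hello/my_message")
    def read_msg(self, msg):
        return {"message": msg},  # trailing comma: methods return tuples


pub, sub = HelloWorld(), HelloWorld()
pub.activate_communication(pub.read_msg, mode="publish")
sub.activate_communication(sub.read_msg, mode="listen")

sent = pub.read_msg("hi")      # executes locally and "publishes"
received = sub.read_msg(None)  # acquires the published return value instead
assert received == sent
```

Running the same class in separate processes, with a real middleware in place of the dictionary, yields the mirroring behavior: all listening mirrors receive the publisher's return object.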
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "4. Use Cases",
27
+ "text": "Facial expression imitation. Participants exhibit eight facial expressions while sitting in front of two robots, Pepper (SoftBank Robotics Group, [n.\u2009d.] ###reference_26###) and iCub (Metta et al., 2010 ###reference_16###) as depicted in Figure 2 ###reference_###. The robots then imitate the participants\u2019 expressions. Pepper represents emotions through color changes, while iCub displays robotic facial expressions. The forwarding scheme in Wrapyfi tunnels interactions between the different system components and middleware configurations, enabling the exchange of visual and facial expression data between the robots and the recognition model (Siqueira et al., 2020 ###reference_25###). Forwarding manages image acquisition across robots and synchronizes the transfer of facial expressions to and from the model by sequentially invoking each robot\u2019s acquisition and action methods.\n###figure_1### Head orientation imitation. In this example, we imitate a participant\u2019s head orientation and eye movements on a simulated iCub (Tikhanoff et al., 2008 ###reference_28###) as shown in Figure 3 ###reference_###. The input coordinates arrive either from a wearable eye tracker (Kassner et al., 2014 ###reference_12###) fitted with an IMU or a vision-based head pose estimation model (Hempel et al., 2022 ###reference_10###). The channeling scheme allows switching between the input sources by specifying the return element propagated to the robot.\n###figure_2###"
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "5. Code and Usage",
33
+ "text": "To install Wrapyfi, compatible middleware, and required interfaces:\nhttps://wrapyfi.readthedocs.io/ ###reference_wrapyfi.readthedocs.io/###. \nWe additionally provide instructions on running Wrapyfi examples:\nhttps://wrapyfi.readthedocs.io/en/latest/examples.html ###reference_xamples.html###. \nTutorials detail the steps needed to run the Wrapyfi use case scripts:\nhttps://wrapyfi.readthedocs.io/en/latest/tutorials.html ###reference_utorials.html###. \nWe also evaluate transmission latency of the Wrapyfi plugins:\nhttps://wrapyfi.readthedocs.io/en/latest/evaluation.html ###reference_valuation.html###."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "6. Conclusions",
39
+ "text": "Wrapyfi is a framework that simplifies data transfer across different middleware platforms. Two of Wrapyfi\u2019s key strengths are the transmission of custom data types and support for multiple middleware. We introduced three communication schemes\u2014mirroring, forwarding, and channeling\u2014each serving a different set of applications. The framework currently supports two common communication patterns: publish-subscribe and request-reply. In future work, we plan to extend Wrapyfi to support more communication patterns that are available in some middleware platforms, such as actions in ROS 2, which are similar to asynchronous request-reply. We also aim to provide interfaces for custom messages and middleware-specific data types. Wrapyfi\u2019s modular design permits integrating further middleware, expanding the array of potential applications."
40
+ }
41
+ ],
42
+ "appendix": [],
43
+ "tables": {},
44
+ "image_paths": {
45
+ "1": {
46
+ "figure_path": "2302.09648v5_figure_1.png",
47
+ "caption": "Figure 1. Overview of the Wrapyfi framework. From top to bottom: 1) Data types are encoded or decoded depending on the transmission mode; 2) Encoded objects are prepared for transmission using the Request/Reply or Publish/Subscribe communication pattern; 3) Messages are transmitted through the selected middleware protocol; 4) Messages sequenced according to the communication scheme; 5) Messages exchanged between robots, applications, and sensors.\u22c6\u22c6{}^{\\star}start_FLOATSUPERSCRIPT \u22c6 end_FLOATSUPERSCRIPT\n\u22c6\u22c6{}^{\\star}start_FLOATSUPERSCRIPT \u22c6 end_FLOATSUPERSCRIPT The \u201cnine dots\u201d ROS and ROS 2 logos are trademarks of Open Source Robotics Foundation. TensorFlow, the TensorFlow logo, and any related marks are trademarks of Google Inc. The OpenCV logo is a trademark of https://opencv.org. The NumPy logo is used in accordance with the NumPy logo guidelines. The pandas logo is used in accordance with the brand and logo guidelines. PyTorch, the PyTorch logo and any related marks are trademarks of The Linux Foundation. The name ZeroMQ and the \u201c\u00d8MQ\u201d logo are used in compliance with creative commons license Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0). The logos for Dask, Apache MXNet, paddlepaddle, PIL (Pillow), JAX, and YARP are included with respect to their trademark policies; we acknowledge that these are subject to copyrights, trademarks, or registered trademarks of their respective holders. We do not claim ownership of these copyrights or trademarks. The use of these logos does not indicate endorsement by the trademark or copyright holders, nor does it suggest any affiliation or endorsement by the authors of this work.",
48
+ "url": "http://arxiv.org/html/2302.09648v5/x1.png"
49
+ },
50
+ "2": {
51
+ "figure_path": "2302.09648v5_figure_2.png",
52
+ "caption": "Figure 2. Facial expression imitation on the Pepper and iCub.",
53
+ "url": "http://arxiv.org/html/2302.09648v5/x2.png"
54
+ },
55
+ "3": {
56
+ "figure_path": "2302.09648v5_figure_3.png",
57
+ "caption": "Figure 3. Head and eye movement imitation using either an IMU-fitted eye tracker or a head pose estimation model.",
58
+ "url": "http://arxiv.org/html/2302.09648v5/extracted/5357406/src/imgs/immitate_multisensor.png"
59
+ }
60
+ },
61
+ "validation": true,
62
+ "references": [
63
+ {
64
+ "1": {
65
+ "title": "TensorFlow: Large-Scale Machine Learning on\nHeterogeneous Distributed Systems. In 12th\nUSENIX Symposium on Operating Systems Design and Implementation\n(OSDI). 265\u2013283.",
66
+ "author": "Mart\u00edn Abadi et al.\n2015.",
67
+ "venue": "https://www.tensorflow.org/",
68
+ "url": null
69
+ }
70
+ },
71
+ {
72
+ "2": {
73
+ "title": "Open-Source Robotics Projects.",
74
+ "author": "ABi (Ed.).\n2019.",
75
+ "venue": "ABi Research.",
76
+ "url": null
77
+ }
78
+ },
79
+ {
80
+ "3": {
81
+ "title": "ONNX: Open Neural Network Exchange.",
82
+ "author": "Junjie Bai, Fang Lu,\nKe Zhang, et al. 2019.",
83
+ "venue": "",
84
+ "url": null
85
+ }
86
+ },
87
+ {
88
+ "4": {
89
+ "title": "JAX: Composable transformations of\nPython+NumPy programs.",
90
+ "author": "James Bradbury, Roy\nFrostig, Peter Hawkins, Matthew James\nJohnson, Chris Leary, Dougal Maclaurin,\nGeorge Necula, Adam Paszke,\nJake VanderPlas, Skye\nWanderman-Milne, and Qiao Zhang.\n2018.",
91
+ "venue": "",
92
+ "url": null
93
+ }
94
+ },
95
+ {
96
+ "5": {
97
+ "title": "The OpenCV Library.",
98
+ "author": "G. Bradski.\n2000.",
99
+ "venue": "Dr. Dobb\u2019s Journal of Software Tools\n(2000).",
100
+ "url": null
101
+ }
102
+ },
103
+ {
104
+ "6": {
105
+ "title": "MXNet: A Flexible and Efficient Machine Learning\nLibrary for Heterogeneous Distributed Systems.",
106
+ "author": "Tianqi Chen, Mu Li,\nYutian Li, Min Lin,\nNaiyan Wang, Minjie Wang,\nTianjun Xiao, Bing Xu,\nChiyuan Zhang, and Zheng Zhang.\n2015.",
107
+ "venue": "ArXiv abs/1512.01274\n(2015).",
108
+ "url": null
109
+ }
110
+ },
111
+ {
112
+ "7": {
113
+ "title": "Robotics Middleware: A Comprehensive Literature\nSurvey and Attribute-Based Bibliography.",
114
+ "author": "Ayssam Elkady and Tarek\nSobh. 2012.",
115
+ "venue": "Journal of Robotics 2012\n(2012).",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "8": {
121
+ "title": "Array programming with NumPy.",
122
+ "author": "Charles R. Harris et al.\n2020.",
123
+ "venue": "Nature 585,\n7825 (2020), 357\u2013362.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "9": {
129
+ "title": "6D Rotation Representation For Unconstrained Head\nPose Estimation. In IEEE International Conference\non Image Processing (ICIP). IEEE,\n2496\u20132500.",
130
+ "author": "Thorsten Hempel, Ahmed A.\nAbdelrahman, and Ayoub Al-Hamadi.\n2022.",
131
+ "venue": "https://doi.org/10.1109/ICIP46576.2022.9897219",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "10": {
137
+ "title": "ZeroMQ: Messaging for Many\nApplications.",
138
+ "author": "Pieter Hintjens.\n2013.",
139
+ "venue": "\u201dO\u2019Reilly Media, Inc.\u201d.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "11": {
145
+ "title": "Pupil: An Open Source Platform for Pervasive Eye\nTracking and Mobile Gaze-based Interaction. In\nAdjunct Proceedings of the ACM International Joint\nConference on Pervasive and Ubiquitous Computing (UBICOMP).\nACM, 1151\u20131160.",
146
+ "author": "Moritz Kassner, William\nPatera, and Andreas Bulling.\n2014.",
147
+ "venue": "https://doi.org/10.1145/2638728.2641695",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "12": {
153
+ "title": "PaddlePaddle: An Open-Source Deep Learning\nPlatform from Industrial Practice.",
154
+ "author": "Yanjun Ma, Dianhai Yu,\nTian Wu, and Haifeng Wang.\n2019.",
155
+ "venue": "Frontiers of Data and Computing\n1, 1 (2019),\n105\u2013115.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "13": {
161
+ "title": "Robot Operating System 2: Design, architecture,\nand uses in the wild.",
162
+ "author": "Steven Macenski, Tully\nFoote, Brian Gerkey, Chris Lalancette,\nand William Woodall. 2022.",
163
+ "venue": "Science Robotics 7,\n66 (2022).",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "14": {
169
+ "title": "GenoM3: Building middleware-independent robotic\ncomponents. In IEEE International Conference on\nRobotics and Automation (ICRA). IEEE,\n4627\u20134632.",
170
+ "author": "Anthony Mallet, C\u00e9dric\nPasteur, Matthieu Herrb, S\u00e9verin\nLemaignan, and F\u00e9lix Ingrand.\n2010.",
171
+ "venue": "https://doi.org/10.1109/ROBOT.2010.5509539",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "15": {
177
+ "title": "The iCub humanoid robot: An open-systems platform\nfor research in cognitive development.",
178
+ "author": "Giorgio Metta et al.\n2010.",
179
+ "venue": "Neural Networks 23,\n8-9 (2010), 1125\u20131134.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "16": {
185
+ "title": "YARP: Yet Another Robot Platform.",
186
+ "author": "Giorgio Metta, Paul\nFitzpatrick, and Lorenzo Natale.\n2006.",
187
+ "venue": "International Journal of Advanced Robotic\nSystems 3, 1 (2006),\n8.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "17": {
193
+ "title": "ROS for Human-Robot Interaction. In\nIEEE/RSJ International Conference on Intelligent\nRobots and Systems (IROS). IEEE,\n3020\u20133027.",
194
+ "author": "Youssef Mohamed and\nS\u00e9verin Lemaignan. 2021.",
195
+ "venue": "https://doi.org/10.1109/IROS51168.2021.9636816",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "18": {
201
+ "title": "The iCub Software Architecture: Evolution and\nLessons Learned.",
202
+ "author": "Lorenzo Natale, Ali\nPaikan, Marco Randazzo, and Daniele E\nDomenichelli. 2016.",
203
+ "venue": "Frontiers in Robotics and AI\n3 (2016), 24.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "19": {
209
+ "title": "pandas-dev/pandas: Pandas.",
210
+ "author": "The pandas development team.\n2020.",
211
+ "venue": "https://doi.org/10.5281/zenodo.3509134",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "20": {
217
+ "title": "PyTorch: An Imperative Style, High-Performance\nDeep Learning Library.",
218
+ "author": "Adam Paszke et al.\n2019.",
219
+ "venue": "In Advances in Neural Information\nProcessing Systems 32 (NeurIPS). Curran Associates,\nInc., 8024\u20138035.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "21": {
225
+ "title": "ROS: An open-source Robot Operating\nSystem. In IEEE International Conference on\nRobotics and Automation Workshop on Open Source Software (ICRAOSS),\nVol. 3.2. IEEE, 5.",
226
+ "author": "Morgan Quigley, Ken\nConley, Brian Gerkey, Josh Faust,\nTully Foote, Jeremy Leibs,\nRob Wheeler, and Andrew Y Ng.\n2009.",
227
+ "venue": "",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "22": {
233
+ "title": "arrow: Integration to \u2019Apache\u2019 \u2019Arrow\u2019.",
234
+ "author": "Neal Richardson, Ian\nCook, Nic Crane, Dewey Dunnington,\nRomain Fran\u00e7ois, Jonathan Keane,\nDrago\\textcommabelows Moldovan-Gr\u00fcnfeld,\nJeroen Ooms, and Apache Arrow.\n2023.",
235
+ "venue": "",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "23": {
241
+ "title": "Dask: Parallel Computation with Blocked algorithms\nand Task Scheduling. In Proceedings of the 14th\nPython in Science Conference, Kathryn\nHuff and James Bergstra (Eds.).\n130\u2013136.",
242
+ "author": "Matthew Rocklin.\n2015.",
243
+ "venue": "",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "24": {
249
+ "title": "Efficient Facial Feature Learning with Wide\nEnsemble-based Convolutional Neural Networks. In\nThe Thirty-Fourth AAAI Conference on Artificial\nIntelligence. AAAI, 5800\u20135809.",
250
+ "author": "Henrique Siqueira, Sven\nMagg, and Stefan Wermter.\n2020.",
251
+ "venue": "https://doi.org/10.1609/aaai.v34i04.6037",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "25": {
257
+ "title": "Pepper the humanoid and programmable robot.",
258
+ "author": "SoftBank Robotics Group.\n[n.\u2009d.].",
259
+ "venue": "",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "26": {
265
+ "title": "REMS: Middleware for Robotics Education and\nDevelopment.",
266
+ "author": "Yusuke Tanaka and Ankur\nMehta. 2022.",
267
+ "venue": "ArXiv abs/2210.05784\n(2022).",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "27": {
273
+ "title": "An Open-Source Simulator for Cognitive Robotics\nResearch: The Prototype of the ICub Humanoid Robot Simulator. In\nProceedings of the 8th Workshop on Performance\nMetrics for Intelligent Systems (PerMIS \u201908). ACM,\n57\u201361.",
274
+ "author": "Vadim Tikhanoff, Angelo\nCangelosi, Paul M. Fitzpatrick, Giorgio\nMetta, Lorenzo Natale, and Francesco\nNori. 2008.",
275
+ "venue": "https://doi.org/10.1145/1774674.1774684",
276
+ "url": null
277
+ }
278
+ }
279
+ ],
280
+ "url": "http://arxiv.org/html/2302.09648v5"
281
+ }
20240119/2302.12190v2.json ADDED
20240119/2302.13854v2.json ADDED
20240119/2303.02901v2.json ADDED
@@ -0,0 +1,113 @@
1
+ {
2
+ "title": "\ud835\udefc-divergence improves the entropy production estimation via machine learning",
3
+ "abstract": "Recent years have seen a surge of interest in the algorithmic estimation of stochastic entropy production (EP) from trajectory data via machine learning. A crucial element of such algorithms is the identification of a loss function whose minimization guarantees the accurate EP estimation. In this study, we show that there exists a host of loss functions, namely those implementing a variational representation of the -divergence, which can be used for the EP estimation. By fixing to a value between and , the -NEEP (Neural Estimator for Entropy Production) exhibits a much more robust performance against strong nonequilibrium driving or slow dynamics, which adversely affects the existing method based on the Kullback-Leibler divergence (). In particular, the choice of tends to yield the optimal results. To corroborate our findings, we present an exactly solvable simplification of the EP estimation problem, whose loss function landscape and stochastic properties give deeper intuition into the robustness of the -NEEP.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "How irreversible does a process look? One may pose this question for two distinct reasons. First, whether a biological process requires energy dissipation is often a subject of much debate [1 ###reference_1###, 2 ###reference_2###]. To resolve this issue, it is useful to note that irreversibility suggests energy dissipation. Various hallmarks of irreversibility, such as the breaking of the fluctuation-dissipation theorem [3 ###reference_3###] and the presence of nonequilibrium probability currents in the phase space [4 ###reference_4###, 5 ###reference_5###], have been used to determine whether energy is dissipated. Second, whether a nonequilibrium system allows for an effective equilibrium description is an important issue. For instance, in active matter, despite the energy dissipation at the microscopic level, it has been argued that the large-scale phenomena allow for an effective equilibrium description [6 ###reference_6###, 7 ###reference_7###, 8 ###reference_8###, 9 ###reference_9###, 10 ###reference_10###]. If we can quantify the irreversibility of an empirical process at various levels of coarse-graining [11 ###reference_11###, 12 ###reference_12###], it will provide us with helpful clues as to whether we should look for an effective equilibrium theory for the process.\nBased on the framework of stochastic thermodynamics, modern thermodynamics assigns entropy production (EP) to each stochastic trajectory based on its irreversibility [13 ###reference_13###]. Thus, empirically measuring the irreversibility of a process is closely tied to the problem of estimating EP from sampled trajectories [14 ###reference_14###, 15 ###reference_15###, 16 ###reference_16###, 17 ###reference_17###, 18 ###reference_18###, 19 ###reference_19###, 20 ###reference_20###, 21 ###reference_21###]. 
A straightforward approach to the problem is to evaluate the relevant transition probabilities by directly counting the number of trajectory segments, which is called the plug-in method [14 ###reference_14###, 15 ###reference_15###]. The method, readily applicable to discrete systems, can also be applied to continuous systems through the use of kernel functions [16 ###reference_16###]. However, while this method is simple and intuitive, it requires a huge ensemble of lengthy trajectories for accurate estimations (curse of dimensionality). More recent studies proposed methods based on universal lower bounds of the average EP, such as the thermodynamic uncertainty relations [16 ###reference_16###, 17 ###reference_17###, 18 ###reference_18###, 19 ###reference_19###] and the entropic bound [20 ###reference_20###]. While these methods do not suffer from the curse of dimensionality and are applicable even to non-stationary processes [19 ###reference_19###, 20 ###reference_20###], their accuracy is impaired when the underlying bounds are not tight. Moreover, these methods are applicable only to the estimation of the average EP, not the EP of each trajectory.\nMeanwhile, with the advent of machine learning techniques in physics, a novel method for EP estimation using artificial neural networks has been developed [21 ###reference_21###]. This method, called the Neural Estimator for Entropy Production (NEEP), minimizes the loss function based on a variational representation of the Kullback-Leibler (KL) divergence. Without any presupposed discretization of the phase space and using the rich expressivity of neural networks, the NEEP suffers far less from the complications of the sampling issues and is applicable to a diverse range of stochastic processes [19 ###reference_19###].\nStill, the NEEP has its limits. Its accuracy deteriorates when the nonequilibrium driving is strong or when the dynamics slows down so that the phase space is poorly sampled. 
In this study, we show that the NEEP can be significantly improved by changing the loss function. Toward this purpose, we propose the -NEEP, which generalizes the NEEP. Instead of the KL divergence, the -NEEP utilizes the -divergence, which has been mainly used in the machine learning community [22 ###reference_22###, 23 ###reference_23###, 24 ###reference_24###, 25 ###reference_25###]. We demonstrate that the -NEEP with nonzero values of shows much more robust performance for a broader range of nonequilibrium driving and sampling quality, with showing the optimal performance overall. This is corroborated by an analytically tractable simplification of the -NEEP that shows the optimality of .\nThe rest of this paper is organized as follows. After reviewing the original NEEP and its limitations (Sec. II ###reference_###), we introduce the -NEEP (Sec. III ###reference_###) and demonstrate its enhanced performance for three different examples of nonequilibrium systems (Sec. IV ###reference_###). Then we investigate the rationale behind the observed results using a simplified model describing how the -NEEP works (Sec. V ###reference_###). Finally, we sum up the results and discuss their implications (Sec. VI ###reference_###).\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Overview of the Original NEEP",
15
+ "text": "We first give a brief overview of how the original NEEP [21 ###reference_21###] estimates EP at the trajectory level. Suppose our goal is to estimate EP of a Markov process in discretized time, , in a -dimensional space. For every ordered pair of states, denoted by , there is EP associated with the transition between them, which is given by the ratio between the forward and the backward path probabilities\nwhere . Note that, throughout this study, we use the unit system in which the Boltzmann constant can be set to unity (). Then it follows that the ensemble average of this EP is equivalent to the KL divergence, which satisfies the inequality\nfor any positive function , given that denotes the average with respect to the distribution . This inequality can be proven as follows: since is a concave function, the line tangent to any point never falls below the function. Thus, for any and . By putting and taking the average with respect to , we get the inequality. In this derivation, we immediately note that the equality condition is satisfied if and only if . Hence, by varying to maximize the right-hand side of Eq. (II ###reference_###), we accurately estimate the average EP . For this reason, Eq. (II ###reference_###) is called the variational representation of the KL divergence. Moreover, as a byproduct, we also obtain the function , which yields an accurate estimate for trajectory-level EP by .\nKim et al. [21 ###reference_21###] used these properties to construct the loss function of the NEEP. More specifically, they introduce , an estimator for trajectory-level EP parametrized by , and put . Then, Eq. (II ###reference_###) can be rewritten as\nwhere has been used based on the one-to-one correspondence between and . Furthermore, since EP is odd under time reversal, i.e., , it is natural to impose the same condition on . 
This leads to the inequality\nwhich motivates the loss function\nso that the minimization of ensures the accurate EP estimation .\nIt is notable that defined above is a convex functional of . Thus, as long as the -dependence of is well behaved, any gradient-descent algorithm can reach the global minimum of without getting trapped in a local minimum. In this regard, the rugged loss function landscape is not a major issue of the NEEP.\nHowever, the performance of the NEEP strongly depends on how well is sampled. Since the second term of depends exponentially on , rare transitions with minute can make nonnegligible contributions to when is extremely large. Since the frequency of rare events is subject to considerable sampling noise, the performance of the original NEEP deteriorates in the presence of a strong nonequilibrium driving which induces rare transitions with large negative EP. In the following section, we propose a loss function that remedies this weakness of the NEEP."
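The variational principle reviewed above can be checked numerically. The sketch below uses the NEEP loss of Kim et al. [21] in the form E[exp(-h) - h] (our reading of the elided loss function) on a two-state toy model with a single antisymmetric parameter theta; its minimizer recovers the empirical log-ratio of forward to reverse transition counts, i.e., the trajectory-level EP of Eq. (1):

```python
import math

# Two-state check of the NEEP variational principle: with an antisymmetric
# one-parameter estimator h(a->b) = theta, h(b->a) = -theta, minimizing the
# empirical loss E[exp(-h) - h] over sampled transitions recovers the
# empirical log-ratio ln(n_ab / n_ba).
def neep_loss(theta, n_ab, n_ba):
    # n_ab forward samples contribute exp(-theta) - theta each;
    # n_ba reverse samples see h = -theta and contribute exp(theta) + theta each
    return n_ab * (math.exp(-theta) - theta) + n_ba * (math.exp(theta) + theta)

def minimize(n_ab, n_ba, lo=-10.0, hi=10.0, iters=200):
    # ternary search; the loss is strictly convex in theta
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if neep_loss(m1, n_ab, n_ba) < neep_loss(m2, n_ab, n_ba):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

n_ab, n_ba = 900, 100  # counts of forward and reverse transitions
theta_star = minimize(n_ab, n_ba)
assert abs(theta_star - math.log(n_ab / n_ba)) < 1e-6
```

Setting the first-order condition to zero confirms this analytically: the quadratic in exp(theta) has the unique positive root n_ab / n_ba.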
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Formulation of the -NEEP",
21
+ "text": "Here we formulate a generalization of the NEEP loss function with the goal of mitigating its strong sampling-noise dependence. We note that the loss function needs not be an estimator of average EP , for our goal is to estimate at the level of each trajectory. Thus, while the original NEEP uses the variational representation of the KL divergence corresponding to , we propose a different approach based on the variational representation of the -divergence, which quantifies the difference between a pair of probability distributions and as\nSince this reduces to the KL divergence in the limit , our approach generalizes the NEEP by introducing an extra parameter . To emphasize this aspect, we term our method the -NEEP.\nThe goal of the -NEEP is to find that minimizes the loss function\nwhere and are probability density functions,\nand is a real number other than and . See Appendix B ###reference_### for discussions of these two exceptional cases. It can be rigorously shown (see Appendix B ###reference_###) that satisfies the inequality\nwhere the equality is achieved if and only if for all . In other words, by minimizing to find , we also obtain an estimate for the ratio . We note that the properties of used here are also valid for a much more general class of loss functions, as discussed in [22 ###reference_22###, 23 ###reference_23###] (also see Appendix B ###reference_###).\nBased on Eq. (8 ###reference_###), we can construct a loss function\nNote that this reduces to the loss function of the original NEEP shown in Eq. (5 ###reference_###) in the limit .\nIf is sufficiently well behaved, the minimization of yields the minimizer which satisfies and . The former is generally not equal to average EP (unless ), but the latter ensures the accurate estimation of trajectory-level EP .\nComparing Eqs. 
(5 ###reference_###) and (9 ###reference_###), one readily observes that the exponential dependence on can be made much weaker in by choosing the value of between and . Since this mitigates the detrimental effects of the sampling error associated with rare trajectories with large negative , one can naturally expect that the performance of the -NEEP is much more robust against strong nonequilibrium driving. This is confirmed in the following sections.\nBefore proceeding, a few remarks are in order:\nThe loss function satisfies , so the -NEEP is symmetric under the exchange . For this reason, in the rest of this paper, we focus on the regime (the regime leads to very poor performance and is left out).\nFrom the antisymmetry , we may set the estimator to be related to the feedforward neural network (FNN) output as\nso that the neural network focuses on the estimators that satisfy the antisymmetry of EP for more efficient training. The method described so far is schematically illustrated in Fig. 1 ###reference_###.\nWe emphasize that the minimized is not directly related to average EP. In all cases, we compute the average EP by averaging over the sampled transitions."
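The antisymmetry constraint on the estimator mentioned above can be enforced by construction. One plausible reading of the elided parametrization, which is the convention of NEEP-style estimators (the paper's exact equation may differ in details), is h(s, s') = f(s, s') - f(s', s) for an unconstrained network output f:

```python
# Antisymmetrize an arbitrary function of a state pair so that the resulting
# estimator satisfies h(s', s) = -h(s, s') by construction, mirroring the
# antisymmetry of EP under time reversal.
def antisymmetrize(f):
    def h(s, s_next):
        return f(s, s_next) - f(s_next, s)
    return h

# stand-in for an FNN output: an arbitrary, non-antisymmetric smooth function
f = lambda s, s_next: 0.3 * s + 1.7 * s_next + 0.5 * s * s_next
h = antisymmetrize(f)

assert abs(h(1.0, 2.0) + h(2.0, 1.0)) < 1e-12  # antisymmetry holds for any f
```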
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "IV Examples",
27
+ "text": "To assess the performance of the -NEEP for various values of , we apply the method to toy models of nonequilibrium systems, namely the two-bead model, the Brownian gyrator, and the driven Brownian particle.\n###figure_2### (i) The two-bead model. This model has been used in a number of previous studies as a benchmark for testing EP estimators [4 ###reference_4###, 16 ###reference_16###, 18 ###reference_18###, 21 ###reference_21###]. The model consists of two one-dimensional (1D) overdamped beads which are connected to each other and to the walls on both sides by identical springs, see Fig. 2 ###reference_###(a). The beads are in contact with heat baths at temperatures and with . Denoting by () the bead in contact with the hot (cold) bath, the stochastic equations of motion are given by\nHere is the spring constant, the friction coefficient, and the Gaussian thermal noise with zero means and . For infinitesimal displacements , the associated EP is given by\nwhere denotes the Stratonovich product and the change of the system\u2019s Shannon entropy, namely\nfor the steady-state distribution . Since the system is fully linear, can be calculated analytically. Thus the EP of this model can be calculated exactly using Eq. (12 ###reference_###) and compared with the -NEEP result.\nTo see how the predicted EP differs from the true EP, we observe the behavior of the mean square error (MSE) . In Fig. 2 ###reference_###(b), we observe that strengthening the nonequilibrium driving (by increasing while keeping ) tends to impair the EP estimation. This is because a stronger driving makes the reverse trajectories of typical trajectories rarer, lowering the sample quality. The adverse effects of the nonequilibrium driving are the strongest for the original NEEP (), which are mitigated by choosing different values of . 
Remarkably, choosing leads to the most robust performance against the driving.\nAs an alternative measure of the estimator\u2019s performance, we also observe the ratio between the predicted average EP and the exact average EP . The results are shown in Fig. 2 ###reference_###(c), which exhibit two different regimes. As increases, there is a regime where the estimator overestimates average EP, which is followed by an underestimation regime. A detailed explanation for this behavior will be given in Sec. V ###reference_### using a simplified model. At the moment, we note that tends to deviate away from most strongly for the original NEEP (), while choosing different values of makes the ratio stay closer to . Again, the optimal value of seems to be .\n###figure_3### (ii) The Brownian gyrator. This simple model of a single-particle heat engine allows us to check the effects of a nonequilibrium driving apart from the temperature difference . The dynamics of the model is governed by\nwhere is the harmonic potential, and is a nonconservative force that drives the system out of equilibrium and enables work extraction. See Fig. 3 ###reference_###(a) for an illustration of this system. For infinitesimal displacements , the associated EP is given by\nwhere\nand the change of the system entropy. Again, the system is fully linear and the steady-state distribution can be calculated analytically, allowing exact calculations of EP at the trajectory level.\nSetting and , we vary the magnitude of to assess the robustness of the -NEEP in terms of the MSE and the ratio , as shown in Figs. 3 ###reference_###(b) and (c), respectively. The results are qualitatively similar to the case of the two-bead model: as the nonconservative driving gets stronger, the performance of the original NEEP () deteriorates the most, while other values of yield more robust results. Again, seems to be the optimal choice.\n(iii) The driven Brownian particle. 
While the two examples given above were both linear systems, we also consider a nonlinear system featuring a 1D overdamped Brownian particle in a periodic potential driven by a constant force . The motion of the particle is described by the Langevin equation\nwhere is a Gaussian white noise with unit variance. See Fig. 4 ###reference_###(a) for an illustration of the model. For sufficiently large , this model can approximate the behaviors of the Markov jump process on a discrete chain. For this model, the EP associated with the infinitesimal displacement is given by\nwhere again denotes the Shannon entropy change for the steady-state distribution . Since the system is 1D, it is straightforward to obtain by numerical integration. Thus, the EP of this model can also be calculated exactly and compared to the -NEEP result.\n###figure_4### Fixing , the performance of the -NEEP for this model is shown in Figs. 4 ###reference_###(b) and (c) in terms of the MSE and the ratio , respectively. Due to the presence of a strong background driving (), there are already considerable differences among different methods at . But it is worth noting that increasing the amplitude of the periodic potential clearly increases the MSE and makes deviate farther away from for the original NEEP (). This may be the consequence of rarer movements (jumps from one potential well to the next) across the system as the potential well gets deeper, which means rare trajectories are even more poorly sampled. The -NEEPs with nonzero values of are much more robust against the increase of , with showing the best performance overall.\n###figure_5###"
28
+ },
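The two-bead benchmark described in this section is straightforward to reproduce numerically. The sketch below integrates the standard overdamped two-bead Langevin equations with an Euler–Maruyama scheme; the function name and all parameter values (`k`, `gamma`, `dt`, the temperatures) are our own illustrative choices, not the paper's benchmark settings.

```python
import numpy as np

def simulate_two_beads(Th=10.0, Tc=1.0, k=1.0, gamma=1.0,
                       dt=1e-3, n_steps=50_000, seed=0):
    """Euler-Maruyama integration of the two-bead model.

    Bead 0 touches the hot bath (Th), bead 1 the cold bath (Tc); each
    bead is tied to its neighbor and to a wall by identical springs of
    constant k.  Returns the (n_steps + 1, 2) trajectory.
    """
    rng = np.random.default_rng(seed)
    T = np.array([Th, Tc])
    x = np.zeros(2)
    traj = np.empty((n_steps + 1, 2))
    traj[0] = x
    for i in range(n_steps):
        # Linear drift: -(k/gamma) * (2*x_i - x_j) for each bead.
        drift = (k / gamma) * np.array([x[1] - 2.0 * x[0],
                                        x[0] - 2.0 * x[1]])
        # Thermal noise with variance 2*T_i*dt/gamma per step.
        noise = np.sqrt(2.0 * T * dt / gamma) * rng.standard_normal(2)
        x = x + drift * dt + noise
        traj[i + 1] = x
    return traj
```

In the steady state the hot bead fluctuates more strongly than the cold one; it is exactly this asymmetry between forward and reverse trajectory ensembles that the EP estimators are trained to detect.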
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Simple Gaussian Model",
33
+ "text": "The results shown thus far clearly indicate that, by choosing a nonzero value of , the -NEEP can exhibit a much more robust performance against the adverse effects of the nonequilibrium driving. Moreover, seems to exhibit the best performance in many cases. To gain more intuition into these results, we simplify the EP estimation problem to the density-ratio estimation problem for a 1D random variable. To be specific, we estimate the log ratio given samples drawn from the distribution . It is intuitively clear that this problem is structurally equivalent to EP estimation.\nFor further simplification, we set\nHere is a suitable normalization factor, the positive mean of the distribution, the width of the distribution, and a positive number truncating the tails of the distribution. While corresponds to the perfect sampling of a Gaussian distribution, a finite corresponds to the case where the tails of the distribution are poorly sampled.\nFor , the correct answer to the problem is a linear function , where . Thus, for further simplicity, we focus on the one-parameter model , which estimates using only a single parameter . For this problem, the suitable loss function is obtained as an analog of Eq. (9 ###reference_###):\nIf is large but finite, the minimum of this loss function shifts to , where can be expanded to the leading orders in :\nThis clearly shows that gives the least shift , as also illustrated by various results shown in Fig. 5 ###reference_###.\nIn Fig. 5 ###reference_###(a), we show that the shift of the minimum tends to increase as the tail sampling becomes poorer (i.e., decreases). The landscapes of the loss function , shown in the inset of Fig. 5 ###reference_###(a), also confirm this observation. The increase of the error with the potential depth in Figs. 1 ###reference_###(d) and 2 ###reference_###(b) may primarily be due to the same effect.\nIn Fig. 
5 ###reference_###(b), we plot the ratio between the estimated minimum and the true minimum as a function of the mean , which is an analog of the nonequilibrium driving. We note that here is the lowest value of at which the slope of the loss function becomes less than . We observe that an overestimation regime () crosses over to an underestimation regime () as grows. This is in striking agreement with the trends shown in Fig. 2 ###reference_###(a). The reason why underestimates for large can be understood by the flattened loss function landscapes shown in the inset of Fig. 5 ###reference_###(b). In this regime, the dynamics of (starting from = 0) slows down, ending up at a value (filled diamonds) even lower than (empty diamonds). This effect is due to the samples with vanishing when is too large. We expect that a similar mechanism might be at play behind the observed behavior of shown in Fig. 2 ###reference_###(a). If we had used a broader range of nonequilibrium driving, the same behaviors might have been observed for other models as well, although this remains to be checked.\nThe one-parameter model also allows us to examine the effects of the finite minibatch size . While the ideal loss function is given in Eq. (20 ###reference_###), the loss function used in the actual training looks like\nwhere are i.i.d. Gaussian random variables of mean and variance . When is large and finite, using the central limit theorem (CLT), the gradient of this loss function can be approximated as [26 ###reference_26###, 27 ###reference_27###]\nwhere , , and . When the stochastic gradient descent reaches the steady state, the MSE of is given by\nThis leading-order behavior is shown in Fig. 5 ###reference_###(c) for various values of . For all cases, the MSE of is minimized at , which is consistent with the smallest error bars observed at in Figs. 1 ###reference_### and 2 ###reference_###. 
Hence, yields the most consistent EP estimator.\nDirect measurements of the loss function gradient at the minimum also confirm the above result. As shown in Fig. 5 ###reference_###(d), the gradient is far more broadly distributed for than for . Moreover, due to the subleading effects (beyond the CLT) of finite , the gradient for features a large skewness. These show that the training dynamics for the original NEEP () tends to be far more volatile and unstable than for the -NEEP with ."
34
+ },
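The simplified density-ratio problem in this section has a closed-form answer in the untruncated Gaussian case: for a Gaussian p with mean mu and standard deviation sigma, log p(x) - log p(-x) = 2*mu*x/sigma**2, i.e. a linear function with slope theta0 = 2*mu/sigma**2 (a standard Gaussian identity; the paper's symbols for the slope are elided in this excerpt). A quick numerical check:

```python
import numpy as np

def log_ratio(x, mu, sigma):
    """log p(x) - log p(-x) for a Gaussian density p (constants cancel)."""
    logp = lambda y: -0.5 * ((y - mu) / sigma) ** 2
    return logp(x) - logp(-x)

mu, sigma = 3.0, 1.0
x = np.linspace(-4.0, 4.0, 101)
# The log ratio is exactly linear with slope theta0 = 2*mu/sigma**2.
slope, intercept = np.polyfit(x, log_ratio(x, mu, sigma), 1)
theta0 = 2.0 * mu / sigma ** 2
```

This is why a one-parameter linear model suffices for the toy problem: any deviation of the fitted slope from theta0 directly measures the estimation error.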
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "VI Summary and outlook",
39
+ "text": "We proposed the -NEEP, a generalization of the NEEP for estimating steady-state EP at the trajectory level. By choosing a value of between and , the -NEEP weakens the exponential dependence of the loss function on the EP estimator, effectively mitigating the adverse effects induced by poor sampling of transitions associated with large negative EP in the presence of strong nonequilibrium driving and/or deep potential wells. We also observed that tends to exhibit the optimal performance, which can be understood via a simplification of the original EP estimation problem, whose loss function landscape and relaxation properties are analytically tractable. The -NEEP thus provides a powerful method for estimating the EP for much broader range of the nonequilibrium driving force and the time scale of dynamics. Identification of even better loss functions and optimization of other hyperparameters (network size, number of iterations, etc.) are left as future works. It would also be interesting to apply the -NEEP to estimations of the EP of the Brownian movies [28 ###reference_28###] and stochastic systems with odd-parity variables [29 ###reference_29###], which have been studied using the original NEEP method.\nAcknowledgments. \u2014 This work was supported by the POSCO Science Fellowship of the POSCO TJ Park Foundation. E.K. and Y.B. also thank Junghyo Jo and Sangyun Lee for helpful comments.\n###figure_6### ###figure_7###"
40
+ }
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Training details",
47
+ "text": "We always use the fully connected network (FCN) with three hidden layers, with each layer composed of nodes. Each training dataset consists of trajectories. The neural network parameters are updated using the ReLU activation function and the Adam optimizer. The learning rate is fixed to and the weight decay is fixed to . We halt the training after iterations, except for the results shown in Figs. 8 ###reference_### and 9 ###reference_### (see Appendix C ###reference_###), where we continue the training for a longer time to check the overfitting effects. All trainings are done on PyTorch with NVIDIA GeForce RTX 3090.\nIn subfigure (b) of Figs. 2 ###reference_###\u20134 ###reference_###, each minibatch consists of trajectories. On the other hand, in subfigure (c) of Figs. 2 ###reference_###\u20134 ###reference_###, each minibatch consists of trajectories."
48
+ },
49
+ {
50
+ "section_id": "Appendix 2",
51
+ "parent_section_id": null,
52
+ "section_name": "Appendix B Density ratio estimation via -divergence",
53
+ "text": "Here we show that the the loss function given in Eq. (7 ###reference_###), whose minimization allows us to estimate the ratio between two probability density functions, can be generalized even further using the concept of -divergence. Consider a convex, twice-differentiable real-valued function . Then, the inequality\nholds. We can verify this by differentiating the left-hand side (LHS) with respect to , which yields . Thus, the LHS has a local minimum at , and this is the only local minimum since is convex. In addition, the second derivative of the LHS at equals , which is positive by the convexity. This proves the inequality (25 ###reference_###).\nUsing this result, we can design a loss function whose minimum is equal to the negative -divergence between two probability distributions and . To be specific, for any function , we define\nUsing Eq. (B ###reference_###), we conclude that\nwhere is the -divergence between the distributions and , and the equality holds if and only if for all . By minimizing , we can estimate as well as .\nThe loss function and the associated -divergence discussed in the main text are obtained by choosing the function to be\nNote that and . It is straightforward to obtain Eq. (9 ###reference_###) and its extensions to the cases and from this choice.\n###figure_8### ###figure_9###"
54
+ },
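The construction in this appendix parallels the standard variational (Legendre-dual) characterization of f-divergences due to Nguyen, Wainwright, and Jordan. Stated in that common form, with notation that may differ from the paper's Eq. (25): for a convex f with convex conjugate f*,

```latex
D_f(P \,\|\, Q)
  \;=\; \sup_{T}\,\Big\{ \mathbb{E}_{x\sim P}\big[T(x)\big]
        \;-\; \mathbb{E}_{x\sim Q}\big[f^{*}\!\big(T(x)\big)\big] \Big\},
\qquad
f^{*}(t) \;=\; \sup_{u}\,\big\{\, u t - f(u) \,\big\}.
```

The supremum is attained at T(x) = f'(p(x)/q(x)), which is why minimizing the negative of such a bound simultaneously evaluates the f-divergence and recovers the density ratio p/q, consistent with the conclusion of this appendix.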
55
+ {
56
+ "section_id": "Appendix 3",
57
+ "parent_section_id": null,
58
+ "section_name": "Appendix C Extra numerical results",
59
+ "text": "In the literature, the extent of agreement between a prediction and the true value is often expressed by the coefficient of determination . Here we check how the behaviors of differ as the value of changes for the cases of the two-bead model and the driven 1D Brownian particle.\nFor the two-bead model, as shown in Fig. 6 ###reference_###(a), exhibits a nonmonotonic behavior as a function of . The decrease of with increasing reflects the detriment of the -NEEP performance as the nonequilibrium driving gets stronger. Meanwhile, the decrease of as decreases (getting closer to equilibrium ) is due to the overfitting phenomenon discussed in the next section, which disrupts the linear relationship between the predicted EP and the true EP.\nFor the driven Brownian particle, as shown in Fig. 6 ###reference_###(b), always increases with . This may seem contradictory to how the MSE tends to increase or stay constant with increasing in Fig. 4 ###reference_###(b). Indeed, higher only means that there is a good linear relationship between the EP estimate and the true EP , not that and are close to each other. When is increased, due to the slower dynamics, we may have for transitions with positive EP and for transitions with negative EP, which can make the linear relationship between and appear stronger. This example clearly shows that is not an adequate measure of the performance of EP estimators.\nThe minibatch refers to the group of samples used for computing the gradient of the loss function. Smaller (larger) minibatches increase (decrease) the noisy component of the gradient, which in turn affects the performance of the -NEEP.\nWe explicitly check the effects of the minibatch size using the two-bead model with and , as shown in Fig. 7 ###reference_###. We use the ratio and the MSE as two different measures of the -NEEP performance. For small minibatches, the highly skewed distribution of the stochastic gradient shown in Fig. 
5 ###reference_###(d) causes underestimation of the EP. For large minibatches, the noisy component of the loss-function gradient decreases, revealing the properties of the loss function landscape of the training dataset. As discussed using the Gaussian model in Sec. V ###reference_###, the loss function landscape at a moderately strong nonequilibrium driving leads to the overestimation of the EP. Thus, as the minibatch size is increased, grows beyond .\nThe nonmonotonic behaviors of the MSE also hint at the existence of an optimal minibatch size at the tradeoff between the skewed noise in the gradient (which drives the neural network towards underestimation) and the loss function landscape tilted towards overestimation. For both measures, the superiority of to is manifest.\nIn many cases, when the training continues for too many iterations, artificial neural networks are known to exhibit overfitting behaviors. As shown in Figs. 8 ###reference_### and 9 ###reference_###, we checked whether the -NEEP is also subject to the same phenomena as the training continues up to iterations. Towards this end, we created two independent datasets of trajectories exhibited by the two-bead model, namely the training set and the test set. Only the former was used during the training of the -NEEP, and we measured the MSE and the ratio to assess the performance of the -NEEP for each dataset.\nIn Fig. 8 ###reference_###, we show the results for the weak nonequilibrium driving ( and ). The first and the third columns show the two different measures of performance for the training dataset and the test dataset. Meanwhile, the second and the fourth columns show the difference between the corresponding measures obtained for two datasets. The overfitting phenomena are manifest from the increase of the MSE towards the end of the training. Interestingly, overfitting leads to an overestimation of the average EP only for the training dataset. 
We also note that the value of is largely irrelevant to the extent of overfitting. This phenomenon can be explained as follows. Near equilibrium, the neural network swiftly reaches the loss function minimum. However, as the training continues, the neural network starts to see the detailed fluctuations of the training dataset. This makes the functional form of the estimator very rough, leading to the increase of the MSE for both datasets. But while the neural network now believes all trajectories in the training dataset to be highly irreversible and assigns high EP to them, the EP assigned to the trajectories in the test dataset stays unbiased. Thus, grows larger only for the training dataset.\nIn Fig. 9 ###reference_###, we show the results for the strong nonequilibrium driving ( and ). The subfigures are organized in exactly the same way as in Fig. 8 ###reference_###. In this case, the overfitting effects do exist. But they are not as pronounced as in the case of the weaker nonequilibrium driving, and the differences between the training and the test datasets stay small. Note that the curves for exhibit strong fluctuations, which is in agreement with the large fluctuations of the gradient shown in Fig. 5 ###reference_###(d)."
60
+ }
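Appendix C's caveat about the coefficient of determination is easy to demonstrate: an estimator that is perfectly linearly related to the truth but systematically biased attains R² of essentially 1 while its MSE remains large. A toy check (variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = rng.normal(size=1000)        # stand-in for true EP values
sigma_pred = 2.0 * sigma_true + 1.0       # perfectly linear, badly biased

# R^2 as the squared Pearson correlation: it measures linearity only,
# not closeness of the prediction to the truth.
r2 = np.corrcoef(sigma_true, sigma_pred)[0, 1] ** 2
mse = np.mean((sigma_true - sigma_pred) ** 2)
```

Here `r2` is 1 up to rounding even though the predictions are off by a factor of two plus an offset, illustrating why the MSE is the more informative performance measure for EP estimators.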
61
+ ],
62
+ "tables": {},
63
+ "image_paths": {
64
+ "1": {
65
+ "figure_path": "2303.02901v2_figure_1.png",
66
+ "caption": "Figure 1: Schematic illustration of the neural-network implementation of the \u03b1\ud835\udefc\\alphaitalic_\u03b1-NEEP.",
67
+ "url": "http://arxiv.org/html/2303.02901v2/x1.png"
68
+ },
69
+ "2": {
70
+ "figure_path": "2303.02901v2_figure_2.png",
71
+ "caption": "Figure 2: (a) Illustration of the two-bead model. (b) Mean square error (MSE) of the EP estimate for various temperature differences. (c) Ratio between the estimated value \u03c3predsubscript\ud835\udf0epred\\sigma_{\\mathrm{pred}}italic_\u03c3 start_POSTSUBSCRIPT roman_pred end_POSTSUBSCRIPT and the true value \u03c3\ud835\udf0e\\sigmaitalic_\u03c3 of average EP for the two-bead model. Temperature of the cold bath is fixed at Tc=1subscript\ud835\udc47c1T_{\\mathrm{c}}=1italic_T start_POSTSUBSCRIPT roman_c end_POSTSUBSCRIPT = 1. Each data point and error bar are obtained from 40404040 independent trainings.",
72
+ "url": "http://arxiv.org/html/2303.02901v2/x2.png"
73
+ },
74
+ "3": {
75
+ "figure_path": "2303.02901v2_figure_3.png",
76
+ "caption": "Figure 3: (a) Illustration of the Brownian gyrator. Circles represent the equipotential lines and the dashed arrows indicate the directions of the nonconservative driving. (b) MSE of the EP estimate for the Brownian gyrator model as the magnitude of nonconservative force, \u03b5=\u2212\u03b4\ud835\udf00\ud835\udeff\\varepsilon=-\\deltaitalic_\u03b5 = - italic_\u03b4, is varied. (c) Ratio between the estimated value \u03c3predsubscript\ud835\udf0epred\\sigma_{\\mathrm{pred}}italic_\u03c3 start_POSTSUBSCRIPT roman_pred end_POSTSUBSCRIPT and the true value \u03c3\ud835\udf0e\\sigmaitalic_\u03c3 of average EP for the Brownian gyrator. Temperatures are fixed at Th=10subscript\ud835\udc47h10T_{\\mathrm{h}}=10italic_T start_POSTSUBSCRIPT roman_h end_POSTSUBSCRIPT = 10 and Tc=1subscript\ud835\udc47c1T_{\\mathrm{c}}=1italic_T start_POSTSUBSCRIPT roman_c end_POSTSUBSCRIPT = 1. Each data point and error bar are obtained from 40404040 independent trainings.",
77
+ "url": "http://arxiv.org/html/2303.02901v2/x3.png"
78
+ },
79
+ "4": {
80
+ "figure_path": "2303.02901v2_figure_4.png",
81
+ "caption": "Figure 4: (a) Illustration of the driven Brownian particle. (b) MSE of the EP estimate for the driven Brownian particle as the potential depth A\ud835\udc34Aitalic_A is varied. (c) Ratio between the estimated value \u03c3predsubscript\ud835\udf0epred\\sigma_{\\mathrm{pred}}italic_\u03c3 start_POSTSUBSCRIPT roman_pred end_POSTSUBSCRIPT and the true value \u03c3\ud835\udf0e\\sigmaitalic_\u03c3 of the average EP for the driven Brownian particle. Strength of the nonequilibrium driving is fixed at f=32\ud835\udc5332f=32italic_f = 32 and the temperature at T=1\ud835\udc471T=1italic_T = 1. Each data point and error bar are obtained from 40404040 independent trainings.",
82
+ "url": "http://arxiv.org/html/2303.02901v2/x4.png"
83
+ },
84
+ "5": {
85
+ "figure_path": "2303.02901v2_figure_5.png",
86
+ "caption": "Figure 5: Performance of the exactly solvable one-parameter model. (a) Shift \u0394\u2062\u03b8\u0394\ud835\udf03\\Delta\\thetaroman_\u0394 italic_\u03b8 of the loss function minimum as a function of the truncation parameter k\ud835\udc58kitalic_k. Circles are results obtained by numerical minimization, and solid lines are from the small 1/k1\ud835\udc581/k1 / italic_k expansion. (Inset) Loss function landscapes, with circles indicating the minima. We fixed \u03bc=3\ud835\udf073\\mu=3italic_\u03bc = 3, \u03c3=1\ud835\udf0e1\\sigma=1italic_\u03c3 = 1, and \u03b1=0\ud835\udefc0\\alpha=0italic_\u03b1 = 0. (b) Ratio of the estimated minimum \u03b8*superscript\ud835\udf03\\theta^{*}italic_\u03b8 start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT to the true minimum \u03b80subscript\ud835\udf030\\theta_{0}italic_\u03b8 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT as the bias \u03bc\ud835\udf07\\muitalic_\u03bc is varied. The optimal points are calculated using the criterion that the loss function gradient satisfies |\u2202\u03b8\u2112\u03b1\u2062(\u03b8)|<10\u22123subscript\ud835\udf03subscript\u2112\ud835\udefc\ud835\udf03superscript103|\\partial_{\\theta}\\mathcal{L}_{\\alpha}(\\theta)|<10^{-3}| \u2202 start_POSTSUBSCRIPT italic_\u03b8 end_POSTSUBSCRIPT caligraphic_L start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT ( italic_\u03b8 ) | < 10 start_POSTSUPERSCRIPT - 3 end_POSTSUPERSCRIPT for the first time as \u03b8\ud835\udf03\\thetaitalic_\u03b8 increases from 00. We fixed k=4\ud835\udc584k=4italic_k = 4 and \u03c3=1\ud835\udf0e1\\sigma=1italic_\u03c3 = 1. (Inset) Loss function landscape. Open diamonds indicate the true minima \u03b80subscript\ud835\udf030\\theta_{0}italic_\u03b8 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT, and the filled diamonds represent the estimated minima \u03b8*superscript\ud835\udf03\\theta^{*}italic_\u03b8 start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT. 
The parameters \u03b1=\u22120.5\ud835\udefc0.5\\alpha=-0.5italic_\u03b1 = - 0.5 and k=4\ud835\udc584k=4italic_k = 4 are fixed. (c) MSE of \u03b8\ud835\udf03\\thetaitalic_\u03b8. The vertical dashed line shows that the error is minimized at \u03b1=\u22120.5\ud835\udefc0.5\\alpha=-0.5italic_\u03b1 = - 0.5. (d) Distribution of the loss function gradient \u2202\u03b8\u2112\u03b1subscript\ud835\udf03subscript\u2112\ud835\udefc\\partial_{\\theta}\\mathcal{L}_{\\alpha}\u2202 start_POSTSUBSCRIPT italic_\u03b8 end_POSTSUBSCRIPT caligraphic_L start_POSTSUBSCRIPT italic_\u03b1 end_POSTSUBSCRIPT at the minimum \u03b80=2subscript\ud835\udf0302\\theta_{0}=2italic_\u03b8 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 for \u03bc=\u03c3=1\ud835\udf07\ud835\udf0e1\\mu=\\sigma=1italic_\u03bc = italic_\u03c3 = 1.",
87
+ "url": "http://arxiv.org/html/2303.02901v2/x5.png"
88
+ },
89
+ "6": {
90
+ "figure_path": "2303.02901v2_figure_6.png",
91
+ "caption": "Figure 6: Coefficient of determination R2superscript\ud835\udc452R^{2}italic_R start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT for (a) the two-bead model with Tc=1subscript\ud835\udc47c1T_{\\mathrm{c}}=1italic_T start_POSTSUBSCRIPT roman_c end_POSTSUBSCRIPT = 1 and for (b) the driven Brownian particle with f=32\ud835\udc5332f=32italic_f = 32. 104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT trajectories are used for each minibatch, and error bars indicate the standard deviations obtained from 40404040 independent trainings.",
92
+ "url": "http://arxiv.org/html/2303.02901v2/x6.png"
93
+ },
94
+ "7": {
95
+ "figure_path": "2303.02901v2_figure_7.png",
96
+ "caption": "Figure 7: Effects of the minibatch size on the performance of the \u03b1\ud835\udefc\\alphaitalic_\u03b1-NEEP for the two-bead model with Th=1000subscript\ud835\udc47h1000T_{\\mathrm{h}}=1000italic_T start_POSTSUBSCRIPT roman_h end_POSTSUBSCRIPT = 1000 and Tc=1subscript\ud835\udc47c1T_{\\mathrm{c}}=1italic_T start_POSTSUBSCRIPT roman_c end_POSTSUBSCRIPT = 1. Error bars indicate the standard deviations obtained from 40404040 independent trainings.",
97
+ "url": "http://arxiv.org/html/2303.02901v2/x7.png"
98
+ },
99
+ "8": {
100
+ "figure_path": "2303.02901v2_figure_8.png",
101
+ "caption": "Figure 8: Training dynamics of the \u03b1\ud835\udefc\\alphaitalic_\u03b1-NEEP for the two-bead model at Th=10subscript\ud835\udc47h10T_{\\mathrm{h}}=10italic_T start_POSTSUBSCRIPT roman_h end_POSTSUBSCRIPT = 10. Each minibatch consists of 105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT trajectories. The first and the third columns show the performance for the training set (blue/dark gray curve) and the test set (orange/light gray curve). The second and the fourth columns show the difference in performance between the two datasets. The first (last) two columns correspond to \u03b1=\u22120.5\ud835\udefc0.5\\alpha=-0.5italic_\u03b1 = - 0.5 (\u03b1=0\ud835\udefc0\\alpha=0italic_\u03b1 = 0).",
102
+ "url": "http://arxiv.org/html/2303.02901v2/x8.png"
103
+ },
104
+ "9": {
105
+ "figure_path": "2303.02901v2_figure_9.png",
106
+ "caption": "Figure 9: \nTraining dynamics of the \u03b1\ud835\udefc\\alphaitalic_\u03b1-NEEP for the two-bead model at Th=3000subscript\ud835\udc47h3000T_{\\mathrm{h}}=3000italic_T start_POSTSUBSCRIPT roman_h end_POSTSUBSCRIPT = 3000 and Tc=1subscript\ud835\udc47c1T_{\\mathrm{c}}=1italic_T start_POSTSUBSCRIPT roman_c end_POSTSUBSCRIPT = 1. Each minibatch consists of 105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT trajectories. The first and the third columns show the performance for the training set (blue/dark gray curve) and the test set (orange/light gray curve). The second and the fourth columns show the difference in performance between the two datasets. The first (last) two columns correspond to \u03b1=\u22120.5\ud835\udefc0.5\\alpha=-0.5italic_\u03b1 = - 0.5 (\u03b1=0\ud835\udefc0\\alpha=0italic_\u03b1 = 0).",
107
+ "url": "http://arxiv.org/html/2303.02901v2/x9.png"
108
+ }
109
+ },
110
+ "validation": true,
111
+ "references": [],
112
+ "url": "http://arxiv.org/html/2303.02901v2"
113
+ }
20240119/2303.05015v2.json ADDED
@@ -0,0 +1,364 @@
1
+ {
2
+ "title": "Smooth and Stepwise Self-Distillation for Object Detection",
3
+ "abstract": "Distilling the structured information captured in feature maps has contributed to improved results for object detection tasks, but requires careful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved state-of-the-art performance for object detection despite making several simplifying architectural assumptions.\nBuilding on this work, we propose Smooth and Stepwise Self-Distillation (sssd) for object detection.\nOur sssd architecture forms an implicit teacher from object labels and a feature pyramid network backbone to distill label-annotated feature maps using Jensen-Shannon distance, which is smoother than distillation losses used in prior work.\nWe additionally add a distillation coefficient that is adaptively configured based on the learning rate.\nWe extensively benchmark sssd against a baseline and two state-of-the-art object detector architectures on the COCO dataset by varying the coefficients and backbone and detector networks.\nWe demonstrate that sssd achieves higher average precision in most experimental settings, is robust to a wide range of coefficients, and benefits from our stepwise distillation procedure.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Knowledge distillation is a technique for transferring the information contained in the feature maps and model outputs of a large teacher model to a typically smaller student model [1 ###reference_1###, 2 ###reference_2###].\nAs a result, student models have lower storage and memory requirements and yield more efficient inference, enabling use in limited resource or real-time settings like in edge devices or autonomous vehicles [3 ###reference_3###, 4 ###reference_4###].\nObject detection is among the largest beneficiary of knowledge distillation [5 ###reference_5###, 6 ###reference_6###, 7 ###reference_7###] and transfer learning on related tasks [8 ###reference_8###], but these techniques require careful selection of a baseline teacher model and expensive pre-training [7 ###reference_7###, 9 ###reference_9###].\nRecent work removes the dependency on a pre-trained teacher entirely, e.g. by collaboratively training a collection of student networks (collaborative learning) [10 ###reference_10###] or smoothing class labels (label regularization) [11 ###reference_11###, 12 ###reference_12###]; however, these methods have largely focused on image classification.\nUnlike traditional transfer learning and knowledge distillation, self-distillation aims at extracting knowledge from the data labels during feature extraction within the same backbone model [5 ###reference_5###, 6 ###reference_6###, 7 ###reference_7###, 13 ###reference_13###, 14 ###reference_14###]; this eliminates the need for expensive pre-training of a teacher network.\nLabelEnc is a recently developed self-distillation method for object detection that encodes label information within the feature maps, providing intermediate supervision at internal neural network layers and achieving an approximately 2% improvement over prior work in the COCO dataset [14 ###reference_14###].\nBuilding on LabelEnc, label-guided self-distillation (LGD) leverages both label- and feature map-encodings as knowledge 
and improved the benchmark set by LabelEnc on COCO [13 ###reference_13###].\nWhile LabelEnc and LGD achieve state-of-the-art performance, they make simplifying\narchitectural assumptions.\nFirst, they consider mean squared error (MSE) as the only distillation loss, which is not robust to the noisy or imperfect teachers that are commonplace in self-distillation settings [15 ###reference_15###]. Second, there is no consideration for how the knowledge distillation coefficient affects the total loss or overall performance. In this paper, we explore the limitations of MSE as a self-distillation loss and the sensitivity of self-distillation to .\nWe propose Smooth and Stepwise Self-Distillation (sssd) by combining the Jensen-Shannon (JS) divergence with a that is adaptively configured based on the learning rate in a stepwise manner (Fig. 1 ###reference_###).\nWe summarize our contributions as follows:\nWe present Smooth and Stepwise Self-Distillation (sssd), which combines stepwise self-distillation with a smooth, bounded, and symmetric distance that is robust to noise (JS) [16 ###reference_16###, 17 ###reference_17###, 18 ###reference_18###].\nWe study the sensitivity of self-distillation to the distillation coefficient under a variety of architectural assumptions, providing insight on how influences model performance.\nWe thoroughly benchmark sssd and demonstrate higher average precision than previous self-distillation approaches in most configurations of the backbone and detector networks.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Proposed Method",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Smooth Self-Distillation",
21
+ "text": "Leveraging prior work on self-distillation for object detection, the features are obtained from a backbone feature pyramid network with scales [14 ###reference_14###, 13 ###reference_13###].\nWe define to be the set of features from the backbone feature pyramid network where is a vector of features at the scale, each pyramid has dimension , and .\nSimilarly, let be the feature maps obtained from a spatial transformer network [19 ###reference_19###] (STN) by the label-annotated feature maps (denoted by ) in the fusion component (Fig. 1 ###reference_###).\nExisting self-distillation methods for object detection use mean squared error (MSE) to calculate the distillation loss [13 ###reference_13###, 14 ###reference_14###]:\nwhere is the total number of feature map elements.\nThe Kullback-Leibler (KL) divergence is another commonly used loss function that was used to initially define knowledge distillation [1 ###reference_1###] and used subsequently across many applications [20 ###reference_20###, 21 ###reference_21###, 22 ###reference_22###, 23 ###reference_23###]:\nHowever, the KL divergence has several limitations.\nFor probability distributions and , is not bounded, which may result in model divergence during training, and is sensitive to regions of and that have low probability; e.g., can be large when for an event even if is small when is close to [24 ###reference_24###].\nTo address these issues, we use the Jensen-Shannon (JS) divergence as a new measure for knowledge distillation in object detection tasks.\nUnlike KL divergence, the JS divergence is bounded by , symmetric, does not require absolute continuity [25 ###reference_25###], and has been shown to be robust to label noise [16 ###reference_16###, 17 ###reference_17###, 18 ###reference_18###] and imperfect teachers that are commonplace in self-distillation settings [15 ###reference_15###]:\nwhere .\nIn this work, we consider the JS distance, which is a metric defined by .\nWe define the 
distillation loss, , as:\nThe detection loss is defined as:\nwhere the refers to the shared detection head, is the ground truth, and is a classification and regression object detection loss.\nThereby, we obtain the total training objective as:\nwhere is a coefficient for the distillation loss. Our choice of functional form for was motivated by research suggesting smooth loss functions improve deep neural network training and performance [26 ###reference_26###, 27 ###reference_27###]; since the JS distance is considered to be a smooth compromise between and , we term this knowledge distillation method as smooth self-distillation."
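The JS-distance distillation loss described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the softmax normalization of the feature maps and the averaging over pyramid scales are assumptions made here to keep the example self-contained.

```python
import numpy as np

def softmax(x):
    # Normalize a feature vector into a probability distribution.
    e = np.exp(x - x.max())
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence KL(p || q); eps guards against log(0).
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_distance(p, q):
    # JS distance: square root of the JS divergence, computed against the
    # mixture m = (p + q) / 2. Unlike KL, it is symmetric and bounded.
    m = 0.5 * (p + q)
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    return float(np.sqrt(jsd))

def distill_loss(student_feats, teacher_feats):
    # Average the JS distance between flattened, normalized feature maps
    # over the pyramid scales (a simplifying assumption).
    return sum(
        js_distance(softmax(s.ravel()), softmax(t.ravel()))
        for s, t in zip(student_feats, teacher_feats)
    ) / len(student_feats)
```

With natural logarithms, the JS divergence is bounded by ln 2, so this distance never exceeds sqrt(ln 2), which is what makes it a safer distillation target than the unbounded KL divergence.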
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Stepwise Self-Distillation",
27
+ "text": "Learning rate scheduling is broadly used in large scale deep learning as an important mechanism to adjust the learning rate during training, typically through learning rate reduction according to a predefined schedule.\nTo help the model continue learning from self-distillation during learning rate decay, we propose stepwise self-distillation to compensate for the lessened impact of the self-distillation loss caused by a reduced learning rate.\nIn our setting, the backbone model is frozen and the detector is trained in the first 20 iterations.\nAn initial is assigned to the distillation loss empirically after the first 20 iterations; selection of an empirical is elaborated in the experimental section.\nWe redefine the in stepwise self-distillation as a step function of and a that depends on the training iteration.\nSince in our model training the learning rate begins decaying at iteration , we define as:"
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Experiments",
33
+ "text": "We compared sssd with two state-of-the-art (SOTA) self-distillation architectures for object detection, LabelEnc [14 ###reference_14###] and LGD [13 ###reference_13###], and a non-distillation baseline model.\nAll experiments were conducted using the official code repositories for LabelEnc [28 ###reference_28###] and LGD [29 ###reference_29###], using a batch size of on NVIDIA v100 GPUs and configurations specified in their official GitHub repositories.\nOur experiments tested different backbone networks, ResNet-50 (R-50) and ResNet-101 (R-101), and explored three popular detectors: Faster R-CNN (FRCN) [30 ###reference_30###], fully convolutional one-stage object detector (FCOS) [31 ###reference_31###] and RetinaNet [32 ###reference_32###].\nAll experiments were validated on the Microsoft Common Objects in Context (COCO) dataset with categories using commonly reported metrics based on mean average precision (AP) and other detailed metrics: APs, APm, and APl, which are the AP for small, medium and large objects, and AP50 and AP75, which are the AP at IoU=0.50 and IoU=0.75 where IoU is the intersection over union [33 ###reference_33###]."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Comparisons with SOTA Results",
39
+ "text": "We first compared sssd with competing methods on the COCO data based on AP and using two backbone networks, R-50 and R-101, and three detectors, FRCN, RetinaNet and FCOS (Table 1 ###reference_###).\nCompared to the baseline model, our approach achieved an AP improvement of approximately , , and for the FRCN, RetinaNet, and FCOS detectors respectively.\nOur method improved on the AP of LabelEnc by approximately 2.8% for FRCN, 2.2% for FRCN, and more than 1% for other architectural configurations.\nWith respect to LGD, sssd achieves an almost gain in AP for the FRCN and FCOS configurations and improvements in other FRCN and FCOS settings.\nSince the performance of LGD is most comparable to sssd, we further investigated the performance of LGD and sssd using variations of AP (Table 2 ###reference_###).\nIn the RetinaNet setting, our proposed method achieved a AP performance gain ( versus ) for objects with small bounding boxes (APs).\nThe results for the other detectors demonstrate that sssd performs relatively well compared with LGD primarily due to improved AP for objects with medium or large bounding boxes (APm and APl).\nFCOS-based architectures yielded the best AP results for both methods where sssd outperformed LGD in all AP-related measures besides APs, including a , , and gain over LGD in AP50, APm, and APl respectively."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Effect of Adjusting",
45
+ "text": "Next, we considered the effect of varying the distillation coefficient, .\nWhile previous work assumed a [13 ###reference_13###], we conjectured that adjusting may be beneficial for model training due to varying the contribution of the distillation loss to the overall loss function during learning rate decay.\nSince we are using a different distillation loss than LGD, we first calibrated the parameter between LGD and sssd.\nFirst, we reproduced the original experiments by setting in LGD with the FRCN detector and R50 backbone; the mean contribution of the penalized distillation loss to the total loss () was 45% after iterations.\nWe computed a in the domain of using binary search that yielded a mean after iterations, which led to an equivalent of for sssd.\nTo explore the impact of adjusting , we considered for LGD and for sssd.\nThe at iteration was similar across the two architectures (Table 3 ###reference_###). Interestingly, the final was close to for both LGD and sssd regardless of the .\nWe compared the performance between LGD and sssd after calibrating to be in a comparable range (Fig. 2 ###reference_### and Table 4 ###reference_###).\nThe top performing for sssd () consistently outperformed the top performing LGD configuration () in all AP measures besides AP50; when considering all , sssd compares favorably to LGD among most of the AP variants, including up to a ( versus ) improvement in AP75 (Table 4 ###reference_###).\nThe top performing sssd also maintains an advantage over LGD from iterations to (Fig. 2 ###reference_###).\n###figure_2###"
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "Stepwise Distillation",
51
+ "text": "Finally, we evaluated the effectiveness of stepwise distillation in both LGD and sssd using a fixed architecture (FRCN-R50) over the final iterations (Fig. 3 ###reference_###).\nWe tested LGD and sssd since these were the best performing for this architecture (Table 4 ###reference_###).\nAdditionally, we tested a slightly increased LGD and sssd .\nWe compared these static settings with stepwise distillation, which switches from to at iteration (in the learning rate scheduler period).\nStepwise distillation improves both LGD and sssd resulting in an approximately improvement in AP over fixed settings (Fig. 3 ###reference_###).\nSince stepwise distillation does not impose additional computational costs and is independent of the architecture, we optimistically believe that stepwise distillation may be beneficial for other knowledge distillation applications.\n###figure_3###"
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion",
57
+ "text": "In this paper, we proposed Smooth and Stepwise Self-Distillation (sssd) for object detection, which can efficiently improve model performance without requiring a large teacher model.\nThrough extensive benchmarking, we demonstrated that sssd achieves improved performance when compared with current SOTA self-distillation approaches for a variety of backbones and detectors. We investigated the effects of varying the distillation coefficient and justified stepwise distillation as a potentially beneficial procedure for improving the performance of knowledge distillation schemes."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {
62
+ "1": {
63
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.1.1\">Table 1</span>: </span>Comparisons with baseline and SOTA methods based on mean average precision (AP).</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T1.3\" style=\"width:433.6pt;height:259.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(111.5pt,-66.7pt) scale(2.05872577108436,2.05872577108436) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S3.T1.3.1.1.1.1\">Detector</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.3.1.1.1.2\">Backbone</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.3.1.1.1.3\">Baseline</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.3.1.1.1.4\">LabelEnc</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.3.1.1.1.5\">LGD</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T1.3.1.1.1.6\">Ours</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.3.1.2.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.3.1.2.1.1.1\">FRCN</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.2.1.2\">R-50</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.2.1.3\">39.6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.2.1.4\">39.6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.2.1.5\">40.4</td>\n<td class=\"ltx_td 
ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.2.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.2.1.6.1\">40.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.3.2.1\">R-101</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.3.2.2\">41.7</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.3.2.3\">41.4</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.3.2.4\">42.2</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.3.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.3.2.5.1\">42.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.3.1.4.3.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.3.1.4.3.1.1\">RetinaNet</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.4.3.2\">R-50</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.4.3.3\">38.8</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.4.3.4\">39.6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.4.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.4.3.5.1\">40.3</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.4.3.6\">40.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.5.4.1\">R-101</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.5.4.2\">40.6</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.5.4.3\">41.5</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.5.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.5.4.4.1\">42.1</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.3.1.5.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.5.4.5.1\">42.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b 
ltx_border_t\" id=\"S3.T1.3.1.6.5.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S3.T1.3.1.6.5.1.1\">FCOS</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.6.5.2\">R-50</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.6.5.3\">41.0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.6.5.4\">41.8</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.6.5.5\">42.3</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.1.6.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.6.5.6.1\">42.4</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.1.7.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.3.1.7.6.1\">R-101</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.3.1.7.6.2\">42.9</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.3.1.7.6.3\">43.6</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.3.1.7.6.4\">44.0</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S3.T1.3.1.7.6.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.1.7.6.5.1\">44.2</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
64
+ "capture": "Table 1: Comparisons with baseline and SOTA methods based on mean average precision (AP)."
65
+ },
66
+ "2": {
67
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.14.1.1\">Table 2</span>: </span>Detailed Comparisons with LGD.</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T2.12\" style=\"width:433.6pt;height:440.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(101.6pt,-103.3pt) scale(1.88267951567132,1.88267951567132) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.12.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T2.12.12.13.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.12.12.13.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.12.12.13.1.2\">AP</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.12.12.13.1.3\">AP50</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.12.12.13.1.4\">AP75</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.12.12.13.1.5\">APs</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.12.12.13.1.6\">APm</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T2.12.12.13.1.7\">APl</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.1.1.1.1\">FRCN-Ours</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.2.1\">40.6</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.1.1.1.3\">61.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.4.1\">44.0</span></td>\n<td 
class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.1.1.1.5\">23.8</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.6.1\">43.9</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.1.1.1.7.1\">53.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.2.2.2.1\">FRCN-LGD</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.2.2.2.2\">40.4</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.2.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.2.2.2.3.1\">61.3</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.2.2.2.4\">43.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.2.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.2.2.2.5.1\">24.0</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.2.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.2.2.2.6.1\">43.9</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.2.2.2.7\">52.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.3.3.3.1\">FRCN-Ours</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.3.3.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.3.2.1\">42.3</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.3.3.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.3.3.1\">62.9</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.3.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.3.4.1\">45.8</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.3.3.3.5\">25.3</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.3.3.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.3.6.1\">45.9</span></td>\n<td class=\"ltx_td 
ltx_align_right ltx_border_t\" id=\"S3.T2.3.3.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.3.3.3.7.1\">56.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.4.4.4.1\">FRCN-LGD</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.4.2\">42.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.4.3\">62.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.4.4\">45.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.4.4.5.1\">25.9</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.4.6\">45.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.4.4.4.7\">56.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.5.5.1\">RetinaNet-Ours</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.5.5.5.2\">40.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.5.5.5.3\">60.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.5.5.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.5.5.4.1\">43.0</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.5.5.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.5.5.5.1\">24.2</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.5.5.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.5.5.6.1\">44.2</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.5.5.5.7\">52.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.6.6.6.1\">RetinaNet-LGD</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.6.6.6.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.6.6.6.2.1\">40.3</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.6.6.6.3\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T2.6.6.6.3.1\">60.1</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.6.6.6.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.6.6.6.4.1\">43.0</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.6.6.6.5\">24.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.6.6.6.6\">44.1</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.6.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.6.6.6.7.1\">52.4</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.7.7.7.1\">RetinaNet-Ours</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.7.7.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.7.7.2.1\">42.1</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.7.7.7.3\">61.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.7.7.7.4\">44.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.7.7.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.7.7.5.1\">26.1</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.7.7.7.6\">46.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.7.7.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.7.7.7.7.1\">55.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.8.8.8.1\">RetinaNet-LGD</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.8.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.8.8.8.2.1\">42.1</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.8.8.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.8.8.8.3.1\">62.1</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.8.8.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.8.8.8.4.1\">45.1</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.8.8.8.5\">24.9</td>\n<td class=\"ltx_td ltx_align_right\" 
id=\"S3.T2.8.8.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.8.8.8.6.1\">46.5</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.8.8.8.7\">55.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.9.9.9.1\">FCOS-Ours</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.9.9.9.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.9.9.9.2.1\">42.4</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.9.9.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.9.9.9.3.1\">61.2</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.9.9.9.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.9.9.9.4.1\">46.0</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.9.9.9.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.9.9.9.5.1\">26.4</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.9.9.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.9.9.9.6.1\">46.1</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.9.9.9.7\">54.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.10.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.10.10.10.1\">FCOS-LGD</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.10.10.10.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.10.10.10.2.1\">42.4</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.10.10.10.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.10.10.10.3.1\">61.2</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.10.10.10.4\">45.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.10.10.10.5\">26.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.10.10.10.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.10.10.10.6.1\">46.1</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T2.10.10.10.7\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T2.10.10.10.7.1\">54.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.11.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.11.11.11.1\">FCOS-Ours</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.11.11.11.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.11.11.11.2.1\">44.2</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.11.11.11.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.11.11.11.3.1\">63.3</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.11.11.11.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.11.11.11.4.1\">47.6</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.11.11.11.5\">27.1</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.11.11.11.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.11.11.11.6.1\">48.3</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T2.11.11.11.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.11.11.11.7.1\">57.5</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T2.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S3.T2.12.12.12.1\">FCOS-LGD</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T2.12.12.12.2\">44.0</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T2.12.12.12.3\">62.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T2.12.12.12.4\">47.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T2.12.12.12.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.12.12.12.5.1\">27.2</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T2.12.12.12.6\">47.9</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T2.12.12.12.7\">56.9</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
68
+ "capture": "Table 2: Detailed Comparisons with LGD."
69
+ },
70
+ "3": {
71
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.14.1.1\">Table 3</span>: </span>Comparisons of with different after iterations .</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T3.12.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.7.1.1\">LGD\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.8.2.2\">LGD\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.9.3.3\">LGD\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.10.4.4\">Ours\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.11.5.5\">Ours\n</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T3.12.6.6\">Ours\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T3.12.7.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T3.12.7.1.1\">0.44</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T3.12.7.1.2\">0.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T3.12.7.1.3\">0.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T3.12.7.1.4\">0.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T3.12.7.1.5\">0.49</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S3.T3.12.7.1.6\">0.47</td>\n</tr>\n</tbody>\n</table>\n</figure>",
72
+ "capture": "Table 3: Comparisons of with different after iterations ."
73
+ },
74
+ "4": {
75
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.10.1.1\">Table 4</span>: </span>Detailed Comparisons with different selections.</figcaption>\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T4.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T4.8.7.1\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.8.7.1.1\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T4.8.7.1.2\">AP</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T4.8.7.1.3\">AP50</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T4.8.7.1.4\">AP75</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T4.8.7.1.5\">APs</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T4.8.7.1.6\">APm</th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S3.T4.8.7.1.7\">APl</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T4.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.3.1.1\">LGD\n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.3.1.2\">40.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.3.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.3.1.3.1\">61.3</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.3.1.4\">43.6</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.3.1.5\">23.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.3.1.6\">43.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.3.1.7\">53.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T4.4.2.1\">LGD\n</th>\n<td class=\"ltx_td 
ltx_align_right\" id=\"S3.T4.4.2.2\">40.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.4.2.3\">60.8</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.4.2.4\">43.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.4.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.4.2.5.1\">23.8</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.4.2.6\">43.3</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.4.2.7\">52.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T4.5.3.1\">LGD\n</th>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.5.3.2\">40.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.5.3.3\">60.9</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.5.3.4\">43.6</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.5.3.5\">23.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.5.3.6\">43.5</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.5.3.7\">53.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T4.6.4.1\">Ours\n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.6.4.2\">40.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.6.4.3\">61.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.6.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.6.4.4.1\">44.1</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.6.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.6.4.5.1\">23.8</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.6.4.6\">43.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T4.6.4.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.6.4.7.1\">53.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T4.7.5.1\">Ours\n</th>\n<td class=\"ltx_td ltx_align_right\" 
id=\"S3.T4.7.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.7.5.2.1\">40.6</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.7.5.3\">61.2</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.7.5.4\">\u00a044.0</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.7.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.7.5.5.1\">23.8</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.7.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.7.5.6.1\">43.9</span></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T4.7.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T4.7.5.7.1\">53.2</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T4.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S3.T4.8.6.1\">Ours\n</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T4.8.6.2\">40.2</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T4.8.6.3\">60.7</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T4.8.6.4\">43.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T4.8.6.5\">23.5</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T4.8.6.6\">43.4</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S3.T4.8.6.7\">52.6</td>\n</tr>\n</tbody>\n</table>\n</figure>",
76
+ "capture": "Table 4: Detailed Comparisons with different selections."
77
+ }
78
+ },
79
+ "image_paths": {
80
+ "1": {
81
+ "figure_path": "2303.05015v2_figure_1.png",
82
+ "caption": "Fig. 1: Smooth and Stepwise Self-Distillation (sssd). The feature maps (\ud835\udc72\ud835\udc72\\bm{K}bold_italic_K) extracted from the backbone (ResNet-50) are sent to the fusion component along with the ground truth annotations.\nThe distillation loss (Ld\u2062i\u2062s\u2062t\u2062i\u2062l\u2062lsubscript\ud835\udc3f\ud835\udc51\ud835\udc56\ud835\udc60\ud835\udc61\ud835\udc56\ud835\udc59\ud835\udc59L_{distill}italic_L start_POSTSUBSCRIPT italic_d italic_i italic_s italic_t italic_i italic_l italic_l end_POSTSUBSCRIPT) is calculated using the feature maps and label enhanced feature maps (\ud835\udc72\ud835\udc86subscript\ud835\udc72\ud835\udc86\\bm{K_{e}}bold_italic_K start_POSTSUBSCRIPT bold_italic_e end_POSTSUBSCRIPT).\nThe detection loss (Ld\u2062e\u2062tsubscript\ud835\udc3f\ud835\udc51\ud835\udc52\ud835\udc61L_{det}italic_L start_POSTSUBSCRIPT italic_d italic_e italic_t end_POSTSUBSCRIPT) is calculated as classification and bounding-box regression losses by a shared detection head.",
83
+ "url": "http://arxiv.org/html/2303.05015v2/x1.png"
84
+ },
85
+ "2": {
86
+ "figure_path": "2303.05015v2_figure_2.png",
87
+ "caption": "Fig. 2: Performance comparison with different \u03bb\ud835\udf06\\lambdaitalic_\u03bb. After calibrating the distillation loss, the AP for sssd with \u03bb=75\ud835\udf0675\\lambda=75italic_\u03bb = 75 (Ours7575{}_{75}start_FLOATSUBSCRIPT 75 end_FLOATSUBSCRIPT) is higher than LGD configurations. The learning rates for each architecture are 00 after iteration 17\u00d710417superscript10417\\times 10^{4}17 \u00d7 10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT.",
88
+ "url": "http://arxiv.org/html/2303.05015v2/x2.png"
89
+ },
90
+ "3": {
91
+ "figure_path": "2303.05015v2_figure_3.png",
92
+ "caption": "Fig. 3: Stepwise self-distillation comparisons.\nThe stepwise self-distillation strategy for both LGD and sssd (Ours) improves final AP over a fixed \u03bb\ud835\udf06\\lambdaitalic_\u03bb.",
93
+ "url": "http://arxiv.org/html/2303.05015v2/x3.png"
94
+ }
95
+ },
96
+ "validation": true,
97
+ "references": [
98
+ {
99
+ "1": {
100
+ "title": "\u201cDistilling the Knowledge in a Neural Network,\u201d 2015.",
101
+ "author": "Geoffrey Hinton et al.,",
102
+ "venue": null,
103
+ "url": null
104
+ }
105
+ },
106
+ {
107
+ "2": {
108
+ "title": "\u201cModel Compression,\u201d",
109
+ "author": "Cristian Bucilua et al.,",
110
+ "venue": "in SIGKDD, 2006.",
111
+ "url": null
112
+ }
113
+ },
114
+ {
115
+ "3": {
116
+ "title": "\u201cDetecting Vehicles on the Edge: Knowledge Distillation To Improve\nPerformance in Heterogeneous Road Traffic,\u201d",
117
+ "author": "Manoj Bharadhwaj et al.,",
118
+ "venue": "in CVPR, 2022.",
119
+ "url": null
120
+ }
121
+ },
122
+ {
123
+ "4": {
124
+ "title": "\u201cDomain adaptive knowledge distillation for driving scene semantic\nsegmentation,\u201d",
125
+ "author": "Divya Kothandaraman et al.,",
126
+ "venue": "in WACV, 2021.",
127
+ "url": null
128
+ }
129
+ },
130
+ {
131
+ "5": {
132
+ "title": "\u201cKnowledge Distillation for Object Detection via Rank Mimicking and\nPrediction-Guided Feature Imitation,\u201d",
133
+ "author": "Gang Li et al.,",
134
+ "venue": "AAAI, 2022.",
135
+ "url": null
136
+ }
137
+ },
138
+ {
139
+ "6": {
140
+ "title": "\u201cInstance-Conditional Knowledge Distillation for Object\nDetection,\u201d",
141
+ "author": "Zijian Kang et al.,",
142
+ "venue": "in NeurIPS. 2021, Curran Associates, Inc.",
143
+ "url": null
144
+ }
145
+ },
146
+ {
147
+ "7": {
148
+ "title": "\u201cImprove Object Detection with Feature-based Knowledge\nDistillation: Towards Accurate and Efficient Detectors,\u201d",
149
+ "author": "Linfeng Zhang and Kaisheng Ma,",
150
+ "venue": "in ICLR, 2021.",
151
+ "url": null
152
+ }
153
+ },
154
+ {
155
+ "8": {
156
+ "title": "\u201cProper Reuse of Image Classification Features Improves Object\nDetection,\u201d",
157
+ "author": "Cristina Vasconcelos et al.,",
158
+ "venue": "in CVPR, 2022.",
159
+ "url": null
160
+ }
161
+ },
162
+ {
163
+ "9": {
164
+ "title": "\u201cG-DetKD: Towards general distillation framework for object\ndetectors via contrastive and semantic-guided feature imitation,\u201d",
165
+ "author": "Lewei Yao et al.,",
166
+ "venue": "in ICCV, 2021.",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "10": {
172
+ "title": "\u201cOnline knowledge distillation via collaborative learning,\u201d",
173
+ "author": "Qiushan Guo et al.,",
174
+ "venue": "in CVPR, 2020.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "11": {
180
+ "title": "\u201cWhen does label smoothing help?,\u201d",
181
+ "author": "Rafael M\u00fcller et al.,",
182
+ "venue": "NeurIPS, 2019.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "12": {
188
+ "title": "\u201cAdaptive regularization of labels,\u201d",
189
+ "author": "Qianggang Ding et al.,",
190
+ "venue": "arXiv, 2019.",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "13": {
196
+ "title": "\u201cLGD: Label-guided Self-distillation for Object Detection,\u201d",
197
+ "author": "Peizhen Zhang et al.,",
198
+ "venue": "in AAAI, 2022.",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "14": {
204
+ "title": "\u201cLabelEnc: A New Intermediate Supervision Method for Object\nDetection,\u201d",
205
+ "author": "Miao Hao et al.,",
206
+ "venue": "in ECCV, 2020.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "15": {
212
+ "title": "\u201cComparing Kullback-Leibler divergence and mean squared error loss\nin knowledge distillation,\u201d",
213
+ "author": "Taehyeon Kim et al.,",
214
+ "venue": "arXiv, 2021.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "16": {
220
+ "title": "\u201c: A novel information-theoretic loss function\nfor training deep nets robust to label noise,\u201d",
221
+ "author": "Yilun Xu et al.,",
222
+ "venue": "NeurIPS, 2019.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "17": {
228
+ "title": "\u201cWhen optimizing -divergence is robust with label noise,\u201d",
229
+ "author": "Jiaheng Wei and Yang Liu,",
230
+ "venue": "arXiv, 2020.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "18": {
236
+ "title": "\u201cGeneralized Jensen-Shannon divergence loss for learning with noisy\nlabels,\u201d",
237
+ "author": "Erik Englesson et al.,",
238
+ "venue": "NeurIPS, 2021.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "19": {
244
+ "title": "\u201cSpatial transformer networks,\u201d",
245
+ "author": "Max Jaderberg et al.,",
246
+ "venue": "in NeurIPS, 2015.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "20": {
252
+ "title": "\u201cDeep Mutual Learning,\u201d",
253
+ "author": "Y. Zhang et al.,",
254
+ "venue": "in CVPR, 2018.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "21": {
260
+ "title": "\u201cTraining deep neural networks in generations: A more tolerant\nteacher educates better students,\u201d",
261
+ "author": "Chenglin Yang et al.,",
262
+ "venue": "AAAI, 2019.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "22": {
268
+ "title": "\u201cMiniLM: Deep Self-Attention Distillation for Task-Agnostic\nCompression of Pre-Trained Transformers,\u201d",
269
+ "author": "Wenhui Wang et al.,",
270
+ "venue": "in NeurIPS, 2020.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "23": {
276
+ "title": "\u201cMiniLMv2: Multi-Head Self-Attention Relation Distillation for\nCompressing Pretrained Transformers,\u201d",
277
+ "author": "Wenhui Wang et al.,",
278
+ "venue": "in ACL 2021.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "24": {
284
+ "title": "\u201cA tight parallel repetition theorem for partially simulatable\ninteractive arguments via smooth KL-divergence,\u201d",
285
+ "author": "Itay Berman et al.,",
286
+ "venue": "in Crypto, 2020.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "25": {
292
+ "title": "\u201cDivergence measures based on the Shannon entropy,\u201d",
293
+ "author": "Jianhua Lin,",
294
+ "venue": "IEEE Transactions on Information theory, 1991.",
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "26": {
300
+ "title": "\u201cSmooth Loss Functions for Deep Top-k Classification,\u201d",
301
+ "author": "Leonard Berrada et al.,",
302
+ "venue": "in ICLR, 2018.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "27": {
308
+ "title": "\u201cLoss Functions for Top-k Error: Analysis and Insights,\u201d",
309
+ "author": "Maksim Lapin et al.,",
310
+ "venue": "CVPR, pp. 1468\u20131477, 2016.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "28": {
316
+ "title": "\u201cLabelEnc Software,\u201d\nhttps://github.com/megvii-model/LabelEnc, 2020.",
317
+ "author": "Miao Hao et al.,",
318
+ "venue": null,
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "29": {
324
+ "title": "\u201cLGD Software,\u201d https://github.com/megvii-research/LGD,\n2022.",
325
+ "author": "Peizhen Zhang et al.,",
326
+ "venue": null,
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "30": {
332
+ "title": "\u201cFaster R-CNN: Towards Real-Time Object Detection with Region\nProposal Networks,\u201d",
333
+ "author": "Shaoqing Ren et al.,",
334
+ "venue": "in NeurIPS, 2015.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "31": {
340
+ "title": "\u201cFCOS: Fully Convolutional One-Stage Object Detection,\u201d",
341
+ "author": "Zhi Tian et al.,",
342
+ "venue": "in ICCV, 2019.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "32": {
348
+ "title": "\u201cFocal Loss for Dense Object Detection,\u201d",
349
+ "author": "Tsung-Yi Lin et al.,",
350
+ "venue": "TPAMI, 2018.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "33": {
356
+ "title": "\u201cMicrosoft COCO: Common Objects in Context,\u201d 2014.",
357
+ "author": "Tsung-Yi Lin et al.,",
358
+ "venue": null,
359
+ "url": null
360
+ }
361
+ }
362
+ ],
363
+ "url": "http://arxiv.org/html/2303.05015v2"
364
+ }
20240119/2304.11171v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2305.01120v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2305.03077v2.json ADDED
@@ -0,0 +1,44 @@
1
+ {
2
+ "title": "Explaining dark matter halo density profiles with neural networks",
3
+ "abstract": "We use explainable neural networks to connect the evolutionary history of dark matter halos with their density profiles. The network captures independent factors of variation in the density profiles within a low-dimensional representation, which we physically interpret using mutual information. Without any prior knowledge of the halos\u2019 evolution, the network recovers the known relation between the early time assembly and the inner profile, and discovers that the profile beyond the virial radius is described by a single parameter capturing the most recent mass accretion rate. The results illustrate the potential for machine-assisted scientific discovery in complicated astrophysical datasets.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Supplemental Material",
9
+ "text": ""
10
+ }
11
+ ],
12
+ "appendix": [],
13
+ "tables": {},
14
+ "image_paths": {
15
+ "1": {
16
+ "figure_path": "2305.03077v2_figure_1.png",
17
+ "caption": "Figure 1: A neural network is trained to discover the underlying degrees of freedom in halo density profiles in the form of a latent representation, when presented with the full 3D density structure of a halo. We physically interpret the discovered representation by measuring the MI between the latent parameters and the assembly history of the halos.",
18
+ "url": "http://arxiv.org/html/2305.03077v2/x1.png"
19
+ },
20
+ "2": {
21
+ "figure_path": "2305.03077v2_figure_2.png",
22
+ "caption": "Figure 2: The MI between the latent parameters and the ground-truth halo profiles \u03c1true\u2062(r)subscript\ud835\udf0ctrue\ud835\udc5f\\rho_{\\mathrm{true}}(r)italic_\u03c1 start_POSTSUBSCRIPT roman_true end_POSTSUBSCRIPT ( italic_r ) for the IVEinfallinfall{}_{\\mathrm{infall}}start_FLOATSUBSCRIPT roman_infall end_FLOATSUBSCRIPT (top) and the IVEvirialvirial{}_{\\mathrm{virial}}start_FLOATSUBSCRIPT roman_virial end_FLOATSUBSCRIPT (bottom) models. In the IVEvirialvirial{}_{\\mathrm{virial}}start_FLOATSUBSCRIPT roman_virial end_FLOATSUBSCRIPT case, we also show MI with the NFW concentration. (For clarity we do not show the IVEvirialvirial{}_{\\mathrm{virial}}start_FLOATSUBSCRIPT roman_virial end_FLOATSUBSCRIPT normalization latent, since it behaves identically to the IVEinfallinfall{}_{\\mathrm{infall}}start_FLOATSUBSCRIPT roman_infall end_FLOATSUBSCRIPT normalization latent.)",
23
+ "url": "http://arxiv.org/html/2305.03077v2/x2.png"
24
+ },
25
+ "3": {
26
+ "figure_path": "2305.03077v2_figure_3.png",
27
+ "caption": "Figure 3: The MI between the latent parameters and the mass accretion histories (denoted MIM\u2062(z)\ud835\udc40\ud835\udc67{}_{M(z)}start_FLOATSUBSCRIPT italic_M ( italic_z ) end_FLOATSUBSCRIPT; top row), and that between the latent parameters and the mass accretion rate (denoted MId\u2062M\u2062(z)/d\u2062zd\ud835\udc40\ud835\udc67d\ud835\udc67{}_{\\mathrm{d}M(z)/\\mathrm{d}z}start_FLOATSUBSCRIPT roman_d italic_M ( italic_z ) / roman_d italic_z end_FLOATSUBSCRIPT; bottom row). The inner shape latent and the NFW concentration carry memory of the early-time mass assembly history, as well as the later-time mass accretion rate. The outer shape latent carries information about the halos\u2019 most recent mass accretion rate over the past dynamical time (indicated by the arrow).",
28
+ "url": "http://arxiv.org/html/2305.03077v2/x3.png"
29
+ },
30
+ "4": {
31
+ "figure_path": "2305.03077v2_figure_4.png",
32
+ "caption": "Figure S.1: Mean and 90%percent\\%% confidence interval of the residuals log\u2061[\u03c1predicted/\u03c1true]subscript\ud835\udf0cpredictedsubscript\ud835\udf0ctrue\\log[\\rho_{\\rm predicted}/\\rho_{\\rm true}]roman_log [ italic_\u03c1 start_POSTSUBSCRIPT roman_predicted end_POSTSUBSCRIPT / italic_\u03c1 start_POSTSUBSCRIPT roman_true end_POSTSUBSCRIPT ] of the IVEvirialvirial{}_{\\mathrm{virial}}start_FLOATSUBSCRIPT roman_virial end_FLOATSUBSCRIPT and IVEinfallinfall{}_{\\mathrm{infall}}start_FLOATSUBSCRIPT roman_infall end_FLOATSUBSCRIPT models, as a function of reffsubscript\ud835\udc5feffr_{\\rm eff}italic_r start_POSTSUBSCRIPT roman_eff end_POSTSUBSCRIPT defined as the median radius in each bin. The grey band shows the NFW residuals.",
33
+ "url": "http://arxiv.org/html/2305.03077v2/x4.png"
34
+ },
35
+ "5": {
36
+ "figure_path": "2305.03077v2_figure_5.png",
37
+ "caption": "Figure S.2: The MI between the densities on two fixed radial scales, \u03c1\u2062(r1)\ud835\udf0csubscript\ud835\udc5f1\\rho(r_{1})italic_\u03c1 ( italic_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ) and \u03c1\u2062(r2)\ud835\udf0csubscript\ud835\udc5f2\\rho(r_{2})italic_\u03c1 ( italic_r start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT ), and (i) the mass accretion histories (top row) and (ii) the mass accretion rate (bottom row). The two radial scales, r1subscript\ud835\udc5f1r_{1}italic_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and r2subscript\ud835\udc5f2r_{2}italic_r start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT, are the locations at which the MI between the ground-truth profile and the inner shape latent peaks (Fig 2, bottom panel).",
38
+ "url": "http://arxiv.org/html/2305.03077v2/x5.png"
39
+ }
40
+ },
41
+ "validation": true,
42
+ "references": [],
43
+ "url": "http://arxiv.org/html/2305.03077v2"
44
+ }
20240119/2305.11834v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2305.12997v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2305.13310v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2305.14402v3.json ADDED
@@ -0,0 +1,280 @@
1
+ {
2
+ "title": "Enhancing Speech Emotion Recognition Through Differentiable Architecture Search",
3
+ "abstract": "Speech Emotion Recognition (SER) is a critical enabler of emotion-aware communication in human-computer interactions. Recent advancements in Deep Learning (DL) have substantially enhanced the performance of SER models through increased model complexity. However, designing optimal DL architectures requires prior experience and experimental evaluations. Encouragingly, Neural Architecture Search (NAS) offers a promising avenue to determine an optimal DL model automatically. In particular, Differentiable Architecture Search (DARTS) is an efficient method of using NAS to search for optimised models. This paper proposes a DARTS-optimised joint CNN and LSTM architecture, to improve SER performance, where the literature informs the selection of CNN and LSTM coupling to offer improved performance. While DARTS has previously been applied to CNN and LSTM combinations, our approach introduces a novel mechanism, particularly in selecting CNN operations using DARTS. In contrast to previous studies, we refrain from imposing constraints on the order of the layers for the CNN within the DARTS cell; instead, we allow DARTS to determine the optimal layer order autonomously. Experimenting with the IEMOCAP and MSP-IMPROV datasets, we demonstrate that our proposed methodology achieves significantly higher SER accuracy than hand-engineering the CNN-LSTM configuration. It also outperforms the best-reported SER results achieved using DARTS on CNN-LSTM.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Recognising the emotion embedded in speech is a crucial but challenging problem. Encouragingly, Speech Emotion Recognition (SER) research has made significant progress in the last decade utilising the steep rise of deep learning [1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###]. Deep learning offers the ability to learn features automatically; however, finding the right deep-learning architecture for SER is challenging as the model needs to be gradually modified and trained recursively until the best configuration is found. This can be prohibitively time-consuming, given the time needed to train and test numerous configurations.\nAn alternative to the conventional approach is \u201cneural architecture search\u201d (NAS), which can help discover the optimal neural network for a given task. In NAS, the search is conducted over a discrete set of candidate operations. This requires the model to be trained on a specific configuration before moving on to the next configuration. This is, however, time-consuming [5 ###reference_5###]. The differentiable architecture search (DARTS) [6 ###reference_6###] found a way of relaxing the discrete set of candidate operations, allowing the search space to be continuous. Researchers show that DARTS can decrease the computation time of 2000 GPU days of reinforcement learning or 3150 GPU days of evolution to 2\u20133 GPU days [6 ###reference_6###, 7 ###reference_7###]. This motivates us to focus on DARTS in this paper.\nFurthermore, previous studies have shown that a multi-temporal CNN stacked on LSTM offers to capture contextual information at multiple temporal resolutions, complementing LSTM for modelling long-term contextual information, thus offering improved performance [8 ###reference_8###, 9 ###reference_9###, 10 ###reference_10###, 11 ###reference_11###]. Therefore, this paper aims to utilise DARTS for a joint CNN LSTM configuration. 
Figure 1 ###reference_### shows the overview of our proposed architecture.\nIn the literature, researchers have used DARTS in SER tasks [12 ###reference_12###, 13 ###reference_13###]; however, several manual processes are involved in their methods. For example, the most relevant study to our proposal is [13 ###reference_13###], where authors predefined the order of several layers, structures and operations from where DARTS make the optimum selection of parameters for those layers. In contrast, our proposed methodology offers a novel mechanism that minimises the need to predefine the layers, offering improved autonomy. The key contributions of this paper are summarised as follows:\nWe propose a novel DARTS-optimised joint CNN and LSTM architecture showcasing the feasibility of using network architecture search methods for SER.\nUnlike previous research, our approach provides greater autonomy to DARTS in selecting optimal network configurations.\nExperimental results conducted with IEMOCAP and MSP-IMPROV datasets validate that the SER model obtained through our approach surpasses the performance achieved in prior studies."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": "In this section, we primarily discuss the application of Neural Architecture Search and DARTS for speech emotion recognition.\nOur findings highlight a shortage of studies, emphasising the necessity for additional exploration of NAS and DARTS in the field of SER.\nThe initial paper proposing the application of NAS in SER employs a controller network that shapes the architecture by the number of layers and nodes per layer and the hyperparameter activation function of a child network by reinforcement learning [14 ###reference_14###]. The authors show a competitive improvement over human-designed architectures.\nEmotionNAS is a two-branch NAS strategy introduced by Sun H. et al. [12 ###reference_12###] in 2022. The authors use DARTS to optimise the two models in the two branches separately, the CNN model and RNN model, which use a spectrogram and a waveform as inputs, respectively. They obtained an unweighted accuracy of from the combined model for the IEMOCAP dataset. They also report the performance of in the spectrogram branch, which only uses a CNN based layer architecture.\nThe key distinction between our approach and EmotionNAS is that we use an LSTM layer coupled in series with the CNN layer as in Figure 1 ###reference_###. In contrast, EmotionNAS uses an RNN layer parallel to the CNN layer in a different branch. Moreover, we integrate the attention in LSTM to enhance the overall performance of the CNN-LSTM coupling for SER further.\nWu X. et al. [13 ###reference_13###] used SER as their DARTS application and proposed a uniform path dropout strategy to optimise candidate architecture. They used the IEMOCAP dataset to develop an SER model with an accuracy of for a four-class classification problem using discrete Fourier transform spectrograms extracted from audio as input.\nHere, authors apply DARTS on CNN and LSTM with attention for SER. 
This is a preliminary study where authors predefine several configurations of layers, structures and operations and let DARTS select only from those configurations. We aim to extend this study by allowing DARTS to select the best network architecture without providing any predefined configurations.\nLiu et al. [15 ###reference_15###] utilised an attention-based bi-directional LSTM followed by a CNN layer for a SER problem. They have achieved a significant performance of for the IEMOCAP Dataset. Their idea of \u2018CNN - LSTM attention\u2019 paved the foundation for our model architecture."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III DARTS framework for CNN and LSTM coupling",
21
+ "text": "The proposed methodology applies DARTS to enhance SER accuracy using a CNN LSTM network, influenced by research demonstrating improved performance when combining CNN and LSTM layers [8 ###reference_8###, 10 ###reference_10###, 11 ###reference_11###, 15 ###reference_15###]. DARTS is first applied to CNN, which is then fused with LSTM. The LSTM layer follows the DARTS-optimised CNN, facilitating joint training for loss minimisation (Figure 1 ###reference_###).\n###figure_1### DARTS uses a differentiable approach to network optimisation. The building block of the DARTS algorithm is a computation cell. It seeks to optimise the cell to gain maximum performance from the architecture. A DARTS cell is modelled as a directed graph where each node is a representation, and the edge is an operation that can be applied to a representation.\nTo use the DARTS cell in a CNN network, we model the node as a feature map and an edge for an operation. One speciality of this graph is that each node connects by an edge with all its preceding nodes as in Figure 2 ###reference_### (a). If the output of the node is and operation on the edge connecting the nodes and is ,\n can be obtained by the Equation 1 ###reference_###:\nInitially, the candidate search space is created by connecting each node of the DARTS cell with a set of operations as shown in Figure 2 ###reference_### (b). A weight parameter \u2018\u2019 is introduced to Equation 1 ###reference_### to find the optimum edge (operation) between two nodes, and , out of the candidate search space of all the operations. The output from the node can be expressed as in Equation 2 ###reference_###.\nThen the continuous relaxation of the search space updates the weights () of the edges. 
The final architecture can be obtained by selecting the operation between two nodes with the highest weight by using Equation 3 ###reference_###.\nSearched discrete cell architecture can be shown in Figure 2 ###reference_### (d).\n###figure_2### The number of cells () in a model is a\nparameter to the DARTS algorithm which defines how many DARTS cells are stacked to create the model. Each cell uses the output from the last two cells as the input. If output from each cell is and the function inside the cell is , can be expressed as;\nDARTS uses two types of CNN cells, namely \u2018normal\u2019 and \u2018reduction\u2019 cells.\nIt sets the stride as 1 in normal cells and 2 in reduction cells, so the output is down-sampled in reduction cells. This down-sampling enables the model to remove the redundancy of intermediate features and reduce the complexity."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "IV Experimental Setup",
27
+ "text": ""
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "IV-A Dataset and Feature Selection",
33
+ "text": "We use the widely used IEMOCAP [16 ###reference_16###] and MSP-IMPROV [17 ###reference_17###] datasets for our experiments.\nOur study uses the improvised subset of IEMOCAP and the four categorical labels, happiness, sadness, anger, and neutral as classes from the datasets.\nWe use five-fold cross-validation with at least one speaker out in our training and evaluations. At each fold, the training dataset is divided into two subsets, \u2018search\u2019, and \u2018training\u2019, by a fraction. The \u2018search\u2019 set is used in architecture search; the \u2018training\u2019 set is used in optimising the searched architecture, and the remaining testing dataset is used to infer and obtain the testing performance of the searched and optimised model. This way, we manage to split the dataset into three sets in each cross-validation session. Also, this allows the utterances in each split to be mutually exclusive.\nIn this paper, we use Mel Frequency Cepstral Coefficients (MFCC) as input features to the model. MFCC has been used as the input feature in many SER studies in the literature [18 ###reference_18###, 19 ###reference_19###] and has obtained promising results. We extract MFCCs from each -second audio utterance from the dataset. If the audio utterance length is less than seconds, we added padding with zeros while the lengthier utterances are truncated. The MFCC extraction from Librosa python library [20 ###reference_20###] outputs a shape , downsampled with max pooling to create a spectrogram of the shape ."
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "IV-B Baseline Models",
39
+ "text": "We compare the performance of our methodology with three hand-engineered baseline models: 1) CNN, 2) CNN+ LSTM, and 3) CNN+LSTM with attention. The CNN baseline model consists of one CNN layer (kernel size=2, stride=2, and padding=2) followed by a Max-Pooling layer (kernel size=2 and stride=2). Two dense layers then consume the output from the Max-Pooling layer after applying a dropout of . Finally, the last dense layer has four output units depicting the four emotion classes, and the model outputs the probability estimation of each emotion for a given input by a Softmax function. This model architecture is inspired by the CNN+LSTM SER model by Etienne C. et al. [21 ###reference_21###].\nThe CNN+LSTM baseline model is built, including an additional bi-directional LSTM layer of 128 units after the Max-Pooling layer. An attention layer is added to the LSTM layer in the \u2018CNN+LSTM attention\u2019 baseline model."
40
+ },
41
+ {
42
+ "section_id": "4.3",
43
+ "parent_section_id": "4",
44
+ "section_name": "IV-C DARTS Configuration",
45
+ "text": "The DARTS cell search space consists of pooling operations such as max pooling and average pooling, convolutional operations such as and separable convolutions, and dilated convolution, identity connections, and no connections.\nThe stochastic gradient descent is used with a learning rate using a cosine annealing schedule as the optimiser to optimise the weights of the operations.\nThe search is run for epochs.\nIn our experiments, we set , setting four DARTS cells.\nAs defined in [6 ###reference_6###], we use reduction cells at every and position of the layers. We randomly initialise values and the DARTS search algorithm optimises values related to each operation."
46
+ },
47
+ {
48
+ "section_id": "4.4",
49
+ "parent_section_id": "4",
50
+ "section_name": "IV-D Model Configuration",
51
+ "text": "In this study, the output from the CNN component is passed through to the LSTM layer with 256 units as a vector after flattening the 2D matrix from CNN.\nAttention is introduced into the LSTM layer by combining the attention module with the output of the LSTM layer.\nWe use the popular deep learning library PyTorch for model development and training. The experiments are run on a NVIDIA A40-24Q GPU with 24GB of VRAM. The implementation code can be found on GitHub Repository111https://github.com/jayaneetha/NAS-for-SER"
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Evaluation",
57
+ "text": "###figure_3### We report our results using Unweighted Accuracy (UA), obtained by dividing the sum of the per-class recalls by the number of classes. UA is known to reflect performance on imbalanced data well and is the usual metric in SER [21 ###reference_21###].\nWe also measure model complexity as the total number of trainable parameters in the built model.\nFirst, we compare the performance of the DARTS-generated CNN model (CNN \u2013 DARTS) with our benchmark hand-engineered CNN model (CNN \u2013 HE). The results are presented in Table I ###reference_###, showing that the DARTS-generated CNN model outperforms the hand-engineered SER model. The table also shows the performance of the DARTS-generated model with eight cells: it performs worse than the one with four cells. This is due to the increased complexity of the model, which leads to overfitting [12 ###reference_12###].\nIn Table I ###reference_###, we also compare the performance of our model with the results of EmotionNAS [12 ###reference_12###]. The authors of EmotionNAS implemented NAS in two branches, a CNN branch and an RNN branch, whose inputs are the spectrogram and the waveform, respectively.\nWe present the performance of the combined CNN + RNN model and of the CNN branch alone in Table I ###reference_###. CNN \u2013 DARTS outperforms both.\nComparing parameter counts and accuracy, CNN \u2013 DARTS achieves 69.36% UA with only 417\u2009612 parameters, whereas the EmotionNAS model contains 2\u2009370\u2009000 parameters. The CNN \u2013 DARTS model therefore not only outperforms EmotionNAS but is also considerably smaller, which eases model maintenance. We also report weighted accuracy (WA) in Table I ###reference_###, as it was reported in the EmotionNAS paper. As with UA, CNN \u2013 DARTS outperforms EmotionNAS on WA.\nWe now compare the performance of the CNN model generated by DARTS (CNN \u2013 DARTS), the CNN+LSTM model generated by DARTS (CNN+LSTM \u2013 DARTS), the CNN+LSTM with attention model generated by DARTS (CNN+LSTM att. \u2013 DARTS), and the hand-engineered CNN+LSTM (CNN+LSTM \u2013 HE). We perform these comparisons on both the IEMOCAP and MSP-IMPROV datasets and present the results in Figure 3 ###reference_### (a) and Figure 3 ###reference_### (b), respectively. We note that for both datasets, (i) \u2018CNN+LSTM att. \u2013 DARTS\u2019 performs best compared to CNN \u2013 DARTS and CNN+LSTM \u2013 DARTS; (ii) the DARTS-optimised models perform significantly better than the hand-engineered models; and (iii) DARTS can produce higher-accuracy models with fewer parameters than hand-engineered models.\nIn addition, \u2018CNN+LSTM att. \u2013 DARTS\u2019 produces better results than Wu et al. [13 ###reference_13###], who obtained 56.28% UA with a CNN+RNN attention model on the IEMOCAP dataset.\nComparing the number of parameters in the hand-engineered and DARTS-generated models, the hand-engineered model has significantly fewer parameters, because it contains only one Convolution 2D layer and one Max Pooling layer, whereas a DARTS cell has at least four layers and therefore more parameters. However, as supported by the literature, we conjecture that the improved performance of the proposed \u2018CNN+LSTM att. \u2013 DARTS\u2019 model is not due to the increased number of parameters [22 ###reference_22###]; the improvement is achieved by allowing DARTS to select an optimal network. For example, Figure 4 ###reference_### shows the searched normal and reduction cell structures for the CNN+LSTM att. \u2013 DARTS model. Note that the normal cell uses pooling layers as its initial layers for optimal performance, which deviates from the usual practice of placing a convolution layer first.\n###figure_4### ###figure_5###"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusions",
+ "text": "This study aimed to assess the viability of using the DARTS algorithm to optimise a neural architecture for a joint CNN and LSTM-based SER model. The approach involved augmenting a DARTS-optimised CNN with an LSTM component and jointly training the model to minimise the SER loss. The research findings demonstrate the effectiveness of DARTS in optimising neural architectures, surpassing hand-engineered models. Notably, the study also reveals that as model complexity increases, SER performance decreases, highlighting the superiority of simplified models and emphasising the continued relevance of DARTS in architectural refinement and simplification for SER models. This research contributes valuable insights into the optimisation of neural architectures for SER applications."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Performance comparison between the DARTS generated CNN model (CNN \u2013 DARTS) and hand-engineered benchmark model (CNN \u2013 HE) for the IEMOCAP dataset.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.6.7.1\">\n<th class=\"ltx_td ltx_align_justify ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.7.1.1\" style=\"width:93.2pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S5.T1.6.7.1.1.1\">Model</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.7.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.7.1.2.1\">Param.</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.7.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.7.1.3.1\">Cell</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.7.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.7.1.4.1\">UA (%)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.7.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.6.7.1.5.1\">WA (%)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"S5.T1.2.2.3\" style=\"width:93.2pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T1.2.2.3.1\">CNN \u2013 DARTS</p>\n</td>\n<td class=\"ltx_td ltx_align_right 
ltx_border_t\" id=\"S5.T1.2.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.2.4.1\">417\u2009612</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S5.T1.2.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.1.1.1.1\">69.36 3.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T1.4.4.3\" style=\"width:93.2pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T1.4.4.3.1\">CNN \u2013 DARTS</p>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">428\u2009812</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.3.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T1.6.6.3\" style=\"width:93.2pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T1.6.6.3.1\">CNN \u2013 HE</p>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.6.6.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">35\u2009017</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.6.6.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T1.6.6.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.8.1\">\n<td class=\"ltx_td ltx_align_justify\" id=\"S5.T1.6.8.1.1\" style=\"width:93.2pt;padding-top:1.5pt;padding-bottom:1.5pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S5.T1.6.8.1.1.1\">EmotionNAS [CNN]</p>\n</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.6.8.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">130\u2009000</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S5.T1.6.8.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.8.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">57.3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.8.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">63.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.9.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_b\" id=\"S5.T1.6.9.2.1\" style=\"width:93.2pt;padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_align_top\" id=\"S5.T1.6.9.2.1.1\" style=\"font-size:90%;\">EmotionNAS [CNN + RNN]</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T1.6.9.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">2\u2009370\u2009000</td>\n<td class=\"ltx_td ltx_align_right ltx_border_b\" id=\"S5.T1.6.9.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.9.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">69.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S5.T1.6.9.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">72.1</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE I: Performance comparison between the DARTS generated CNN model (CNN \u2013 DARTS) and hand-engineered benchmark model (CNN \u2013 HE) for the IEMOCAP dataset."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2305.14402v3_figure_1.png",
+ "caption": "Figure 1: The proposed model architecture comprises input features processed through CNN, LSTM, and Dense layers, and utilises DARTS for optimising the CNN component.",
+ "url": "http://arxiv.org/html/2305.14402v3/x1.png"
+ },
+ "2": {
+ "figure_path": "2305.14402v3_figure_2.png",
+ "caption": "Figure 2: DARTS employs steps (a) to (d) to search cell architectures: (a) initialises the graph, (b) forms a search space, (c) updates edge weights, and (d) determines the final cell structure. Edges represent operations, nodes signify representations, with light-coloured edges indicating weaker and dark-coloured edges representing stronger operations.",
+ "url": "http://arxiv.org/html/2305.14402v3/x2.png"
+ },
+ "3": {
+ "figure_path": "2305.14402v3_figure_3.png",
+ "caption": "Figure 3: Comparison of UA% and number of parameters between the DARTS-generated (DARTS) (C=4) and Hand Engineered (HE) CNN, CNN+LSTM, and CNN+LSTM with attention models for (a) IEMOCAP and (b) MSP-IMPROV datasets.",
+ "url": "http://arxiv.org/html/2305.14402v3/x3.png"
+ },
+ "4(a)": {
+ "figure_path": "2305.14402v3_figure_4(a).png",
+ "caption": "Figure 4: DARTS searched t-th cell structure for the Normal Cell (top) and Reduction Cell (bottom) of the CNN+LSTM att. \u2013 DARTS model.",
+ "url": "http://arxiv.org/html/2305.14402v3/x4.png"
+ },
+ "4(b)": {
+ "figure_path": "2305.14402v3_figure_4(b).png",
+ "caption": "Figure 4: DARTS searched t-th cell structure for the Normal Cell (top) and Reduction Cell (bottom) of the CNN+LSTM att. \u2013 DARTS model.",
+ "url": "http://arxiv.org/html/2305.14402v3/x5.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cSpeech emotion recognition using deep 1D & 2D CNN LSTM\nnetworks,\u201d",
+ "author": "Jianfeng Zhao, Xia Mao, and Lijiang Chen,",
+ "venue": "Biomedical Signal Processing and Control, vol. 47, pp.\n312\u2013323, 1 2019.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cEmpirical interpretation of speech emotion perception with\nattention based model for speech emotion recognition,\u201d",
+ "author": "Md Asif Jalal, Rosanna Milner, and Thomas Hain,",
+ "venue": "Proceedings of the Annual Conference of the International Speech\nCommunication Association, INTERSPEECH, vol. 2020-October, pp. 4113\u20134117,\n2020.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cA Review on Speech Emotion Recognition Using Deep Learning and\nAttention Mechanism,\u201d",
+ "author": "Eva Lieskovsk\u00e1, Maro\u0161 Jakubec, Roman Jarina, Michal Chmul\u00edk,\nYuan-Fu Liao, Patrick Bours, and Chiman Kwan,",
+ "venue": "Electronics 2021, Vol. 10, Page 1163, vol. 10, no. 10, pp.\n1163, 5 2021.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cMultitask Learning From Augmented Auxiliary Data for Improving\nSpeech Emotion Recognition,\u201d",
+ "author": "Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, and Bjorn W. Schuller,",
+ "venue": "IEEE Transactions on Affective Computing, pp. 1\u201313, 7 2022.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cA Comprehensive Survey of Neural Architecture Search,\u201d",
+ "author": "Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po Yao Huang, Zhihui Li, Xiaojiang Chen,\nand Xin Wang,",
+ "venue": "ACM Computing Surveys (CSUR), vol. 54, no. 4, pp. 76, 5 2021.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cDARTS: Differentiable Architecture Search,\u201d",
+ "author": "Hanxiao Liu, Karen Simonyan, and Yiming Yang,",
+ "venue": "7th International Conference on Learning Representations, ICLR\n2019, 6 2018.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "\u201cLearning Transferable Architectures for Scalable Image\nRecognition,\u201d",
+ "author": "Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le,",
+ "venue": "Proceedings of the IEEE Computer Society Conference on Computer\nVision and Pattern Recognition, pp. 8697\u20138710, 7 2017.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cTowards temporal modelling of categorical speech emotion\nrecognition,\u201d",
+ "author": "Wenjing Han, Huabin Ruan, Xiaomin Chen, Zhixiang Wang, Haifeng Li, and Bj\u00f6rn\nSchuller,",
+ "venue": "Proceedings of the Annual Conference of the International Speech\nCommunication Association, INTERSPEECH, vol. 2018-September, pp. 932\u2013936,\n2018.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cImage denoising and restoration with CNN-LSTM Encoder Decoder with\nDirect Attention,\u201d",
+ "author": "Kazi Nazmul Haque, Mohammad Abu Yousuf, and Rajib Rana,",
+ "venue": "arXiv Prepr., pp. 1\u201312, 1 2018.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cImproved end-to-end speech emotion recognition using self\nattention mechanism and multitask learning,\u201d",
+ "author": "Yuanchao Li, Tianyu Zhao, and Tatsuya Kawahara,",
+ "venue": "Proceedings of the Annual Conference of the International Speech\nCommunication Association, INTERSPEECH, vol. 2019-September, pp. 2803\u20132807,\n2019.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cSelf Supervised Adversarial Domain Adaptation for Cross-Corpus and\nCross-Language Speech Emotion Recognition,\u201d",
+ "author": "Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, and Bjorn Wolfgang\nSchuller,",
+ "venue": "IEEE Transactions on Affective Computing, 2022.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cEmotionNAS: Two-stream Neural Architecture Search for Speech\nEmotion Recognition,\u201d",
+ "author": "Haiyang Sun, Zheng Lian, Bin Liu, Ying Li, Licai Sun, Cong Cai, Jianhua Tao,\nMeng Wang, and Yuan Cheng,",
+ "venue": "arXiv Prepr., pp. 1\u20135, 3 2022.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "\u201cNeural architecture search for speech emotion recognition,\u201d",
+ "author": "Xixin Wu, Shoukang Hu, Zhiyong Wu, Xunying Liu, and Helen Meng,",
+ "venue": "ICASSP, IEEE International Conference on Acoustics, Speech and\nSignal Processing - Proceedings, vol. 2022-May, pp. 6902\u20136906, 2022.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cEvolving Learning for Analysing Mood-Related Infant\nVocalisation,\u201d",
+ "author": "Zixing Zhang, Jing Han, Kun Qian, and Bj\u00f6rn Schuller,",
+ "venue": "Proceedings INTERSPEECH 2018, 19. Annual Conference of the\nInternational Speech Communication Association, pp. 142\u2013146, 2018.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cSpeech emotion recognition based on convolutional neural network\nwith attention-based bidirectional long short-term memory network and\nmulti-task learning,\u201d",
+ "author": "Zhen Tao Liu, Meng Ting Han, Bao Han Wu, and Abdul Rehman,",
+ "venue": "Applied Acoustics, vol. 202, pp. 109178, 1 2023.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cIEMOCAP: interactive emotional dyadic motion capture database,\u201d",
+ "author": "Carlos Busso, Murtaza Bulut, Chi-Chun Lee, Abe Kazemzadeh, Emily Mower, Samuel\nKim, Jeannette N Chang, Sungbok Lee, and Shrikanth S Narayanan,",
+ "venue": "Language Resources and Evaluation, vol. 42, no. 4, pp. 335,\n2008.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cMSP-IMPROV: An Acted Corpus of Dyadic Interactions to Study\nEmotion Perception,\u201d",
+ "author": "C Busso, S Parthasarathy, A Burmania, M AbdelWahab, N Sadoughi, and E M\nProvost,",
+ "venue": "IEEE Transactions on Affective Computing, vol. 8, no. 1, pp.\n67\u201380, 2017.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cComparison of parametric representations for monosyllabic word\nrecognition in continuously spoken sentences,\u201d",
+ "author": "Steven Davis and Paul Mermelstein,",
+ "venue": "IEEE transactions on acoustics, speech, and signal processing,\nvol. 28, no. 4, pp. 357\u2013366, 1980.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cDirect Modelling of Speech Emotion from Raw Speech,\u201d",
+ "author": "Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, and Julien Epps,",
+ "venue": "in Proceedings of the Annual Conference of the International\nSpeech Communication Association, INTERSPEECH, 2019, pp. 3920\u20133924.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "\u201clibrosa: Audio and music signal analysis in python,\u201d",
+ "author": "Brian McFee, Colin Raffel, Dawen Liang, Daniel P W Ellis, Matt McVicar, Eric\nBattenberg, and Oriol Nieto,",
+ "venue": "2015, vol. 8.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "\u201cCNN+LSTM Architecture for Speech Emotion Recognition with Data\nAugmentation,\u201d",
+ "author": "Caroline Etienne, Guillaume Fidanza, Andrei Petrovskii, Laurence Devillers, and\nBenoit Schmauch,",
+ "venue": "in Workshop on Speech, Music and Mind (SMM 2018), ISCA, 9 2018,\nISCA.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "\u201cUnderstanding deep learning (still) requires rethinking\ngeneralization,\u201d",
+ "author": "Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals,",
+ "venue": "Communications of the ACM, vol. 64, no. 3, pp. 107\u2013115, 2\n2021.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2305.14402v3"
+ }
20240119/2306.00119v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2306.16199v2.json ADDED
@@ -0,0 +1,318 @@
+ {
+ "title": "Shape and parameter identification by the linear sampling method for a restricted Fourier integral operator",
+ "abstract": "In this paper we provide a new linear sampling method, based on the same data but a different definition of the data operator, for two inverse problems: the multi-frequency inverse source problem for a fixed observation direction and the Born inverse scattering problem. We show that the associated regularized linear sampling indicator converges to the average of the unknown in a small neighborhood as the regularization parameter approaches zero.\nWe develop both a shape identification theory and a parameter identification theory, which are stimulated, analyzed, and implemented with the help of the prolate spheroidal wave functions and their generalizations. We further propose a prolate-based implementation of the linear sampling method and provide numerical experiments to demonstrate how this linear sampling method is capable of reconstructing both the shape and the parameter.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Inverse scattering and inverse source problems play important roles in geophysical exploration, non-destructive testing, medical diagnosis, and numerous problems associated with shape and parameter identification. The linear sampling method, first proposed in [11 ###reference_11###], is a non-iterative imaging method for shape identification. It requires little a priori information (such as boundary conditions and the number of connected components) about the object, provides a direct computational implementation, and is robust to noise.\nThe factorization method proposed later in [16 ###reference_16###] gives a complete theory for shape identification and also provides a direct implementation. With the help of the factorization method, a complete theoretical justification of the linear sampling method was given by [1 ###reference_1###, 2 ###reference_2###] and was further developed in [3 ###reference_3###] (since they called their formulation an alternative formulation of the linear sampling method, we simply refer to their method as the alternative linear sampling method for convenience in this paper). The generalized linear sampling method proposed in [6 ###reference_6###] modifies the regularizer and also provides a complete theoretical justification of the linear sampling method with minimal restrictions. The linear sampling and factorization methods play important roles in inverse problems associated with shape identification, such as the inverse scattering problem and electrical impedance tomography. We refer to the monographs [9 ###reference_9###, 10 ###reference_10###, 12 ###reference_12###, 19 ###reference_19###] for a more comprehensive discussion.\nIn this paper we provide a linear sampling type method for shape identification based on a different definition of the data operator and show that the indicator represents an average of the unknown, which leads to parameter identification/estimation. 
The demonstration in this paper is in the context of the multi-frequency inverse source problem for a fixed observation direction and the Born inverse scattering problems. Part of the motivation is due to the numerical examples performed by [21 ###reference_21###] on a multi-frequency inverse source problem in waveguides and the data-driven basis based on prolate spheroidal wave functions (PSWFs) and their generalizations in [22 ###reference_22###].\nIn this paper we use the PSWFs to analyze and implement the linear sampling method.\nIn a broader context, it is noted that there are many existing results on parameter identification for the multi-frequency inverse source problem and the Born inverse scattering problem, such as [17 ###reference_17###, 22 ###reference_22###, 23 ###reference_23###, 24 ###reference_24###], diffraction tomography [15 ###reference_15###, 26 ###reference_26###], and the numerous references therein. In the recent paper [14 ###reference_14###], it was shown that the convergence for parameter identification for the inverse source problem on the ball is of H\u00f6lder-logarithmic type, where the analysis was based on the PSWFs in one dimension and the Radon transform.\nShape identification. Our formulation of the linear sampling method is based on solving the usual data equation for a slightly different data operator by a general regularization scheme, which gives an indicator function that allows one to characterize the shape.\nThis indicator function is similar to the alternative linear sampling method proposed in [3 ###reference_3###] and is closely related to the generalized linear sampling method in [6 ###reference_6###]; as is noted in [3 ###reference_3###], this formulation dates back to the first paper on the linear sampling method [11 ###reference_11###], where an alternative indicator function was suggested. 
Our investigation is in the context of the multi-frequency inverse source problem for a fixed observation direction and the Born inverse scattering problems. It is worth noting that the multi-frequency factorization method for the inverse source problem was initially studied in [13 ###reference_13###], which motivates our present study. In this paper, we first obtain the shape identification result, based on the assumption that, roughly speaking, the factorization method theory applies. Several remarks: (1) We obtain shape identification theories based on the alternative linear sampling method and the generalized linear sampling method. Essentially the factorization method, the alternative linear sampling method, and the generalized linear sampling method are all capable of shape and parameter identification in our case. (2) The regularized solution can be obtained using any general regularization scheme (which can be the standard Tikhonov or the singular value cut-off regularization), so that it is a general shape identification theory, similarly to [3 ###reference_3###].\nParameter identification. The parameter identification is based on the same indicator function, and we show that one can reconstruct the average of over a small user-defined region (where is the unknown parameter). This result is quite general and relates, in the classical setting of the LSM, to the convergence of the solution of the usual data equation to a specific function depending on . We provide a proof in this setting. Additionally, we utilize the PSWFs to prove the parameter identification theory again. This result is stimulated by our efforts to use PSWFs as a tool for both the analysis and the implementation and demonstrates the relevance of PSWFs, as it is striking, at least to us, that one could use a basis independent of to obtain such a result. 
Again the parameter identification theory is a general theory, since the regularized solution can be obtained using any general regularization scheme.\nProlate spheroidal wave functions (PSWFs) and their generalizations. The PSWFs and their generalizations in the context of the restricted Fourier integral operator were studied in [27 ###reference_27###, 29 ###reference_29###, 30 ###reference_30###] and play important roles in Fourier analysis, uncertainty quantification, and signal processing. Their remarkable property is that the PSWFs are eigenfunctions of a restricted Fourier integral operator (which is one of the factorized operators associated with the data operator) and of a Sturm-Liouville differential operator at the same time. The generalizations of PSWFs in two dimensions are referred to as the disk PSWFs.\nFor a more complete picture of the theory and computation of the (disk) PSWFs we refer to [8 ###reference_8###, 31 ###reference_31###, 32 ###reference_32###] and the numerous references therein. For extensions to domains that are not disks, for which fewer theoretical results are available, we refer to [28 ###reference_28###] and references therein. Recently, (disk) PSWFs were applied to the inverse source problem in [14 ###reference_14###] and to the Born inverse scattering problems in [22 ###reference_22###].\nImplementation of the linear sampling method. In addition to our general theory on shape and parameter identification of the linear sampling and factorization methods, we propose a prolate-based formulation of the linear sampling method. The key observation is that one of the factorized operators has (disk) PSWFs as its eigenfunctions. In this way we obtain a reduced indicator function in a high-dimensional subspace. For the sake of rigor and completeness, we give the full details on the computation of the PSWFs and their corresponding prolate eigenvalues that are needed in our prolate-based linear sampling method.\nThe remainder of the paper is organized as follows. 
We first introduce in Section 2 ###reference_### an inverse source problem for a fixed observation direction and the Born inverse scattering problem, and summarize these two inverse problems as a single inverse problem associated with a restricted Fourier integral operator in Section 2.3 ###reference_###. For later purposes, we also introduce the (disk) PSWFs that stimulate our analysis and computation. Section 3 ###reference_### introduces the data operator, analyzes its factorizations, and shows a range identity. In Section 4 ###reference_###, we obtain the shape identification theories based on the alternative linear sampling method and the generalized linear sampling method with the help of the factorization method theory. Section 5 ###reference_### is devoted to the general parameter identification theory. We prove that our indicator function is capable of reconstructing the average of over a small user-defined region (where is the unknown parameter). In Section 6 ###reference_### we study an explicit example to discuss the nature of the inverse problem, followed by several preliminaries on the computation of PSWFs and the Legendre-Gauss-Lobatto quadrature. We also propose the prolate-based formulation of the linear sampling method. Finally, Section 7 ###reference_### is devoted to numerical experiments that illustrate the shape and parameter identification theory."
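As a rough numerical illustration of the objects discussed in this introduction (an assumed discretization, not the paper's implementation), one can form the restricted Fourier integral operator on a grid and observe the prolate spectrum: the singular values stay nearly flat up to about the Shannon number 2c/pi and then plunge, and the leading singular vector approximates the first (even) PSWF. The bandwidth c = 8 and grid sizes below are arbitrary choices for the sketch.

```python
import numpy as np

# Discretize (F g)(k) = \int_{-1}^{1} g(x) e^{-i k x} dx for |k| <= c
# with a midpoint rule in x and a uniform grid in k.
c, n = 8.0, 400
x = (np.arange(n) + 0.5) / n * 2.0 - 1.0       # midpoint nodes on (-1, 1)
dx = 2.0 / n
k = np.linspace(-c, c, n)
F = np.exp(-1j * np.outer(k, x)) * dx           # n x n quadrature matrix

U, s, Vh = np.linalg.svd(F)

shannon = 2.0 * c / np.pi                       # ~5.1 significant modes here
plateau = int(np.sum(s > 0.5 * s[0]))           # size of the flat part of the spectrum
print(round(shannon, 2), plateau, bool(s[30] < 1e-6 * s[0]))

# The leading right singular vector approximates the first PSWF psi_0,
# which is even; on the symmetric grid its modulus is symmetric about x = 0.
print(bool(np.allclose(np.abs(Vh[0]), np.abs(Vh[0][::-1]), atol=1e-6)))
```

The super-exponential plunge of the spectrum is exactly what makes the regularized inversion discussed above severely ill-posed beyond the first few prolate modes.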
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "The Mathematical Model for Inverse Source and Born Inverse Scattering Problems",
+ "text": "In this section, we first introduce an inverse source problem for a fixed observation direction and the Born inverse scattering problem. We then summarize these two inverse problems in Section 2.3 ###reference_###."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Multi-frequency inverse source problem for a fixed observation direction",
+ "text": "In this section we introduce the inverse source problem with multi-frequency data measured at sparse observation directions.\nNote that the notations in this section are only for the purpose of introducing the inverse source model, which leads to a model given later by (6 ###reference_###), and thus remain relevant only in this section.\nWhen considering the acoustic wave propagation due to a source in a homogeneous isotropic medium in (),\none has the nonhomogeneous Helmholtz equation\nwhere the wave number is denoted by , the support of the unknown source is denoted by , which is a bounded Lipschitz domain in with connected complement . We suppose that the support of is a subset of .\nThe scattered field is required to satisfy the Sommerfeld radiation condition\nuniformly for all directions . It is known that\nand that\n(see, for instance, [9 ###reference_9###])\nwhere is an observation direction belonging to which denotes the unit circle, and the wavenumber belongs to the interval with . Decomposing and\n, and identifying as the line orthogonal to , we proceed to\nwhere is the Radon transform (see, for instance, [24 ###reference_24###]).\nNote\nthat\nthen the knowledge of amounts to the knowledge of where\nBy appropriate scaling, one will be led to the problem (6 ###reference_###) in one dimension. In particular, setting , equation (2 ###reference_###) yields\nNote that we have assumed that the support of is a subset of so that the support of is a subset of , then one is led to\nwhere and and for . The inverse problem is to determine certain information (which will be made precise later) about from the knowledge of . 
In this way one formulates the inverse problems in the form of (3 ###reference_###) which is the one-dimensional case of (6 ###reference_###).\nThe above inverse problem associated with (2 ###reference_###) and (3 ###reference_###) is the multi-frequency inverse problem for a fixed observation direction.\nWhen considering the multi-frequency inverse problem of determining (or its support) from with all possible observation directions and all the in where is given by (1 ###reference_###), one is led to the problem (6 ###reference_###) in two dimensions (after appropriate scaling) with some corresponding parameter .\nIf one is concerned with recovering or its support, results in that direction can be found in [13 ###reference_13###]."
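The reduction above says that the measured data amount to the Fourier transform of a compactly supported profile, sampled on a frequency band. A minimal numerical sketch of this data model (the box profile, grid sizes, and band below are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

# Hypothetical 1D source profile w supported in [-a, a] (a box function).
a = 0.7
w = lambda x: np.where(np.abs(x) <= a, 1.0, 0.0)

# Frequency band on which the multi-frequency data are measured (assumed).
kappas = np.linspace(0.1, 10.0, 50)

# Rectangle-rule quadrature for the Fourier integral over the support [-1, 1].
x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
data = np.array([np.sum(w(x) * np.exp(1j * k * x)) * dx for k in kappas])

# Analytic Fourier transform of the box: 2 sin(kappa*a)/kappa.
exact = 2.0 * np.sin(kappas * a) / kappas
err = np.max(np.abs(data - exact))
```

The quadrature error is dominated by the jump of the box at its endpoints, hence first order in the grid spacing.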
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Born inverse scattering problem",
+ "text": "In this section we introduce the Born inverse scattering problem.\nNote again that the notations in this section are only for the purpose of introducing the Born inverse scattering model which leads to (6 ###reference_###), thus remain relevant only in this section. Let be the wave number. A plane wave takes the following form\nwhere is the direction of propagation.\nLet be an open and bounded set with\nLipschitz boundary such that is connected. The set is referred to as the medium. Let the real-valued function be the contrast of the medium and on . The medium scattering due to a plane wave is to find total wave filed belonging to such that\nwhere the last equation, i.e., the Sommerfeld radiation condition, holds uniformly for all directions (and a solution is called radiating if it satisfies this radiation condition). The scattered wave field is . This scattering problem is well-posed and there exists a unique radiating solution; see, for instance, [12 ###reference_12###, 19 ###reference_19###]. This model is referred to as the full model.\nBorn approximation is a widely used method to treat inverse problems; see, for instance, [12 ###reference_12###, 23 ###reference_23###].\nIn the Born approximation region, one can approximate the solution by its Born approximation , which is the unique radiating solution to\nNote that (cf. [9 ###reference_9###])\nuniformly with respect to all directions , we arrive at which is known as the scattering amplitude or far field pattern with denoting the observation direction. 
It directly follows from [9 ###reference_9###] that\ntherefore the knowledge of amounts to the knowledge of \nwhere is a truncated Fourier transform of given by\nand is a disk centered at the origin with radius .\nThis equivalent formulation is due to (4 ###reference_###) and the fact that is the interior of .\nSimilarly to Section 2.1 ###reference_###, by introducing such that , one can reformulate the above problem (5 ###reference_###) as (6 ###reference_###) with some corresponding parameter after some scaling. We omit the details."
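In the Born regime, far-field data at wavenumber k, incident direction d, and observation direction x̂ probe the Fourier transform of the contrast at the frequency point k(x̂ − d); sweeping all directions and wavenumbers up to a maximum fills a disk of twice that radius. A small sketch verifying this coverage (the specific wavenumber range and direction sampling are illustrative assumptions):

```python
import numpy as np

k_max = 5.0                                 # assumed maximal wavenumber
ks = np.linspace(0.5, k_max, 10)
angles = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # unit directions

# Frequency points k*(xhat - d) probed by the Born far field.
pts = np.array([k * (xh - d) for k in ks for xh in dirs for d in dirs])
radii = np.linalg.norm(pts, axis=1)
```

The extreme cases x̂ = d (center of the disk) and x̂ = −d (radius 2k) are both attained by the sampled directions.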
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "A model that summarizes the inverse source and Born inverse scattering problem",
+ "text": "The inverse source problem for a fixed observation direction in Section 2.1 ###reference_### and the Born inverse scattering problem in Section 2.2 ###reference_### can be summarized as follows: for an unknown function , we consider determination of and its support given the available data (and their perturbations which are called the noisy data) where\nwith and denoting the unit interval/disk in . This corresponds to the knowledge of a restricted Fourier transform. Here the unknown function has compact support , and denotes an open and bounded set with Lipschitz boundary such that is connected. The parameter is a positive constant that is given by the model (cf. Section 2.1 ###reference_###).\nIn this paper we consider two classical inverse problems using the linear sampling method for a new data operator based on instead of : determination of the support of and determination of the function . The inverse problem of determining the support of is referred to as shape identification and the one of determining the function is referred to as parameter identification."
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "PSWFs and their generalization",
+ "text": "For later purposes we introduce the PSWFs and their generalizations which stimulate our analysis and computation. In one dimension , the PSWFs [27 ###reference_27###] are that are eigenfunctions of where\nand (we choose to normalize the eigenfunctions so that)\nhere denotes the operator given by\nIn two dimensions, the corresponding normalized eigenfunctions are related to the generalized PSWFs (specifically the radial part of is called the generalized PSWFs according to [29 ###reference_29###]; in this paper we simply refer to in two dimensions as the disk PSWFs for convenience). As such, is referred to as the (disk) PSWFs in dimension . Note that in two dimensions the indexes in (7 ###reference_###)-(8 ###reference_###) are multiple indexes given by (9 ###reference_###) where\nNote that the eigenfunctions are real-valued, analytic, orthonormal, and complete in in both one dimension and two dimensions. All the prolate eigenvalues are non-zero. For more details we refer to [8 ###reference_8###, 27 ###reference_27###, 31 ###reference_31###] for the one dimensional case and [22 ###reference_22###, 29 ###reference_29###, 32 ###reference_32###] for the two dimensional case."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Data operator, factorization, and range identity",
+ "text": "In this section we introduce a data operator defined by the given data (6 ###reference_###), and study its factorization and a range identity. Following [13 ###reference_13###] (see also [21 ###reference_21###]), we introduce the data operator by\nwhere the kernel is given by the data (6 ###reference_###). Note that the data are functions in .\nThe above data operator enjoys a factorization as follows. Introduce by\nand it follows directly that its adjoint is given by\nwhich is dictated by . Here represents the inner product with conjugation in the second function, and we further denote the corresponding norm. From now on we drop the subscript when the inner product is in and will explicitly indicate a subscript for other cases. Another operator is needed for the factorization, namely which is given by\nNow we are ready to prove the factorization theorem.\nLet the data operator be given by (10 ###reference_###). Then it holds that\nwhere , , and are given by (11 ###reference_###), (12 ###reference_###), and (13 ###reference_###), respectively.\nProof.\u2009\u2009 From the definition (10 ###reference_###) of , one gets\nfor any . This completes the proof.\nSeveral properties hold.\nThe operator is compact, injective and has dense range.\nThe operator is compact, injective and has dense range in .\nProof.\u2009\u2009 Note that the kernel is analytic, then both and are compact. Note that and has non-empty interior, then is injective follows directly from that coincide with an analytic function, namely the Fourier transform of a function, , with compact support in . This yields that has dense range in . Reversing and in the previous arguments give the injectivity of . This completes the proof.\nAssume that , a.e., , for some positive constants and some constant phase . 
Then is self-adjoint and positive definite.\nProof.\u2009\u2009 From the definition (13 ###reference_###) of , one has for any\nthen is self-adjoint and\nwhich completes the proof.\nTo proceed with the factorization method and linear sampling method, one works with another operator given by\n\nTo conveniently illustrate how the linear sampling and factorization methods go beyond shape identification, we choose to work in the case that\nfor some positive constants . This is assumed throughout the remainder of this paper.\nNote that is self-adjoint and positive definite due to assumption (14 ###reference_###). We now state the following lemma on the range identity.\nAssume that (14 ###reference_###) holds. Then it follows that\n.\nProof.\u2009\u2009 Since the middle operator is positive definite and self-adjoint, the proof follows from [19 ###reference_19###, Corollary 1.22] and Proposition 1 ###reference_1###.\nTheorem 1 ###reference_rem1### gives a first factorization of the data operator. One can obtain another factorization as follows.\nIntroduce by\nand it follows directly that its adjoint is given by\nwhich is dictated by . Note that the operator is nothing but the operator . Furthermore, we introduce by\nwhere is the extension of given by\nIt follows directly that\nwhere , , and are given by (15 ###reference_###), (16 ###reference_###), and (17 ###reference_###), respectively.\nIt is noted that the middle operator is no longer positive definite unless ; this is in contrast to the first factorization where is positive definite. It is also noted that and are parameter-independent.\nFor later purposes, we introduce the eigensystem of the self-adjoint, positive definite operator by\nhere and .\nWe would also like to stress that PSWFs are an interesting tool for analysing the data operator . 
Indeed we show that they in effect compress the operator, since\nwhere in the last step we applied the Cauchy-Schwarz inequality twice and the fact that is an orthonormal set.\nTherefore, as evidenced by the super-fast decay of the prolate eigenvalues (see, for instance, [31 ###reference_31###, equation 2.17]) where\nwe can deduce that the operator has a compressed representation in the basis . This could allow one to speed up the computation by truncation or to help denoise the data."
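The factorization of the data operator can be checked in finite dimensions: discretizing the restricted Fourier transform as a matrix F and the middle operator as multiplication by a positive contrast, the resulting product is Hermitian and (numerically) positive semidefinite. A sketch under our own discretization assumptions (grids, scaling parameter, and the particular contrast are all illustrative):

```python
import numpy as np

c = 5.0                                    # assumed scaling parameter
m = 120
x = np.linspace(-1.0, 1.0, m)              # spatial grid on the support
xi = np.linspace(-1.0, 1.0, m)             # grid on the unit interval
hx, hxi = x[1] - x[0], xi[1] - xi[0]

E = np.exp(1j * c * np.outer(x, xi))       # e^{i c x xi}
H = E * hxi                                # H: data space -> L2 on the support
Hs = E.conj().T * hx                       # its adjoint H*
f = 1.0 + 0.5 * np.cos(np.pi * x)          # positive contrast, f >= 1/2
N = Hs @ np.diag(f) @ H                    # factorization N = H* T H

ev = np.linalg.eigvalsh(N)                 # real: N is Hermitian
```

Since the underlying operator is compact, most eigenvalues sit at the round-off floor, so positivity is only checked up to a small tolerance.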
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Linear sampling and factorization methods for shape identification",
+ "text": "In this section we study the factorization method, generalized linear sampling method and a formulation of the linear sampling method for shape identification. To begin with, let be given by\nwhere\nhere is the length/area of that satisfies for some positive constants . In practical applications, we usually choose an interval/square region or an interval/disk region .\nThroughout the paper we fix the parameter in the analysis and thereby chose to omit the dependence of and on ; we also sample the sampling point so that which is assumed later on.\nThe function allows us to characterize the support of . More precisely we have the following lemma.\nLet . The following characterizations of the support hold.\nIf , then .\nIf , then .\nProof.\u2009\u2009 We first prove the first part. Let , then is supported in so that according to (12 ###reference_###) and (21 ###reference_###)\nwhich shows that .\nFor the second part, let and we prove by showing that if then mush vanish. To show this we first extend to that\nthen , i.e., which yields that .\nNote that the left hand side is supported in but the right hand side is not supported in (since ), this is a contradiction which shows that mush vanish and this completes the proof.\nThe linear sampling method (LSM) and factorization method (FM) for shape identification state the following.\nThe linear sampling method solves the data equation\nusing a regularization scheme to get a regularized solution and indicates that\nis large for with and is bounded for with (due to Proposition 1 ###reference_a1### and Lemma 2 ###reference_a2###). This is suggested by a partial theory similar to [11 ###reference_11###]; we omit this partial theory since we will show a formulation of the linear sampling method and the generalized linear sampling method with complete theoretical justification later on.\nA direct application of Lemma 1 ###reference_a1###\nand Lemma 2 ###reference_a2### yields the factorization method: If , then . 
If , then . Here\nwhere is the eigensystem of given by (20 ###reference_###).\nIn this paper, we study a formulation of the linear sampling method in the form of\nand we show later that such an LSM is capable of both shape and parameter identification. We will show that the factorization method and the generalized linear sampling method are also capable of both shape and parameter identification. In this section we first demonstrate its viability in shape identification. The idea is similar to the earlier work [1 ###reference_1###, 2 ###reference_2###, 6 ###reference_6###] in inverse scattering to justify/generalize the linear sampling method.\nTo begin with, we introduce a family of regularization schemes by\nwhere is a regularizing filter that is a bounded, real-valued, and piecewise continuous function such that\nhere is a constant.\nWith this family of regularization schemes , one can introduce a family of regularized solutions by .\nClassical regularizations include the Tikhonov regularization with\nand the singular value cut-off regularization with\nOur shape identification result is as follows.\nSuppose that is a family of regularization schemes given by (24 ###reference_###)\u2013(25 ###reference_###) and set . The following characterizations of the support hold.\nIf , then remains bounded as . Moreover\nwhere is the unique solution to .\nIf , then .\nProof.\u2009\u2009 It is sufficient to prove the theorem for since given by (22 ###reference_###) differs from by a scaling. To begin with, we first remind the reader that one always has .\nWe first derive an expression for . From the definition of , one gets whereby\n,\nin this way we obtain\nWe now prove the first part. Let , then from the factorization method result (23 ###reference_###) one can obtain that there exists a unique solution to and\n.\nSince satisfies (25 ###reference_###), we have from (27 ###reference_###) that\ni.e., remains bounded as . 
Then from the dominated convergence theorem we can take the limit so that\nThis proves the first part.\nFor the second part when , first note from the factorization method result (23 ###reference_###) that so that\n,\nthen for any large , there exists such that\n.\nNow we choose (due to the property of in (25 ###reference_###)) such that\n,\nwhich yields that for any large , there exists such that\nThis proves , i.e., which completes the proof.\nFrom the above theorem and its proof, one can also prove in the same way the following result, which uses the indicator introduced in the generalized linear sampling method, first proposed by [6 ###reference_6###].\nSuppose that is a family of regularization schemes given by (24 ###reference_###)\u2013(25 ###reference_###) and set . The following characterizations of the support hold.\nIf , then remains bounded as . Moreover\nwhere is the unique solution to .\nIf , then .\nProof.\u2009\u2009 The proof is almost the same as the proof of Theorem 2 ###reference_rem2###. We omit the details but only highlight the difference. The difference arises from\nand one can complete the proof by following line by line the proof of Theorem 2 ###reference_rem2###.\nTheorem 2 ###reference_rem2### and Theorem 3 ###reference_rem3### allow one to determine a neighborhood of the support since they are capable of determining whether or , similarly to [13 ###reference_13###, 21 ###reference_21###].\nWe show in the next section that such a formulation is capable of parameter identification in addition to shape identification."
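The two classical regularizing filters mentioned above can be written down in a few lines; both satisfy the filter properties used in the proofs, namely pointwise convergence to the inverse and a uniform bound. A sketch with the standard conventions (our parameter names):

```python
import numpy as np

# Two classical regularizing filters r_alpha(sigma): Tikhonov and singular
# value cut-off.  Both satisfy r_alpha(s) -> 1/s as alpha -> 0 and the
# uniform bound s * |r_alpha(s)| <= 1.

def tikhonov(s, alpha):
    return s / (s**2 + alpha)

def cutoff(s, alpha):
    return np.where(s >= alpha, 1.0 / np.maximum(s, alpha), 0.0)

s = np.linspace(1e-6, 1.0, 1000)
for alpha in (1e-2, 1e-4, 1e-6):
    assert np.all(s * tikhonov(s, alpha) <= 1.0 + 1e-12)
    assert np.all(s * cutoff(s, alpha) <= 1.0 + 1e-12)

# Pointwise convergence to 1/s away from zero (here at s = 1/2).
conv_err = np.abs(tikhonov(0.5, 1e-8) - 2.0)
```

Applied spectrally to the eigensystem of the data operator, either filter yields the family of regularized solutions used in the theorems.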
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Linear sampling and factorization methods for parameter identification",
+ "text": "In this section we demonstrate that the linear sampling and factorization methods have capability in parameter identification.\nWe first prove the following lemma.\nSuppose that is a family of regularization schemes given by (24 ###reference_###)\u2013 (25 ###reference_###) and set . Then it holds for any that\nwhere given by (25 ###reference_###) is a constant independent of , and given by (14 ###reference_###) is the lower bound of .\nProof.\u2009\u2009 Note that is self-adjoint and bounded below by (here is the identity operator), then it follows that\nNote that is given by whereby\nwhich yields (where one notes that and are real-valued)\nthis together with yields that\nwhere the last step is due to that is supported in .\nNow combining (29 ###reference_###)\u2013(30 ###reference_###) we have that and\nthis proves the lemma by noting that due to the definition of in (22 ###reference_###).\nNow we are ready to prove the parameter identification theorem. For convenience we let be the pseudo inverse of given by\nThe following (disk) PSWFs expansion of and will be often used.\nLet and let the (disk) PSWFs expansion of be\nthen it follows directly from (21 ###reference_###) that and (7 ###reference_###) so that\nSuppose that is a family of regularization schemes given by (24 ###reference_###)\u2013 (25 ###reference_###) and set . Then it holds for any that\nProof.\u2009\u2009 Throughout the proof, we let denote the extension of a generic function by setting in .\n(a). We first give representations for and , respectively. For the regularized solution , we get the (disk) PSWFs expansion of by\n,\nnote that is supported in , then we further get (where we recall is given by (31 ###reference_###))\nso that (by noting that , and are real-valued)\nOn the other hand,\nwhere in the last step we used the (disk) PSWFs expansion of and (32 ###reference_###) to evaluate their inner product, and the fact that and are real-valued.\n(b). 
Since and the -norm of is bounded uniformly with respect to (due to Lemma 3 ###reference_a3###), the infinite series in (35 ###reference_###) is uniformly convergent. By the dominated convergence theorem, one then gets\nBy noting that\none has\nThis equation together with (37 ###reference_###) yields that\nThis completes the proof.\nIt is possible to prove the result of Theorem 4 ###reference_rem4### using the singular system of . We include such a proof since this idea is expected to generalize to other inverse scattering problems.\nAlternate Proof of Theorem 4 ###reference_rem4###.\u2009\u2009 \nThe proof is very similar to the previous one except that we expand the quantity of interest with respect to an orthonormal basis given by and denote by .\nFirst we have the following expression\nthen we have the intermediate result:\nFinally using\nand combining the previous expression we obtain the stated result.\nNoting the connection between the linear sampling method and the factorization method, we can immediately obtain the following.\nSuppose that is a family of regularization schemes given by (24 ###reference_###)\u2013(25 ###reference_###) and set .\nFor any , let be the unique solution to . Then it holds that\nwhere denotes the length/area of .\nProof.\u2009\u2009 This follows directly from (26 ###reference_###) and (28 ###reference_###), Theorem 4 ###reference_rem4###, and .\nWe highlight the fact that Theorem 2 ###reference_rem2### and 3 ###reference_rem3### are able to determine the shape (almost) exactly. However our above result on parameter identification is \u201cblind\u201d near the boundary of the obstacle.\nNote that Theorem 2 ###reference_rem2### is not valid when is not sign definite. Dealing with a sign-changing contrast in the general case remains largely an open question. 
Two types of results exist: one [19 ###reference_19###] assumes that one knows in advance two domains that contain, respectively, the positive and negative sign-definite parts of the contrast, and the other [4 ###reference_4###] applies when changes sign strictly inside the support of the scatterer. The first approach could be extended straightforwardly to our case; however, we will not pursue this as it would need additional a priori information. The second approach is not possible as our operator is defined over and not over more regular spaces that allow one to analyse the contribution of inside as a compact perturbation.\nYet our results on parameter identification allow us to retrieve information even when changes sign, even though the shape identification is no longer valid. To do so one needs to introduce\nwhere is a domain that contains . Clearly is positive definite and its data operator is given by\nBy applying Theorem 4 ###reference_rem4### to , one can retrieve information on through the following corollary. In spirit our method is related to, and inspired by, imaging using differential measurements first introduced in [5 ###reference_5###]; here we compare and .\nWe introduce a family of regularization schemes given by (24 ###reference_###)\u2013(25 ###reference_###) where one substitutes by , and set . Then it holds for any that\nIt could be of interest numerically to consider associated to , which will give access to :\nFinally we introduce the reconstruction formula\nwhich will prove to give better numerical reconstructions."
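The parameter identification analysis above works with a pseudo-inverse of the data operator defined through its eigensystem, with the small eigenvalues discarded. A finite-dimensional sketch of such a spectral cut-off pseudo-inverse (the threshold convention and the random test matrix are our assumptions):

```python
import numpy as np

# Spectral cut-off pseudo-inverse of a self-adjoint positive matrix:
# eigenvalues below a threshold delta are discarded.

def pseudo_inverse(N, delta):
    lam, U = np.linalg.eigh(N)
    keep = lam >= delta
    return (U[:, keep] / lam[keep]) @ U[:, keep].conj().T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
N = A @ A.T + 1e-12 * np.eye(6)          # self-adjoint, (near) PSD
Ndag = pseudo_inverse(N, delta=1e-8)

# On the well-resolved spectral part, Ndag acts as a true inverse.
lam, U = np.linalg.eigh(N)
v = U[:, -1]                              # eigenvector of the largest eigenvalue
res = np.linalg.norm(Ndag @ (N @ v) - v)
```

For the compact data operator of this paper the threshold plays the role of the noise-level cut-off used in the numerical section.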
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "An explicit example and numerical preliminaries",
+ "text": "In this section, we first study an explicit example to discuss the nature of the inverse problem. Our numerical experiments in shape and parameter identification later on will be based on the inverse source problem with multi-frequency measurements for a fixed observation direction. This motivates us to discuss preliminaries for the computation of PSWFs and the evaluation of integrals involved in the prolate-Galerkin linear sampling method."
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "An explicit example",
+ "text": "In this section, we study an explicit example in one dimension where is constant one supported in . In this regard, one can directly show that for all ,\ni.e.,\nwhere is given by (7 ###reference_###)\nwith replacing by . This gives the explicit eigensystem of for this particular case. This example is extremely simple but delivers several important messages.\nThe first message is that the inverse problem is challenging as evidenced by the super fast decay of the prolate eigenvalues (see, for instance, [31 ###reference_31###, equation 2.17]) where\nThis indicates that the smaller the radius , the more ill-conditioned the inverse problem; it also indicates that the number of eigenvalues (say for a range of ) larger than machine epsilon is limited (and we will see more in the numerical examples).\nThe second message is that one is necessarily led to the computation of the eigensystem for this particular case and in general one needs appropriate quadrature rules for evaluating the integrals involved in the implement of LSM. Note that the PSWFs can be approximated by truncated Legendre series [8 ###reference_8###], the Legendre-Gauss-Lobatto (LGL) quadrature rule is a decent method (which requires more quadrature nodes than other prolate based Gaussian quadrature rules such as [8 ###reference_8###]) that at least serves our needs in this paper. Note also that there exist Gaussian quadrature rules such as [8 ###reference_8###] but this requires a little more computational efforts. However note that equidistant quadrature nodes may be less efficient for approximating some integral equations (cf. [18 ###reference_18###, Example 1.16]).\nIn the next subsections, we discuss the numerical approximation of the PSWFs eigensystem and introduce the Legendre-Gauss-Lobatto (LGL) quadrature rule for numerical evaluation of integrals involved in the implement of LSM."
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "Legendre polynomials and Legendre-Gauss-Lobatto quadrature",
+ "text": "In the following we introduce the Legendre-Gauss-Lobatto (LGL) quadrature rule with points that integrates all polynomials of degree less than or equal to exactly (see [25 ###reference_25###, Section 10.1\u201310.4] for more details). Denote by the Legendre polynomial of degree which satisfies the following recurrence relation\nand let (where the overline bar associated with is not supposed to be confused with the conjugation) be the normalized Legendre polynomial where\nwith denoting the Kronecker delta.\nLet be given distinct points over the interval , for the approximation of , we consider quadrature rules of the type\nwhere the points and coefficients are referred to as the nodes and weights of the quadrature, respectively. The Legendre-Gauss-Lobatto (LGL) quadrature rule has nodes and weights given by\nThe Legendre-Gauss-Lobatto quadrature rule, which includes the end points and , has degree of exactness , i.e., the quadrature formula integrates all polynomials of degree less than or equal to ."
+ },
+ {
+ "section_id": "6.3",
+ "parent_section_id": "6",
+ "section_name": "Computation of PSWFs system",
+ "text": "One can approximate the PSWFs by the Legendre-Galerkin method and the coefficients are determined by solving a linear system with a symmetric, tridiagonal matrix. It is based on another remarkable property of PSWFs that\n are also eigenfunctions to the following Sturm-Liouville differential operator (cf. [27 ###reference_27###, Section V] or [31 ###reference_31###, equation 2.1])\nwhere is the Sturm-Liouville differential operator given by\nHere the corresponding Sturm-Liouville eigenvalues are ordered in strictly increasing order and they satisfy\nIn particular to approximate the first PSWFs and Sturm-Liouville eigenvalues , following [8 ###reference_8###, Section 2], one expands\nwhere determines the truncation of the Legendre series. We then substitute this expansion into the Sturm-Liouville problem (40 ###reference_###) (and note that the Legendre polynomials satisfies this equation when ) to get the linear system\nwhere is an approximation of the exact eigenvalue , , and the matrix has non-zero entries given by\n[8 ###reference_8###, Section 2] suggested a truncation with to have a good approximation of the eigenvalue and .\nAfter the evaluation of the PSWFs, one can compute the prolate eigenvalues as follows (cf. [31 ###reference_31###, Section 2]). First set in equation (7 ###reference_###) to get\nwhere in the last step we applied (41 ###reference_###) and (39 ###reference_###). Note that is even for even and is odd for odd (see, for instance, [31 ###reference_31###, Section 2]), thereby vanishes for odd and we first get the approximation for eigenvalues with even indexes by\nSimilarly differentiating (7 ###reference_###) allows us to get for odd that\nIn this paper the formulas (42 ###reference_###)\u2013(43 ###reference_###) are sufficient to help us implement the linear sampling method."
+ },
+ {
+ "section_id": "6.4",
+ "parent_section_id": "6",
+ "section_name": "A prolate-based formulation of the linear sampling method",
+ "text": "Note that we have highly accurate algorithms to compute the (disk) PSWFs system,\nin this section we propose a prolate-based formulation of LSM. To begin with, let be the following set\nwhere we simply identify as the maximum index (which is a scalar index in one dimension and a multiple index in two dimensions).\nWe consider a prolate-Galerkin formulation by\nwhere the data operator and the z-dependent function gives the matrix and the right hand side by\nThis is a prolate-based formulation of the linear sampling method where we seek a reduced solution in the span of (disk) PSWFs .\nWe further define\nLet be a family of regularized solution obtained by regularizing (44 ###reference_###) with a family of regularization schemes (see, for instance, Section 4 ###reference_###; standard schemes include such as the Tikhonov regularization and the singular value cut off), then\naccording to Theorem 2 ###reference_rem2###, it is expected that the indicator function\nremains bounded as and for and cannot be bounded as and for . Moreover according to Theorem 4 ###reference_rem4###,\nas and . It is also possible to establish a semi-explicit convergence result for parameter identification with both noiseless and noisy data, this is ongoing work and will be reported in a forthcoming paper. Finally we remark that the prolate-based linear sampling method shares a similar spirit to the modal formulation of the linear sampling method in waveguide [7 ###reference_7###]; see also [20 ###reference_20###]."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "Numerical experiments for parameter and shape identification",
+ "text": "To demonstrate the shape and parameter identification theory, in this section we perform relevant numerical experiments for the inverse source problem with multi-frequency measurements for a fixed observation direction. The inverse source model was given by Section 2.1 ###reference_### and we in particular consider the following different parameters that can be divided into the following four types:\nConstant . This can be obtained by a constant source supported in a square given by and . Here . This leads to\nNote that in this case is a constant.\n\u201cIncreasing-decreasing\u201d . This can be obtained by a constant source supported in a rhombus given by\nand (which is equivalent to a constant source supported in a square but with a -rotated observation direction). Here . This leads to\nNote that in this case is increasing in and decreasing in .\n\u201cDecreasing-increasing\u201d . This can be obtained by a constant source supported in \u201cM\u201d given by\nand (which is equivalent to a source supported in a square but with non-constant intensities). Here . This leads to\nNote that in this case is decreasing in and increasing in .\nOscillatory . This can be obtained by a constant source supported in an oscillatory waveguide given by\nand (which is equivalent to a source supported in a square but with oscillatory intensities). Here is a positive integer that introduce the oscillatory nature, . 
This leads to\nNote that in this case is oscillatory in .\n###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### In the following we give details of the implementation of the prolate-Galerkin formulation of the linear sampling method.\nThe PSWFs are computed as detailed in Section 6.3 ###reference_### where we use the Matlab code developed in [8 ###reference_8###]. The prolate eigenvalues are then computed using the formulas (42 ###reference_###)\u2013(43 ###reference_###) by adding a simple Matlab script to the existing code of [8 ###reference_8###]. The exact data are calculated analytically by hand using the four different given by (46 ###reference_###)\u2013(49 ###reference_###). Given a noisy operator , we obtain a noisy data matrix according to (45 ###reference_###) so that\nwhere the noisy data are given by adding Gaussian noise to the exact data which introduces a noise level such that\n.\nWe integrate the product of the data and the PSWFs using a Legendre-Gauss-Lobatto (LGL) quadrature rule. The right-hand side is given by (45 ###reference_###) and (33 ###reference_###) where\nhere we approximate the integral over with , again using the Legendre-Gauss-Lobatto (LGL) quadrature rule in this interval .\nHaving a regularized solution computed from the noisy linear system (similarly to the noiseless case (44 ###reference_###))\nwe then proceed by\nand approximate each integral over , again using the Legendre-Gauss-Lobatto (LGL) quadrature rule. 
We further set the indicator function by\nwhich is a harmonic mean; as expected, the numerical examples confirm the well-known fact that the harmonic mean is larger than the classical mean.\nWe choose in this paper the spectral cut-off regularization, where we choose such that all the corresponding prolate eigenvalues (with indices in ) are larger than the noise level .\nFirst we illustrate in Figure 1 ###reference_### the fact that the PSWFs compress the data operator, where we compare the operator expressed in the PSWF basis, which is the matrix , with the operator discretized on a Cartesian grid and computed using the discrete (fast) Fourier transform, which is the matrix .\nMotivated by the particular example in Section 6.1 ###reference_###,\nit is likely that we need a large index set to achieve a sufficiently good approximation to the unknown. This first motivates us to perform a set of numerical examples with noiseless data where a large index set can be available (the index set cannot be too large since this means that we have to compute many prolate eigenvalues which is computationally challenging due to the super-fast exponential decay of the prolate eigenvalues; see also the particular example studied in Section 6.1 ###reference_###).\nIn Figure 2 ###reference_###, we plot the indicator function with noiseless data, , and two different . Here (left column) allows us to have an index set of dimension and (right column) allows us to have an index set of dimension . We approximate all integrals using a Legendre-Gauss-Lobatto (LGL) quadrature rule with quadrature nodes. The Matlab \u201cro-\u201d line represents the indicator function , which is an approximation (according to Theorem 4 ###reference_rem4###) to\nIt is seen that is an approximation of ; the Matlab \u201cb*-\u201d line represents (plotted only in ) and the Matlab \u201ck-\u201d line represents the exact (plotted only in ). 
The four types of parameters are given by setting (which is roughly ) in (46 ###reference_###)\u2013(49 ###reference_###), and we set to have oscillations in the case of oscillatory . It is observed that the larger the index set , the better the convergence.\nTo further demonstrate its viability, we report the results in Figure 3 ###reference_###, changing the noise level to for the case when . The robustness of the LSM with respect to noise is observed.\nThe next set of examples, in Figure 4 ###reference_###, is devoted to testing the performance with respect to different for both noiseless and noisy data.\nWe observe that a larger is expected to give better convergence for parameter identification; therefore, in order to observe a possible convergence for noisy data, we set in this set of examples. In the case of noisy data we set the noise level to . In these examples, we apply the LGL quadrature rule with quadrature nodes, and we report that all the results hold similarly with quadrature nodes. In the case of noiseless data, we observe similar performance with respect to different ; in the case of noisy data, we observe that the performance improves as becomes larger.\n###figure_23### ###figure_24### To illustrate the case of a sign-changing parameter , we report the results in Figure 5 ###reference_###. First we show the reconstruction of for of radius 0.8, where is of radius 0.6; these are similar to the ones from the previous examples. Then we show the comparison between and , and, as expected from Remark 6 ###reference_rk6###, the superiority of the second reconstruction, similarly to our inspiration from [5 ###reference_5###].\nTo conclude this numerical section, we report the numerical results for disjoint supports. Figure 6 ###reference_### shows the results for the case of a constant with two connected components; the distance between the two connected components is taken from the set . 
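All integrals above are approximated by a high-order Legendre quadrature. As a minimal sketch, the standard Gauss-Legendre rule (a close cousin of the Legendre-Gauss-Lobatto rule used in the paper, which additionally includes the endpoints) already illustrates the spectral accuracy involved; the helper below is an assumption of ours, not the paper's routine:

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point
    Gauss-Legendre rule. (The paper uses an LGL rule; this standard
    Gauss rule illustrates the same spectral-accuracy idea.)"""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    # Affine map of nodes from [-1, 1] to [a, b]; weights pick up (b-a)/2.
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.dot(w, f(xm))
```

For smooth integrands such as products of data and PSWFs, the error of such rules decays faster than any power of n, so a modest number of nodes suffices.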
This case is not covered by the theory, but the results are very promising, and a theoretical analysis of the resolution limit of our method is an interesting subject for future work.\n###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30###"
94
+ }
95
+ ],
96
+ "appendix": [],
97
+ "tables": {},
98
+ "image_paths": {
99
+ "1(a)": {
100
+ "figure_path": "2306.16199v2_figure_1(a).png",
101
+ "caption": "Figure 1: \n\u2016Ai,j\ud835\udd41\u2016normsubscriptsuperscript\ud835\udc34\ud835\udd41\ud835\udc56\ud835\udc57\\|A^{\\mathbbm{J}}_{i,j}\\|\u2225 italic_A start_POSTSUPERSCRIPT blackboard_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i , italic_j end_POSTSUBSCRIPT \u2225 on the left and \u2016Ai,jD\u2062F\u2062T\u2016normsubscriptsuperscript\ud835\udc34\ud835\udc37\ud835\udc39\ud835\udc47\ud835\udc56\ud835\udc57\\|A^{DFT}_{i,j}\\|\u2225 italic_A start_POSTSUPERSCRIPT italic_D italic_F italic_T end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i , italic_j end_POSTSUBSCRIPT \u2225 on the right in both case 1\u2264i,j\u2264400formulae-sequence1\ud835\udc56\ud835\udc574001\\leq i,j\\leq 4001 \u2264 italic_i , italic_j \u2264 400",
102
+ "url": "http://arxiv.org/html/2306.16199v2/x1.png"
103
+ },
104
+ "1(b)": {
105
+ "figure_path": "2306.16199v2_figure_1(b).png",
106
+ "caption": "Figure 1: \n\u2016Ai,j\ud835\udd41\u2016normsubscriptsuperscript\ud835\udc34\ud835\udd41\ud835\udc56\ud835\udc57\\|A^{\\mathbbm{J}}_{i,j}\\|\u2225 italic_A start_POSTSUPERSCRIPT blackboard_J end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i , italic_j end_POSTSUBSCRIPT \u2225 on the left and \u2016Ai,jD\u2062F\u2062T\u2016normsubscriptsuperscript\ud835\udc34\ud835\udc37\ud835\udc39\ud835\udc47\ud835\udc56\ud835\udc57\\|A^{DFT}_{i,j}\\|\u2225 italic_A start_POSTSUPERSCRIPT italic_D italic_F italic_T end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i , italic_j end_POSTSUBSCRIPT \u2225 on the right in both case 1\u2264i,j\u2264400formulae-sequence1\ud835\udc56\ud835\udc574001\\leq i,j\\leq 4001 \u2264 italic_i , italic_j \u2264 400",
107
+ "url": "http://arxiv.org/html/2306.16199v2/x2.png"
108
+ },
109
+ "2(a)": {
110
+ "figure_path": "2306.16199v2_figure_2(a).png",
111
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
112
+ "url": "http://arxiv.org/html/2306.16199v2/x3.png"
113
+ },
114
+ "2(b)": {
115
+ "figure_path": "2306.16199v2_figure_2(b).png",
116
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
117
+ "url": "http://arxiv.org/html/2306.16199v2/x4.png"
118
+ },
119
+ "2(c)": {
120
+ "figure_path": "2306.16199v2_figure_2(c).png",
121
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
122
+ "url": "http://arxiv.org/html/2306.16199v2/x5.png"
123
+ },
124
+ "2(d)": {
125
+ "figure_path": "2306.16199v2_figure_2(d).png",
126
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
127
+ "url": "http://arxiv.org/html/2306.16199v2/x6.png"
128
+ },
129
+ "2(e)": {
130
+ "figure_path": "2306.16199v2_figure_2(e).png",
131
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
132
+ "url": "http://arxiv.org/html/2306.16199v2/x7.png"
133
+ },
134
+ "2(f)": {
135
+ "figure_path": "2306.16199v2_figure_2(f).png",
136
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
137
+ "url": "http://arxiv.org/html/2306.16199v2/x8.png"
138
+ },
139
+ "2(g)": {
140
+ "figure_path": "2306.16199v2_figure_2(g).png",
141
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
142
+ "url": "http://arxiv.org/html/2306.16199v2/x9.png"
143
+ },
144
+ "2(h)": {
145
+ "figure_path": "2306.16199v2_figure_2(h).png",
146
+ "caption": "Figure 2: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noiseless data and \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT. Left column: dim(\ud835\udd41)=37dimension\ud835\udd4137\\dim(\\mathbb{J})=37roman_dim ( blackboard_J ) = 37. Right column: dim(\ud835\udd41)=54dimension\ud835\udd4154\\dim(\\mathbb{J})=54roman_dim ( blackboard_J ) = 54. Row j\ud835\udc57jitalic_j corresponds to type j\ud835\udc57jitalic_j, j=1,2,3,4.",
147
+ "url": "http://arxiv.org/html/2306.16199v2/x10.png"
148
+ },
149
+ "3(a)": {
150
+ "figure_path": "2306.16199v2_figure_3(a).png",
151
+ "caption": "Figure 3: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noisy level 5%percent55\\%5 %. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=20\ud835\udc5020c=20italic_c = 20.",
152
+ "url": "http://arxiv.org/html/2306.16199v2/x11.png"
153
+ },
154
+ "3(b)": {
155
+ "figure_path": "2306.16199v2_figure_3(b).png",
156
+ "caption": "Figure 3: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noisy level 5%percent55\\%5 %. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=20\ud835\udc5020c=20italic_c = 20.",
157
+ "url": "http://arxiv.org/html/2306.16199v2/x12.png"
158
+ },
159
+ "3(c)": {
160
+ "figure_path": "2306.16199v2_figure_3(c).png",
161
+ "caption": "Figure 3: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noisy level 5%percent55\\%5 %. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=20\ud835\udc5020c=20italic_c = 20.",
162
+ "url": "http://arxiv.org/html/2306.16199v2/x13.png"
163
+ },
164
+ "3(d)": {
165
+ "figure_path": "2306.16199v2_figure_3(d).png",
166
+ "caption": "Figure 3: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for four different types of parameters with noisy level 5%percent55\\%5 %. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=20\ud835\udc5020c=20italic_c = 20.",
167
+ "url": "http://arxiv.org/html/2306.16199v2/x14.png"
168
+ },
169
+ "4(a)": {
170
+ "figure_path": "2306.16199v2_figure_4(a).png",
171
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
172
+ "url": "http://arxiv.org/html/2306.16199v2/x15.png"
173
+ },
174
+ "4(b)": {
175
+ "figure_path": "2306.16199v2_figure_4(b).png",
176
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
177
+ "url": "http://arxiv.org/html/2306.16199v2/x16.png"
178
+ },
179
+ "4(c)": {
180
+ "figure_path": "2306.16199v2_figure_4(c).png",
181
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
182
+ "url": "http://arxiv.org/html/2306.16199v2/x17.png"
183
+ },
184
+ "4(d)": {
185
+ "figure_path": "2306.16199v2_figure_4(d).png",
186
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
187
+ "url": "http://arxiv.org/html/2306.16199v2/x18.png"
188
+ },
189
+ "4(e)": {
190
+ "figure_path": "2306.16199v2_figure_4(e).png",
191
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
192
+ "url": "http://arxiv.org/html/2306.16199v2/x19.png"
193
+ },
194
+ "4(f)": {
195
+ "figure_path": "2306.16199v2_figure_4(f).png",
196
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
197
+ "url": "http://arxiv.org/html/2306.16199v2/x20.png"
198
+ },
199
+ "4(g)": {
200
+ "figure_path": "2306.16199v2_figure_4(g).png",
201
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
202
+ "url": "http://arxiv.org/html/2306.16199v2/x21.png"
203
+ },
204
+ "4(h)": {
205
+ "figure_path": "2306.16199v2_figure_4(h).png",
206
+ "caption": "Figure 4: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data (left column) and noisy data with noise level \u03b4=5%\ud835\udeffpercent5\\delta=5\\%italic_\u03b4 = 5 % (right column). \u03f5=1\u00d710\u22121italic-\u03f51superscript101\\epsilon=1\\times 10^{-1}italic_\u03f5 = 1 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT. From top to bottom c=3,5,7,10\ud835\udc5035710c=3,5,7,10italic_c = 3 , 5 , 7 , 10.",
207
+ "url": "http://arxiv.org/html/2306.16199v2/x22.png"
208
+ },
209
+ "5(a)": {
210
+ "figure_path": "2306.16199v2_figure_5(a).png",
211
+ "caption": "Figure 5: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), q+qi\u2062n\u2062f\ud835\udc5esubscript\ud835\udc5e\ud835\udc56\ud835\udc5b\ud835\udc53q+q_{inf}italic_q + italic_q start_POSTSUBSCRIPT italic_i italic_n italic_f end_POSTSUBSCRIPT on the left and I\u2062(z)\u2212qi\u2062n\u2062f\u2062(z)\ud835\udc3c\ud835\udc67subscript\ud835\udc5e\ud835\udc56\ud835\udc5b\ud835\udc53\ud835\udc67I(z)-q_{inf}(z)italic_I ( italic_z ) - italic_q start_POSTSUBSCRIPT italic_i italic_n italic_f end_POSTSUBSCRIPT ( italic_z ), I\u2062(z)\u2212Ii\u2062n\u2062f\u2062(z)\ud835\udc3c\ud835\udc67subscript\ud835\udc3c\ud835\udc56\ud835\udc5b\ud835\udc53\ud835\udc67I(z)-I_{inf}(z)italic_I ( italic_z ) - italic_I start_POSTSUBSCRIPT italic_i italic_n italic_f end_POSTSUBSCRIPT ( italic_z ), q\ud835\udc5eqitalic_q on the right and noiseless data with \u03f5=5\u00d710\u22121italic-\u03f55superscript101\\epsilon=5\\times 10^{-1}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT and c=40\ud835\udc5040c=40italic_c = 40.",
212
+ "url": "http://arxiv.org/html/2306.16199v2/x23.png"
213
+ },
214
+ "5(b)": {
215
+ "figure_path": "2306.16199v2_figure_5(b).png",
216
+ "caption": "Figure 5: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), q+qi\u2062n\u2062f\ud835\udc5esubscript\ud835\udc5e\ud835\udc56\ud835\udc5b\ud835\udc53q+q_{inf}italic_q + italic_q start_POSTSUBSCRIPT italic_i italic_n italic_f end_POSTSUBSCRIPT on the left and I\u2062(z)\u2212qi\u2062n\u2062f\u2062(z)\ud835\udc3c\ud835\udc67subscript\ud835\udc5e\ud835\udc56\ud835\udc5b\ud835\udc53\ud835\udc67I(z)-q_{inf}(z)italic_I ( italic_z ) - italic_q start_POSTSUBSCRIPT italic_i italic_n italic_f end_POSTSUBSCRIPT ( italic_z ), I\u2062(z)\u2212Ii\u2062n\u2062f\u2062(z)\ud835\udc3c\ud835\udc67subscript\ud835\udc3c\ud835\udc56\ud835\udc5b\ud835\udc53\ud835\udc67I(z)-I_{inf}(z)italic_I ( italic_z ) - italic_I start_POSTSUBSCRIPT italic_i italic_n italic_f end_POSTSUBSCRIPT ( italic_z ), q\ud835\udc5eqitalic_q on the right and noiseless data with \u03f5=5\u00d710\u22121italic-\u03f55superscript101\\epsilon=5\\times 10^{-1}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT and c=40\ud835\udc5040c=40italic_c = 40.",
217
+ "url": "http://arxiv.org/html/2306.16199v2/x24.png"
218
+ },
219
+ "6(a)": {
220
+ "figure_path": "2306.16199v2_figure_6(a).png",
221
+ "caption": "Figure 6: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data and components getting closer and closer. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=100\ud835\udc50100c=100italic_c = 100.",
222
+ "url": "http://arxiv.org/html/2306.16199v2/x25.png"
223
+ },
224
+ "6(b)": {
225
+ "figure_path": "2306.16199v2_figure_6(b).png",
226
+ "caption": "Figure 6: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data and components getting closer and closer. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=100\ud835\udc50100c=100italic_c = 100.",
227
+ "url": "http://arxiv.org/html/2306.16199v2/x26.png"
228
+ },
229
+ "6(c)": {
230
+ "figure_path": "2306.16199v2_figure_6(c).png",
231
+ "caption": "Figure 6: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data and components getting closer and closer. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=100\ud835\udc50100c=100italic_c = 100.",
232
+ "url": "http://arxiv.org/html/2306.16199v2/x27.png"
233
+ },
234
+ "6(d)": {
235
+ "figure_path": "2306.16199v2_figure_6(d).png",
236
+ "caption": "Figure 6: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data and components getting closer and closer. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=100\ud835\udc50100c=100italic_c = 100.",
237
+ "url": "http://arxiv.org/html/2306.16199v2/x28.png"
238
+ },
239
+ "6(e)": {
240
+ "figure_path": "2306.16199v2_figure_6(e).png",
241
+ "caption": "Figure 6: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data and components getting closer and closer. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=100\ud835\udc50100c=100italic_c = 100.",
242
+ "url": "http://arxiv.org/html/2306.16199v2/x29.png"
243
+ },
244
+ "6(f)": {
245
+ "figure_path": "2306.16199v2_figure_6(f).png",
246
+ "caption": "Figure 6: \n\nPlot of I\u2062(z)\ud835\udc3c\ud835\udc67I(z)italic_I ( italic_z ), qa\u2062v\u2062gsubscript\ud835\udc5e\ud835\udc4e\ud835\udc63\ud835\udc54q_{avg}italic_q start_POSTSUBSCRIPT italic_a italic_v italic_g end_POSTSUBSCRIPT, and q\ud835\udc5eqitalic_q for noiseless data and components getting closer and closer. \u03f5=5\u00d710\u22122italic-\u03f55superscript102\\epsilon=5\\times 10^{-2}italic_\u03f5 = 5 \u00d7 10 start_POSTSUPERSCRIPT - 2 end_POSTSUPERSCRIPT and c=100\ud835\udc50100c=100italic_c = 100.",
247
+ "url": "http://arxiv.org/html/2306.16199v2/x30.png"
248
+ }
249
+ },
250
+ "validation": true,
251
+ "references": [
252
+ {
253
+ "1": {
254
+ "title": "The linear sampling method in a waveguide: a modal formulation.",
255
+ "author": "L Bourgeois and E Lun\u00e9ville.",
256
+ "venue": "Inverse problems 24(1), 015018, 2008.",
257
+ "url": null
258
+ }
259
+ },
260
+ {
261
+ "2": {
262
+ "title": "Qualitative Approach to Inverse Scattering Theory, Springer, 2016.",
263
+ "author": "F Cakoni and D Colton.",
264
+ "venue": null,
265
+ "url": null
266
+ }
267
+ },
268
+ {
269
+ "3": {
270
+ "title": "Inverse Scattering Theory and Transmission Eigenvalues, CBMS-NSF, SIAM Publications 98, 2nd Edition, 2023.",
271
+ "author": "F Cakoni, D Colton and H Haddar.",
272
+ "venue": null,
273
+ "url": null
274
+ }
275
+ },
276
+ {
277
+ "4": {
278
+ "title": "Inverse Acoustic and Electromagnetic Scattering Theory, Springer Nature, New York, 2019.",
279
+ "author": "D Colton and R Kress.",
280
+ "venue": null,
281
+ "url": null
282
+ }
283
+ },
284
+ {
285
+ "5": {
286
+ "title": "The Factorization Method for Inverse Problems, Oxford University Press, Oxford, 2008.",
287
+ "author": "A Kirsch and N Grinberg.",
288
+ "venue": null,
289
+ "url": null
290
+ }
291
+ },
292
+ {
293
+ "6": {
294
+ "title": "Inverse Probl. Imaging 15(4), 745\u2013762, 2021.",
295
+ "author": "S Meng.\nA sampling type method in an electromagnetic waveguide.",
296
+ "venue": null,
297
+ "url": null
298
+ }
299
+ },
300
+ {
301
+ "7": {
302
+ "title": "The Mathematics of Computerized Tomography, SIAM, 2001.",
303
+ "author": "F Natterer.",
304
+ "venue": null,
305
+ "url": null
306
+ }
307
+ },
308
+ {
309
+ "8": {
310
+ "title": "Numerical Mathematics, Springer, 2000.",
311
+ "author": "A Quarteroni, R Sacco and F Saleri.",
312
+ "venue": null,
313
+ "url": null
314
+ }
315
+ }
316
+ ],
317
+ "url": "http://arxiv.org/html/2306.16199v2"
318
+ }
20240119/2307.08078v2.json ADDED
@@ -0,0 +1,459 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Efficient numerical method for multi-term time-fractional diffusion equations with Caputo-Fabrizio derivatives^*",
3
+ "abstract": "In this paper, we consider a numerical method for the multi-term Caputo-Fabrizio time-fractional diffusion equations (with orders , ).\nThe proposed method employs a fast finite difference scheme to approximate multi-term fractional derivatives in time, requiring only storage and computational complexity, where denotes the total number of time steps. Then we use a Legendre\nspectral collocation method for spatial discretization. The stability and convergence of the scheme have been thoroughly discussed and rigorously established.\nWe demonstrate that the proposed scheme is unconditionally stable and convergent with an order of , where , , and represent the timestep size, polynomial degree, and regularity in the spatial variable of the exact solution, respectively.\nNumerical results are presented to validate the theoretical predictions.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "1. Introduction",
9
+ "text": "Fractional differential equations have wide applications in various fields of science, including physics, economics, engineering, chemistry, biology, and others [11 ###reference_11###, 27 ###reference_27###, 28 ###reference_28###, 29 ###reference_29###, 42 ###reference_42###]. There are many definitions of fractional derivatives; the most commonly used are the Riemann-Liouville fractional derivative and the Caputo fractional derivative [31 ###reference_31###, 18 ###reference_18###]. However, both of these operators still present challenges in practical applications.\nTo be more precise, the Riemann-Liouville derivative of a constant is non-zero, and the Laplace transform of this derivative contains terms that lack physical significance. The Caputo fractional derivative successfully addresses both issues; however, its definition involves a singular kernel, which poses challenges in analysis and computation. Caputo and Fabrizio [8 ###reference_8###] proposed a novel definition of the fractional derivative with a smooth kernel, referred to as the Caputo-Fabrizio (CF) derivative, which admits distinct representations for the temporal and spatial variables. 
The representation in the time variable\nis well suited to the Laplace transform, and the spatial representation is more convenient for the Fourier transform.\nAlthough there is ongoing debate regarding the mathematical properties of fractional derivatives with non-singular kernels [20 ###reference_20###, 34 ###reference_34###], numerous scholars remain interested in studying differential equations involving such derivatives due to their good performance in various applications.\nThe CF derivative offers two primary advantages: 1) The utilization of a regular kernel in non-local systems is motivated by its potential to accurately depict material heterogeneities and fluctuations at various scales, which cannot be adequately captured by classical local theories or by fractional models with singular kernels, see, e.g., [8 ###reference_8###, 9 ###reference_9###];\n2) CF derivatives have numerical advantages. As is well known, the truncation error of numerical approximations of fractional operators with singular kernels typically depends on the order . For instance, in the case of the Caputo fractional derivative, the classical L1 discretization yields an error of order , which becomes highly unfavorable when . Enhancing the accuracy through higher-order methods increases the computational complexity, particularly for high-dimensional problems. However, within the same approximation framework, the CF derivative admits a higher-order truncation error, see Remark 2.2 ###reference_em2###.\nFurther properties and diverse applications of this fractional derivative can be found in various references, such as [9 ###reference_9###, 14 ###reference_14###, 6 ###reference_6###, 5 ###reference_5###, 30 ###reference_30###, 36 ###reference_36###, 16 ###reference_16###].\nLet , . 
In this paper, we are concerned with the numerical approximation of the multi-term time-fractional diffusion equation\nwith the initial conditions\nand the boundary condition\nwhere\n, and . and are given sufficiently smooth functions in their respective domains.\nIn addition, is the Caputo-Fabrizio derivative\noperator of order [8 ###reference_8###, 9 ###reference_9###] defined as\nIf , then - reduces to the single-term time-fractional diffusion equation.\nThe model - describes the temporal flow of water within a leaky aquifer at various scales [5 ###reference_5###, 12 ###reference_12###], as well as the electro-magneto-hydrodynamic flow of non-Newtonian biofluids with heat transfer [1 ###reference_1###]. For the well-posedness of -, we refer to, e.g., [3 ###reference_3###, 4 ###reference_4###, 36 ###reference_36###].\nMany researchers have explored the numerical approximation of both single-term and multi-term time-fractional diffusion equations.\nIn [24 ###reference_24###], Liu et al. proposed a finite difference method for solving time-fractional diffusion equations in both space and time domains.\nLin and Xu [23 ###reference_23###] utilized a finite difference scheme in time and Legendre spectral methods in space to numerically solve the time-fractional diffusion equations. Subsequently, Li and Xu [22 ###reference_22###] improved upon their previous work by proposing a space-time spectral method for these equations.\nFor the numerical treatment of multi-term time-fractional diffusion equations, [33 ###reference_33###] proposed fully discrete schemes for one- and two-dimensional multi-term time-fractional sub-diffusion equations. These schemes combine the compact difference method for spatial discretization with the L1 approximation for time discretization.\nThe Galerkin finite element method and the spectral method were introduced in [19 ###reference_19###] and [40 ###reference_40###, 13 ###reference_13###], respectively.\nZhao et al. 
[39 ###reference_39###] developed a fully discrete scheme for a class of two-dimensional multi-term time-fractional diffusion equations with Caputo fractional derivatives, utilizing the finite element method in the spatial direction and the classical L1 approximation in the temporal direction.\nAkman et al. [2 ###reference_2###] proposed a numerical approximation called the L1-2 formula for the Caputo-Fabrizio derivative using quadratic interpolation.\nIn [37 ###reference_37###], finite difference/spectral approximations for solving the two-dimensional time CF fractional diffusion equation were proposed and analyzed. Later, a second-order scheme [10 ###reference_10###] was devised for addressing this problem.\nA compact alternating direction implicit (ADI) difference scheme was proposed in [35 ###reference_35###] for solving the two-dimensional time-fractional diffusion equation.\nSimulating models with fractional derivatives presents a challenge due to their non-locality, which significantly impedes algorithm efficiency and necessitates greater memory storage compared to traditional local models.\nIn particular, for fractional models, the computational complexity of obtaining an approximate solution is and the required memory storage is , which contrasts with local models that have a complexity of and require a memory storage of , where denotes the total number of time steps, see, e.g., [23 ###reference_23###, 37 ###reference_37###, 10 ###reference_10###].\nTo address this issue, several researchers have proposed efficient algorithms for computing the Riemann-Liouville, Caputo, and Riesz fractional derivatives, see, e.g., [17 ###reference_17###, 38 ###reference_38###, 21 ###reference_21###, 41 ###reference_41###] and the references therein.\nRecently, a fast compact finite difference method for quasi-linear time-fractional parabolic equations was presented and analyzed in [25 ###reference_25###]. 
Then, [26 ###reference_26###] proposed a fast second-order numerical scheme for approximating the Caputo-Fabrizio fractional derivative at node with computational complexity of and memory storage of .\nInspired by the aforementioned works, we extend finite difference/spectral approximations to the multi-term Caputo-Fabrizio time-fractional diffusion equation (1.1 ###reference_###)-(1.3 ###reference_###).\nFirstly, we present an L1 formula for the Caputo-Fabrizio derivative. In this context, we introduce two discrete fractional differential operators, namely and , which are essentially equivalent.\nHowever, effectively utilizes the exponential kernel and incurs lower storage and computational costs compared to .\nThe idea of this approach is essentially identical to that of reference [26 ###reference_26###], albeit with a slightly different formulation in our case; specifically, the approximation is centered at point and presented in a more concise manner. The error bounds associated with these two operators will be examined in detail.\nSecondly, we develop a semi-discrete scheme based on the finite difference method for multi-term time-fractional derivatives, with complete proofs of its unconditional stability and convergence rate.\nA detailed error analysis is carried out for the semi-discrete problem, showing that the temporal accuracy is second order.\nFinally, we present the fully discrete scheme based on the Legendre spectral collocation method for spatial discretization. We will investigate both the convergence order of this method and its implementation efficiency, while providing a rigorous proof of its spectral convergence.\nThe rest of this paper is organized as follows. In Section 2, a semi-discrete scheme is proposed for (1.1 ###reference_###)-(1.3 ###reference_###) based on the fast L1 finite difference scheme. The stability and convergence analysis of the semi-discrete scheme is presented. 
In Section 3,\nwe construct a Legendre spectral collocation method for the spatial discretization of the semi-discrete scheme.\nError estimates are provided for the full discrete problem. Some numerical results are reported in Section 4. Finally, the conclusions are given in Section 5."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Semi-discretization",
+ "text": "Define , , where is the time step."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Fast L1 formula for Caputo-Fabrizio derivative",
+ "text": "We first give L1 approximation for fractional Caputo-Fabrizio derivative of function defined by\nIn order to simplify the notations, we denote for . The L1 formula is obtained by substituting the linear Lagrange interpolation of into (2.1 ###reference_###). Precisely, the linear approximation of the function on is written as\nand the error in the approximation is\nThen we define the discrete fractional differential operator by\nwhere and\nThe right hand side of (2.4 ###reference_###) involves a sum of all previous solutions , which reflects the memory effect of the non-local fractional derivative. Thus it requires on average storage and the total computational cost is with the total number of time steps. This makes both the computation and memory expensive, specially in the case of\nlong time integration. In order to overcome this difficulty, we propose a further approach to the fractional derivative.\nThe idea consists in first splitting the convolution integral in (2.1 ###reference_###) into a sum of history part and local part as follows:\nNote that a comparable treatment is employed in reference [17 ###reference_17###].\nThen the history part can be rewritten as\nhence we have\nUsing the simple recurrence relation (2.5 ###reference_###), we define the discrete fractional differential operator by\nIt is not difficult to see that for . Comparing in (2.4 ###reference_###) with in (2.6 ###reference_###)-(2.7 ###reference_###), the former requires all the previous time step values of while the latter only needs , and . This implies that approximating by considerably\nreduces the storage and computational costs as compared to . Roughly speaking, replacing by allows to reduce the storage cost\nfrom to , and the computational cost from to .\nThe fast algorithm of Caputo derivative in [17 ###reference_17###] should be noted to retain an additional truncation error , whereas the fast algorithm of CF derivative does not introduce this error. 
Furthermore, it is worth mentioning that other algorithms, such as parallel computational methods [15 ###reference_15###], result in increased spatial complexity.\nThe following lemma provides an error bound for the approximation .\nSuppose that . For any , let\nThen\nProof. We prove the following estimate by mathematical induction:\nFirst we have\nwhere . Therefore, (2.8 ###reference_###) holds for . Now suppose that (2.8 ###reference_###) holds for ; we need to prove that it also holds for . Similar to the proof of , we can easily get that\nBy combining (2.5 ###reference_###) and (2.7 ###reference_###), we obtain\nThe estimate (2.8 ###reference_###) is proved. Hence\nwhich proves the conclusion of the lemma.\nThe second-order convergence rate of the L1 formula has been proven in [2 ###reference_2###] by different methods; we obtain identical results here. Note that the convergence rate of the L1 formula for the classical Caputo fractional derivative of order is ; this result seems reasonable, since the Caputo-Fabrizio derivative has a smooth kernel."
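The exponential-kernel recurrence behind the fast operator can be sketched in a few lines. The following is a minimal Python illustration (ours, not the authors' code): it assumes the normalization M(α) = 1, so the kernel is exp(-α(t-s)/(1-α))/(1-α), uniform time steps, and linear interpolation of f on each subinterval; the history integral then obeys a one-term recurrence, giving O(1) memory and O(1) work per step.

```python
import math

def fast_cf_derivative(f_vals, dt, alpha):
    """Approximate the Caputo-Fabrizio derivative of order alpha at every
    level t_n = n*dt from the samples f_vals[n] = f(t_n).

    With lam = alpha/(1-alpha), the history integral satisfies
        I_n = e^{-lam*dt} * I_{n-1} + (f_n - f_{n-1}) * (1 - e^{-lam*dt})/(lam*dt),
    so only the previous history value is stored: O(1) memory, O(1) work/step.
    """
    lam = alpha / (1.0 - alpha)
    e = math.exp(-lam * dt)
    w = (1.0 - e) / (lam * dt)   # exact kernel integral over one subinterval
    hist = 0.0
    out = []
    for n in range(1, len(f_vals)):
        hist = e * hist + (f_vals[n] - f_vals[n - 1]) * w
        out.append(hist / (1.0 - alpha))
    return out   # out[n-1] approximates the derivative at t_n
```

For f(t) = t the piecewise-linear interpolant of f is exact, so the recurrence reproduces the closed form (1 - e^{-λt})/α up to rounding; for smoother f the only error comes from the linear interpolation of f', which is why the truncation error is second order in the step size regardless of α.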
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Discretization in time",
+ "text": "We denote and . Then from (2.6 ###reference_###)-(2.7 ###reference_###) and Lemma 2.1 ###reference_em1###, the time fractional derivative (1.5 ###reference_###) at can be approximated by\nwhere\nThen Eq.(1.1 ###reference_###) can be rewritten as\nwhere\nwith for . Notice that\nwe denote\nthe above equations are recast into\nLet be the approximation for , and . Then the semi-discrete problem of Eq. (1.1 ###reference_###) can be written as\nwhere\nMoreover, by utilizing relation\nwe can easily derive an alternative formulation of (2.15 ###reference_###)-(2.18 ###reference_###) as follows\nwhere\nSince , (2.19 ###reference_###)-(2.22 ###reference_###) can be also obtained by using in Eq.(1.1 ###reference_###).\nIt is noteworthy that equations (2.15 ###reference_###)-(2.18 ###reference_###) offer computational advantages over equations (2.19 ###reference_###)-(2.22 ###reference_###). This is primarily attributed to the straightforward recurrence relation presented in equations (2.6 ###reference_###) and (2.7 ###reference_###). However, (2.19 ###reference_###)-(2.22 ###reference_###) is more appropriate for our analysis than (2.15 ###reference_###)-(2.18 ###reference_###), hence it play a crucial role in the subsequent sections.\nLet be defined by (2.12 ###reference_###), then there exists a constant such that\nProof. Without loss of generality, we assume that . By the definition of and the inequalities of (2.10 ###reference_###), we have\nOn the other hand, since\nfor , we get\nThis implies that\nTherefore, there exists a constant such that\nwhich prove the conclusion of the lemma.\nLet the coefficients be defined by (2.9 ###reference_###), then for every ,\nProof. can be easily obtained by the definition of and the monotone property of the function . 
Finally, note that\nUsing the above equalities and the fact completes the proof of the lemma.\n(2.26 ###reference_###) gives an easy way to compute all the coefficients .\nLet the coefficients be defined by (2.23 ###reference_###), then\nProof. By Lemma 2.2 ###reference_em2### and the definition of , we can readily arrive at these conclusions."
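Since each fractional order carries its own history variable, the multi-term discrete operator is simply a coefficient-weighted sum of single-term fast operators. A self-contained sketch (M(α) = 1 normalization, uniform time steps, linear interpolation per subinterval; all names are ours, not the paper's):

```python
import math

def multiterm_cf(f_vals, dt, alphas, coeffs):
    """Coefficient-weighted sum of fast Caputo-Fabrizio derivatives:
    one scalar history variable per term, O(len(alphas)) memory in total."""
    lams = [a / (1.0 - a) for a in alphas]
    es = [math.exp(-lam * dt) for lam in lams]
    ws = [(1.0 - e) / (lam * dt) for e, lam in zip(es, lams)]
    hist = [0.0] * len(alphas)
    out = []
    for n in range(1, len(f_vals)):
        df = f_vals[n] - f_vals[n - 1]
        val = 0.0
        for r, (a, c) in enumerate(zip(alphas, coeffs)):
            hist[r] = es[r] * hist[r] + df * ws[r]
            val += c * hist[r] / (1.0 - a)
        out.append(val)
    return out
```

The per-step cost grows only with the number of terms, not with the number of elapsed time steps, which is the point of the recurrence-based formulation.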
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "2.3. Stability and convergence analysis of the semi-discrete scheme",
+ "text": "To discuss the stability and convergence of the semi-discrete scheme, we introduce functional spaces equipped with standard norms and inner products that will be utilized subsequently. Let is the space of measurable functions whose square is Lebesgue integrable in . Then\nThe inner products of and are defined, respectively, by\nand the corresponding norms by\nThe norm of the space is defined by\nIn this paper, instead of using the above standard -norm, we prefer to define by\nIt is widely acknowledged that the standard -norm and the norm defined by (2.27 ###reference_###) are equivalent; therefore, we will adopt the latter in subsequent discussions.\nThe variational (weak) formulation of the Eqs.(2.15 ###reference_###) and (2.16 ###reference_###)/(2.20 ###reference_###), subject to the boundary condition (2.18 ###reference_###), can be expressed as finding such that for\nwhere .\nFor the semi-discretized problem (2.28 ###reference_###)-(2.29 ###reference_###), we can establish a stability result as follows.\nThe semi-discretized problem (2.28 ###reference_###)-(2.29 ###reference_###) is unconditionally stable in the sense that for all , it holds\nProof. By mathematical induction. First of all, when , we have\nNotice that , taking and using the Cauchy-Schwarz inequality, we obtain immediately\nNow, suppose\nTaking in (2.29 ###reference_###) gives\nHence, by using (2.30 ###reference_###) and Lemma 2.3 ###reference_em3###, we have\nThus, the proof is completed.\nIn the proof of the following theorem, we will demonstrate that is bounded. As shown in equation (2.25 ###reference_###), . 
Therefore, it follows that is also bounded.\nWe now conduct an error analysis for the solution of the semi-discretized problem.\nAssume that and .\nLet be the exact solution of (1.1 ###reference_###)-(1.3 ###reference_###), be the solution of the semi-discretized problem (2.28 ###reference_###)-(2.29 ###reference_###) with the initial condition , then the following error estimates hold:\nwhere the constant is defined in (2.24 ###reference_###) and is the length of .\nProof. We shall establish the following estimate by mathematical induction:\nLet , . By combining (2.13 ###reference_###) and (2.28 ###reference_###), the error satisfies\nTaking yields . This, in conjunction with (2.24 ###reference_###), gives\nTherefore, (2.32 ###reference_###) holds for . Assuming that (2.32 ###reference_###) holds for all , it is necessary to demonstrate its validity for . By combining (2.13 ###reference_###), (2.14 ###reference_###) and (2.29 ###reference_###), for , we have\nLet in the above equation, then\nUsing the induction assumption and Lemma 2.3 ###reference_em3###, we derive\nNext we show that is bounded. Considering that the function is decreasing on , we have\nby combining equation (2.26 ###reference_###).\nTherefore,\nConsequently, we obtain, for all such that ,"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. Full discretization",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "3.1. A shifted Legendre collocation method in space",
+ "text": "We shall begin by providing a comprehensive overview of fundamental definitions and properties pertaining to Legendre Gauss-type quadratures.\nLet denote the space of algebraic polynomials of degree less than or\nequal to with respect to variable , and be the Legendre polynomial\nof degree on the interval . Then the discrete space, denoted by .\nLet be the -orthogonal projection operator from into , associated to the norm defined in (2.27 ###reference_###), that is, for all , define , such that, ,\nFrom [7 ###reference_7###], the following estimate of projection holds:\nDefine the Legendre-Gauss-Lobatto nodes and weights as and , , ,\nwhere are the zeroes of , and\nMoreover, the following quadrature holds\nThe discrete inner product and norm defined as follow, for any continuous functions ,\nFrom [32 ###reference_32###], the discrete norm is equivalent to the standard -norm in . If we denote \nand as the nodes and\nweights of shifted Legendre-Gauss-Lobatto quadratures on , then one can easily show that\nThus, we define the discrete inner product and norm on as follows\nIt is not difficult to obtain that\nand\nWe introduce the operator of interpolation at the shifted Legendre-Gauss-Lobatto nodes, denoted by , i.e., , , such that\nThe interpolation error estimate (see [7 ###reference_7###]) is\nNow consider the spectral discretization to the problem - as follows: find , such that\nwhere\nFor given, the well-posedness of the problem (3.7 ###reference_###) is guaranteed by the well-known Lax-Milgram Lemma."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "3.2. Convergence analysis of the full discretization scheme",
+ "text": "To simplify matters, we present the semi-discretized problem (2.28 ###reference_###)-(2.29 ###reference_###) in a compact form: find , such that\nwhere\nWe denote by the norm associated to the bilinear form :\nIt follows from (3.3 ###reference_###) that for all the\ndiscrete norm is equivalent to the norm defined in (2.27 ###reference_###).\nAssuming and .\nLet is the solution of the problem (3.7 ###reference_###) with the initial condition taken to be , the solution of the semi-discretized problem (2.28 ###reference_###)-(2.29 ###reference_###). Suppose that with , for ,\nthen there exists a constant such that\nProof. For any , denote . It is direct to check that\nBy virtue of (3.4 ###reference_###) gives\nhence\nFor the last term, by definition, we have\nand\nIt is known that the following result holds (see e.g. [32 ###reference_32###, 7 ###reference_7###]): , , ,\nThus for , , we have\nApplying the above results to (3.9 ###reference_###) and (3.10 ###reference_###), we obtain\nand\nLet , using (3.8 ###reference_###) and the norm equivalence, for , we have\nand\nBy triangular inequality\nfor , we obtain\nand\nThe above estimate specially holds for , which implies\nand\nSimilar to the proof of Theorem 2.2 ###reference_hm2###, we can immediately get the following conclusions:\nNotice that\nand the boundedness of and , then there exists a constant such that"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "4. Numerical validation",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "4.1. Implementation",
+ "text": "We provide a comprehensive account of the implementation of problem (3.7 ###reference_###) using the shifted Legendre collocation method.\nConsidering problem (3.7 ###reference_###), we express the function in terms of the Lagrangian interpolants based on the shifted Legendre-Gauss-Lobatto points , i.e.,\nwhere , unknowns of the discrete solution. is the Lagrangian polynomials defined in , which satisfies\nwhere is the Kronecker symbols. Taking (4.5 ###reference_###) into (3.7 ###reference_###), and notice that the homogeneous Dirichlet boundary condition (1.3 ###reference_###), then choosing each test function to be , we have\nDefine the matrices\nThen, we obtain the matrix representation of the above equation in the following form:\nThe linear system (4.2 ###reference_###) can be solved in particular by the LU factorization or other related computational techniques.\nFinally, we discuss about the calculation of . When , the initial condition taken to be\nIt is not difficult to see that\nwhich implies that satisfies interpolation condition (3.5 ###reference_###). When , suppose that\nthen\nFurthermore,\nIn a word, we can easily obtain at each iteration of time-step."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "4.2. Numerical results",
+ "text": "We present a series of numerical results to validate our theoretical propositions.\nFirstly, to investigate the computational performance of two discrete fractional differential operators and , we test three examples from [2 ###reference_2###]. Denote .\nConsider the function , the Caputo-Fabrizio fractional derivative of\norder with of is written as\nConsider the function , the Caputo-Fabrizio fractional derivative of\norder with of is written as\nConsider the function , the Caputo-Fabrizio fractional derivative of\norder with of is written as\nThe proofs based on the method of integration by parts can be found in [8 ###reference_8###] and [2 ###reference_2###].\nWe choose in Example 4.1 ###reference_xamp1### and in Example 4.2 ###reference_xamp2### and Example 4.3 ###reference_xamp3###, and set , . Define the errors\nfor and operators, respectively, where is the last time step. Tables 1 ###reference_###\u20133 ###reference_### give the\nnumerical results of approximation error and CPU time with three examples. Here CPU time represents the total computation time, that is, the whole time for computing the approximations of Caputo-Fabrizio fractional derivatives at every time step. 
The convergence rates in the tables are given by\nTables 1 ###reference_###\u20133 ###reference_### demonstrate that the errors of the and approximations are virtually identical, as a result of their equivalence ().\nMoreover, both approximations have achieved second-order convergence of the error, as stated in Lemma 2.1 ###reference_em1###.\nHowever, we observe that the CPU time of the approximation increases linearly with respect to , while that of the approximation increases almost quadratically.\nThis suggests that the operator holds promise, as it requires less storage and incurs lower computational costs than the operator during computation.\nSecondly, we provide preliminary computational findings to demonstrate the efficacy of the finite difference/shifted Legendre collocation method (abbreviated as FCM).\nConsider the following three-term time-fractional diffusion equations:\nwhere , , and\nwith , . The exact solution of Eq.(4.3 ###reference_###) is , which is sufficiently smooth. In our experiments, we set the parameters .\nThe following error norms have been used as the error indicator:\nWe test Example 4.4 ###reference_xamp4### with four cases: Case 1. , then (4.3 ###reference_###) reduces to the single-term time-fractional diffusion equation; Case 2. , , ; Case 3. , , ; Case 4. , , . In the time discretization, we use the operator. Table 4 ###reference_### shows the errors and temporal accuracy of the FCM with polynomial degree at for different cases of . Here the convergence rates are given by\nIt can be observed that the FCM exhibits a second-order temporal convergence rate, , which is consistent with our theoretical analysis.\nNext, we check the spatial accuracy with respect to the polynomial degree . In order to avoid the contamination of the temporal error, we need to\nfix the time step sufficiently small. Here we take , and terminate the computation at to save time.\nFig. 4.1 ###reference_### shows the errors with respect to the polynomial degree at in semi-log scale. 
Evidently, the spatial discretization exhibits exponential convergence, as demonstrated by the nearly linear curves depicted in this figure. This is the expected spectral accuracy, since the exact solution is a sufficiently smooth function with respect to the space variable.\n\n###figure_1### \n###figure_2### To further verify the numerical validity, we finally test a two-dimensional problem.\nConsider the following three-term time-fractional diffusion equations:\nwhere , , and\nThe exact solution of Eq. (4.4 ###reference_###) is , which is sufficiently smooth. In our experiments, we set the parameters .\nIn this case, we denote and as the nodes and weights of the shifted Legendre-Gauss-Lobatto quadratures on . Then we express the function in terms of the two-dimensional Lagrange interpolants based on the shifted Legendre-Gauss-Lobatto points ,\nwhere denotes the unknowns of the discrete solution, and and are the Lagrange polynomials defined on and , i.e.,\nwhere and are the Kronecker symbols. A linear system such as (4.2 ###reference_###) can be readily derived.\nHere we take . Fig. 4.2 ###reference_### shows the errors with respect to the polynomial degree in semi-log scale.\nThanks to the fast scheme (2.7 ###reference_###), a small time step does not significantly escalate the computational burden in the time direction,\nthereby making the proposed method effective even for high-dimensional problems.\n\n###figure_3### \n###figure_4###"
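The observed orders reported in the tables follow the usual formula rate = log2(err(Δt)/err(Δt/2)) for successively halved steps; a tiny helper (ours) makes this explicit:

```python
import math

def observed_rates(errors):
    """Observed convergence orders from errors measured at successively
    halved time steps: rate_k = log2(e_k / e_{k+1})."""
    return [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
```

Feeding it the error column of a table computed with halved time steps should return values close to 2 for the scheme analyzed here.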
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "5. Concluding remarks",
+ "text": "In this work, we have developed a fully discrete scheme for the multi-term time-fractional diffusion\nequations with Caputo-Fabrizio derivatives. The proposed approach utilizes the finite difference method to approximate multi-term fractional derivatives in time and employs the Legendre spectral collocation method for spatial discretization.\nSpecifically, we use the exponential property of Caputo-Fabrizio derivative to give a recursive difference calculation scheme, which offers benefits in terms of computational complexity and storage capacity.\nThe proposed scheme has been proved to be unconditionally stable and convergent with order . Numerical results show good agreement with the theoretical analysis. Due to its high resolution feature in spectral approximation, the proposed method can be extended to handle multi-term time-fractional diffusion equations in higher spatial dimensions."
+ },
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>Comparisons of with for Example <a class=\"ltx_ref\" href=\"#S4.Thmexamp1\" title=\"Example 4.1. \u2023 4.2. Numerical results \u2023 4. Numerical validation \u2023 Efficient numerical method for multi-term time-fractional diffusion equations with Caputo-Fabrizio derivatives^*\"><span class=\"ltx_text ltx_ref_tag\">4.1</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.58\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.13.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.6.2.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.7.3.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.8.4.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.9.5.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.10.6.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.11.7.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.12.8.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.13.9.9\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.22.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.14.10.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.15.11.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.16.12.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.17.13.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.18.14.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.19.15.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.20.16.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.21.17.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.22.18.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.31.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.23.19.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.24.20.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.25.21.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.26.22.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.27.23.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.28.24.6\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.29.25.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.30.26.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.31.27.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.40.36\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.32.28.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.33.29.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.34.30.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.35.31.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.36.32.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.37.33.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.38.34.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.39.35.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.40.36.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.49.45\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.41.37.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.42.38.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.43.39.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.44.40.4\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.45.41.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.46.42.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.47.43.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.48.44.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.49.45.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.58.54\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.50.46.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.51.47.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.52.48.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.53.49.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.54.50.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.55.51.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.56.52.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.57.53.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.58.54.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1. Comparisons of with for Example 4.1."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>Comparisons of with for Example <a class=\"ltx_ref\" href=\"#S4.Thmexamp2\" title=\"Example 4.2. \u2023 4.2. Numerical results \u2023 4. Numerical validation \u2023 Efficient numerical method for multi-term time-fractional diffusion equations with Caputo-Fabrizio derivatives^*\"><span class=\"ltx_text ltx_ref_tag\">4.2</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.58\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.13.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.5.1.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.6.2.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.7.3.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.8.4.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.9.5.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.10.6.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.11.7.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.12.8.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.13.9.9\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.22.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.14.10.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.15.11.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.16.12.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.17.13.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.18.14.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.19.15.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.20.16.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.21.17.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.22.18.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.31.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.23.19.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.24.20.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.25.21.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.26.22.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.27.23.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.28.24.6\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.29.25.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.30.26.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.31.27.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.40.36\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.32.28.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.33.29.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.34.30.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.35.31.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.36.32.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.37.33.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.38.34.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.39.35.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.40.36.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.49.45\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.41.37.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.42.38.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.43.39.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.44.40.4\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.45.41.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.46.42.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.47.43.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.48.44.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.49.45.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.58.54\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.50.46.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.51.47.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.52.48.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.53.49.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.54.50.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.55.51.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.56.52.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.57.53.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T2.58.54.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 2. Comparisons of with for Example 4.2."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 3. </span>Comparisons of with for Example <a class=\"ltx_ref\" href=\"#S4.Thmexamp3\" title=\"Example 4.3. \u2023 4.2. Numerical results \u2023 4. Numerical validation \u2023 Efficient numerical method for multi-term time-fractional diffusion equations with Caputo-Fabrizio derivatives^*\"><span class=\"ltx_text ltx_ref_tag\">4.3</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.58\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.13.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.5.1.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.6.2.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.7.3.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.8.4.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.9.5.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.10.6.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.11.7.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.12.8.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.13.9.9\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.22.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.14.10.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.15.11.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.16.12.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.17.13.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.18.14.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.19.15.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.20.16.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.21.17.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.22.18.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.31.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.23.19.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.24.20.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.25.21.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.26.22.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.27.23.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.28.24.6\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.29.25.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.30.26.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.31.27.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.40.36\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.32.28.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.33.29.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.34.30.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.35.31.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.36.32.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.37.33.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.38.34.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.39.35.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.40.36.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.49.45\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.41.37.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.42.38.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.43.39.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.44.40.4\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.45.41.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.46.42.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.47.43.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.48.44.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.49.45.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.58.54\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.50.46.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.51.47.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.52.48.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.53.49.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.54.50.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.55.51.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.56.52.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.57.53.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T3.58.54.9\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 3. Comparisons of with for Example 4.3."
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 4. </span>Numerical convergence of FCM in the temporal direction for Example <a class=\"ltx_ref\" href=\"#S4.Thmexamp4\" title=\"Example 4.4. \u2023 4.2. Numerical results \u2023 4. Numerical validation \u2023 Efficient numerical method for multi-term time-fractional diffusion equations with Caputo-Fabrizio derivatives^*\"><span class=\"ltx_text ltx_ref_tag\">4.4</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T4.160\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T4.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.1.1.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.2.2.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.3.3.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.4.4.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.5.5.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.6.6.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.7.7.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T4.8.8.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" 
id=\"S4.T4.15.15\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.15.15.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.9.9.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.10.10.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.11.11.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.12.12.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.13.13.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.14.14.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.15.15.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.23.23\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.16.16.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.17.17.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.18.18.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.19.19.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.20.20.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.21.21.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.22.22.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.23.23.8\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.31.31\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.24.24.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.25.25.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.26.26.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.27.27.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.28.28.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.29.29.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.30.30.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.31.31.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.39.39\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.32.32.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.33.33.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.34.34.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.35.35.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.36.36.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.37.37.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.38.38.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.39.39.8\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.46.46\">\n<td class=\"ltx_td\" id=\"S4.T4.46.46.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.40.40.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.41.41.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.42.42.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.43.43.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.44.44.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.45.45.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.46.46.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.53.53\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.53.53.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.47.47.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.48.48.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.49.49.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.50.50.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.51.51.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.52.52.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left 
ltx_border_t\" id=\"S4.T4.53.53.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.61.61\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.54.54.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.55.55.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.56.56.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.57.57.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.58.58.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.59.59.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.60.60.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.61.61.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.69.69\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.62.62.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.63.63.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.64.64.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.65.65.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.66.66.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.67.67.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.68.68.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.69.69.8\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.77.77\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.70.70.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.71.71.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.72.72.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.73.73.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.74.74.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.75.75.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.76.76.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.77.77.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.84.84\">\n<td class=\"ltx_td\" id=\"S4.T4.84.84.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.78.78.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.79.79.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.80.80.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.81.81.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.82.82.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.83.83.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.84.84.7\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.91.91\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.91.91.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.85.85.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.86.86.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.87.87.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.88.88.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.89.89.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.90.90.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.91.91.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.99.99\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.92.92.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.93.93.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.94.94.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.95.95.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.96.96.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.97.97.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.98.98.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"S4.T4.99.99.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.107.107\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.100.100.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.101.101.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.102.102.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.103.103.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.104.104.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.105.105.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.106.106.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.107.107.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.115.115\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.108.108.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.109.109.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.110.110.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.111.111.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.112.112.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.113.113.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.114.114.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td 
class=\"ltx_td ltx_align_left\" id=\"S4.T4.115.115.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.122.122\">\n<td class=\"ltx_td\" id=\"S4.T4.122.122.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.116.116.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.117.117.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.118.118.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.119.119.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.120.120.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.121.121.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.122.122.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.129.129\">\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T4.129.129.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.123.123.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.124.124.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.125.125.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.126.126.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.127.127.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.128.128.6\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.129.129.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.137.137\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.130.130.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.131.131.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.132.132.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.133.133.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.134.134.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.135.135.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.136.136.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.137.137.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.145.145\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.138.138.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.139.139.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.140.140.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.141.141.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.142.142.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.143.143.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" 
id=\"S4.T4.144.144.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.145.145.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.153.153\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.146.146.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.147.147.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.148.148.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.149.149.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.150.150.5\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.151.151.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.152.152.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.153.153.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.160.160\">\n<td class=\"ltx_td ltx_border_b\" id=\"S4.T4.160.160.8\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.154.154.1\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.155.155.2\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.156.156.3\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.157.157.4\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.158.158.5\" 
style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.159.159.6\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T4.160.160.7\" style=\"padding-top:1.25pt;padding-bottom:1.25pt;\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
94
+ "capture": "Table 4. Numerical convergence of FCM in the temporal direction for Example 4.4."
95
+ }
96
+ },
97
+ "image_paths": {
98
+ "1(a)": {
99
+ "figure_path": "2307.08078v2_figure_1(a).png",
100
+ "caption": "Figure 4.1. Numerical convergence of FCM in the spatial direction for Example 4.4.",
101
+ "url": "http://arxiv.org/html/2307.08078v2/extracted/5334211/Fig1-1.jpg"
102
+ },
103
+ "1(b)": {
104
+ "figure_path": "2307.08078v2_figure_1(b).png",
105
+ "caption": "Figure 4.1. Numerical convergence of FCM in the spatial direction for Example 4.4.",
106
+ "url": "http://arxiv.org/html/2307.08078v2/extracted/5334211/Fig1-2.jpg"
107
+ },
108
+ "2(a)": {
109
+ "figure_path": "2307.08078v2_figure_2(a).png",
110
+ "caption": "Figure 4.2. Numerical convergence of FCM in the spatial direction for Example 4.5.",
111
+ "url": "http://arxiv.org/html/2307.08078v2/extracted/5334211/Fig2-1.jpg"
112
+ },
113
+ "2(b)": {
114
+ "figure_path": "2307.08078v2_figure_2(b).png",
115
+ "caption": "Figure 4.2. Numerical convergence of FCM in the spatial direction for Example 4.5.",
116
+ "url": "http://arxiv.org/html/2307.08078v2/extracted/5334211/Fig2-2.jpg"
117
+ }
118
+ },
119
+ "validation": true,
120
+ "references": [
121
+ {
122
+ "1": {
123
+ "title": "Phys. A.,",
124
+ "author": "M. Abdulhameed, D. Vieru, R. Roslan, Modeling electro-magneto-hydrodynamic thermo-fluidic transport of biofluids with new trend of fractional derivative without singular kernel,",
125
+ "venue": "484 (2018), 233\u2013252.",
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "2": {
131
+ "title": "Comput. Appl. Math.,",
132
+ "author": "T. Akman, B. Yldz, D. Baleanu, New discretization of Caputo\u2013Fabrizio derivative,",
133
+ "venue": "37 (2018), 3307\u20133333.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "3": {
139
+ "title": "Adv. Differential Equations.,",
140
+ "author": "M. Al-Refai, T. Abdeljawad, Analysis of the fractional diffusion equations with fractional derivative of non-singular kernel,",
141
+ "venue": "2017 (2017), 315.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "4": {
147
+ "title": "New Trends in Mathematical Sciences,",
148
+ "author": "N. Al-Salti, E. Karimov, S. Kerbal, Boundary-value problems for fractional heat equation involving Caputo-Fabrizio derivative,",
149
+ "venue": "4 (2016), 79\u201389.",
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "5": {
155
+ "title": "Appl. Math. Comput.,",
156
+ "author": "A. Atangana, On the new fractional derivative and application to nonlinear Fisher\u2019s reaction\u2013diffusion equation,",
157
+ "venue": "273 (2016), 948\u2013956.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "6": {
163
+ "title": "Arab. J. Geosciences.,",
164
+ "author": "A. Atangana, B.S. T Alkahtani, New model of groundwater flowing within a confine aquifer: application of Caputo-Fabrizio derivative,",
165
+ "venue": "6 (2016), 1\u20136.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "7": {
171
+ "title": "C. Bernardi, Y. Maday, Approximations spectrales de problemes aux limites elliptiques,",
172
+ "author": "",
173
+ "venue": "volume 142, Berlin: Springer Press, 1992.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "8": {
179
+ "title": "Progr. Fract. Differ. Appl.,",
180
+ "author": "M. Caputo, M. Fabrizio, A new definition of fractional derivative without singular kernel,",
181
+ "venue": "1(2) (2015), 73\u201385.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "9": {
187
+ "title": "Progr. Fract. Differ. Appl.,",
188
+ "author": "M. Caputo, M. Fabrizio, Applications of new time and spatial fractional derivatives with exponential kernels,",
189
+ "venue": "2(1) (2016), 1\u201311.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "10": {
195
+ "title": "Appl. Numer. Math.,",
196
+ "author": "J. Shi, M. Chen, A second-order accurate scheme for two-dimensional space fractional diffusion equations with time Caputo-Fabrizio fractional derivative,",
197
+ "venue": "151 (2020), 246\u2013262.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "11": {
203
+ "title": "Signal Process.,",
204
+ "author": "E. Cuesta, M. Kirane, S.A. Malik, Image structure preserving denoising using generalized fractional time integrals,",
205
+ "venue": "92(2) (2012), 553\u2013563.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "12": {
211
+ "title": "Numer. Methods Partial Differential Equations,",
212
+ "author": "J.D. Djida, A. Atangana, More generalized groundwater model with space-time Caputo\u2013Fabrizio fractional differentiation,",
213
+ "venue": "33(5) (2017), 1616\u20131627.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "13": {
219
+ "title": "Math. Sci.,",
220
+ "author": "M. Fardi, J. Alidousti, A legendre spectral-finite difference method for Caputo\u2013Fabrizio time-fractional distributed-order diffusion equation,",
221
+ "venue": "16(4) (2022), 417\u2013430.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "14": {
227
+ "title": "Phys. A.,",
228
+ "author": "J. G\u00f3mez-Aguilar, M. L\u00f3pez-L\u00f3pez, V. Alvarado-Mart\u00ednez,\nJ. Reyes-Reyes, M. Adam-Medina, Modeling diffusive transport with a fractional derivative without\nsingular kernel,",
229
+ "venue": "447 (2016), 467\u2013481.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "15": {
235
+ "title": "J. Comput. Phys.,",
236
+ "author": "X. Gu, S. Wu, A parallel-in-time iterative algorithm for Volterra partial integro-differential problems with weakly singular kernel,",
237
+ "venue": "417 (2020), 109576.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "16": {
243
+ "title": "Appl. Math. Lett.,",
244
+ "author": "J. Jia, H. Wang, Analysis of asymptotic behavior of the Caputo\u2013Fabrizio time-fractional diffusion equation,",
245
+ "venue": "136 (2023), 108447.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "17": {
251
+ "title": "Commun. Comput. Phys.,",
252
+ "author": "S. Jiang, J. Zhang, Q. Zhang, Z. Zhang, Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations,",
253
+ "venue": "21(3) (2017), 650\u2013678.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "18": {
259
+ "title": "B. Jin, Fractional Differential Equations: An Approach via Fractional Derivatives,",
260
+ "author": "",
261
+ "venue": "Springer Cham Press, 2021.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "19": {
267
+ "title": "J. Comput. Phys.,",
268
+ "author": "B. Jin, R. Lazarov, Y. Liu, Z. Zhou, The Galerkin finite element method for a multi-term time-fractional diffusion equation,",
269
+ "venue": "281 (2015), 825\u2013843.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "20": {
275
+ "title": "Fract. Calc. Appl. Anal.,",
276
+ "author": "K. Diethelm, R. Garrappa, A. Giusti, M. Stynes, Why fractional derivatives with nonsingular kernels should not be used,",
277
+ "venue": "23(3) (2020), 610\u2013634.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "21": {
283
+ "title": "J. Comput. Phys.,",
284
+ "author": "M. Li, X.M. Gu, C. Huang, M. Fei, G. Zhang, A fast linearized conservative finite element method for the strongly\ncoupled nonlinear fractional Schr\u00f6dinger equations,",
285
+ "venue": "358 (2018), 256\u2013282.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "22": {
291
+ "title": "SIAM J. Numer. Anal.,",
292
+ "author": "X. Li, C. Xu, A space-time spectral method for the time fractional diffusion equation,",
293
+ "venue": "47(3) (2009), 2108\u20132131.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "23": {
299
+ "title": "J. Comput. Phys.,",
300
+ "author": "Y. Lin, C. Xu, Finite difference/spectral approximations for the time-fractional diffusion equation,",
301
+ "venue": "225(2) (2007), 1533\u20131552.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "24": {
307
+ "title": "ANZIAM J.,",
308
+ "author": "F. Liu, S. Shen, V. Anh, I. Turner, Analysis of a discrete non-Markovian random walk approximation for\nthe time fractional diffusion equation,",
309
+ "venue": "46 (2004), C488\u2013C504.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "25": {
315
+ "title": "Int. J. Comput. Math.,",
316
+ "author": "H. Liu, A. Cheng, H. Yan, Z. Liu, H. Wang, A fast compact finite difference method for quasilinear time\nfractional parabolic equation without singular kernel,",
317
+ "venue": "96(7) (2019), 1444\u20131460.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "26": {
323
+ "title": "AIMS Math.,",
324
+ "author": "Y. Liu, E. Fan, B. Yin, H. Li, Fast algorithm based on the novel approximation formula for the\nCaputo-Fabrizio fractional derivative,",
325
+ "venue": "5(3) (2020), 1729\u20131744.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "27": {
331
+ "title": "Critical Reviews in Biomedical Engineering,",
332
+ "author": "R. Magin, Fractional calculus in bioengineering, part 1,",
333
+ "venue": "32(1) (2004), 104 pages.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "28": {
339
+ "title": "F. Mainardi, Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models,",
340
+ "author": "",
341
+ "venue": "World Scientific: Imperial College Press, 2010.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "29": {
347
+ "title": "Phys. Rep.,",
348
+ "author": "R. Metzler, J. Klafter, The random walk\u2019s guide to anomalous diffusion: a fractional dynamics\napproach,",
349
+ "venue": "339(1) (2000), 1\u201377.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "30": {
355
+ "title": "Comput. Math. Appl.,",
356
+ "author": "I.A. Mirza, D. Vieru, Fundamental solutions to advection\u2013diffusion equation with time-fractional Caputo\u2013Fabrizio derivative,",
357
+ "venue": "73(1) (2017), 1\u201310.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "31": {
363
+ "title": "I. Podlubny, Fractional Differential Equations:\nAn Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications,",
364
+ "author": "",
365
+ "venue": "Elsevier Science: Academic Press, 1998.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "32": {
371
+ "title": "A. Quarteroni, A. Valli, Numerical Approximation of Partial Differential Equations,",
372
+ "author": "",
373
+ "venue": "Springer Berlin: Heidelberg Press, 2009.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "33": {
379
+ "title": "East Asian J. Appl. Math.,",
380
+ "author": "J. Ren, Z. Sun, Efficient and stable numerical methods for multi-term time fractional sub-diffusion equations,",
381
+ "venue": "4(3) (2014), 242\u2013266.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "34": {
387
+ "title": "Fractal and Fractional,",
388
+ "author": "J. Sabatier, Fractional-order derivatives defined by continuous kernels: are they really too restrictive?,",
389
+ "venue": "4(3) (2020), 40.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "35": {
395
+ "title": "Filomat,",
396
+ "author": "M. Taghipour, H. Aminikhah, A new compact alternating direction implicit method for solving two\ndimensional time fractional diffusion equation with Caputo-Fabrizio\nderivative,",
397
+ "venue": "34(11) (2020), 3609\u20133626.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "36": {
403
+ "title": "J. Comput. Appl. Math.,",
404
+ "author": "N.H. Tuan, Y. Zhou, Well-posedness of an initial value problem for fractional diffusion\nequation with Caputo\u2013Fabrizio derivative,",
405
+ "venue": "375 (2020), 112811.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "37": {
411
+ "title": "arXiv,",
412
+ "author": "F. Yu, M. Chen, Finite difference/spectral approximations for the two-dimensional time Caputo-Fabrizio fractional diffusion equation,",
413
+ "venue": "(2019), 906.00328v1.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "38": {
419
+ "title": "J. Sci. Comput.,",
420
+ "author": "F. Zeng, I. Turner, K. Burrage, A stable fast time-stepping method for fractional integral and\nderivative operators,",
421
+ "venue": "77 (2018), 283\u2013307.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "39": {
427
+ "title": "Comput. Math. Appl.,",
428
+ "author": "Y. Zhao, Y. Zhang, F. Liu, I. Turner, Y. Tang, V. Anh, Convergence and superconvergence of a fully-discrete scheme for\nmulti-term time fractional diffusion equations,",
429
+ "venue": "73(6) (2017), 1087\u20131099.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "40": {
435
+ "title": "Appl. Math. Model.,",
436
+ "author": "M. Zheng, F. Liu, V. Anh, I. Turner, A high-order spectral method for the multi-term time-fractional\ndiffusion equations,",
437
+ "venue": "40(7-8) (2016), 4970\u20134985.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "41": {
443
+ "title": "SIAM J. Numer. Anal.,",
444
+ "author": "H. Zhu, C. Xu, A fast high order method for the time-fractional diffusion equation,",
445
+ "venue": "57(6) (2019), 2829\u20132849.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "42": {
451
+ "title": "Int. J. Comput. Math.,",
452
+ "author": "J. Zhou, X.M. Gu, Y.L. Zhao, H. Li, A fast compact difference scheme with unequal time-steps for the tempered time-fractional Black\u2013Scholes model,",
453
+ "venue": "(2023), in press.",
454
+ "url": null
455
+ }
456
+ }
457
+ ],
458
+ "url": "http://arxiv.org/html/2307.08078v2"
459
+ }
20240119/2307.10266v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2307.14995v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2307.15610v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2308.02202v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2308.03016v3.json ADDED
@@ -0,0 +1,169 @@
1
+ {
2
+ "title": "Shaping a Smarter Electromagnetic Landscape: IAB, NCR, and RIS in 5G Standard and Future 6G",
3
+ "abstract": "The main objective of 5G and beyond networks is to provide an optimal user experience in terms of throughput and reliability, irrespective of location and time. To achieve this, traditional fixed macro base station deployments are being replaced by more innovative and flexible solutions, such as wireless backhaul and relays. This article focuses on the evolution and standardization of these advancements, which are shaping the electromagnetic landscape. Specifically, we explore Integrated Access and Backhaul (IAB) nodes, which offer a cost-efficient and agile alternative to fiber backhaul. We also discuss Network-Controlled Repeaters (NCRs) and the emergence of Reconfigurable Intelligent Surfaces (RIS), which actively adapt the wireless environment. The article provides an overview of the 5G features and ongoing developments in 3GPP Release 18 related to these intelligent EM entities, highlighting the expected evolution of future wireless networks in terms of architecture, operations, and control signals.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The primary goal for 5G and beyond wireless networks is to satisfy user experience requirements for throughput and reliability, while simultaneously maintaining energy efficiency across all locations. Cellular networks traditionally relied on fixed macro base stations (BSs) for deployment. However, this approach may not offer sufficient coverage in densely populated urban areas or at higher frequencies. Network densification can address these coverage gaps, but creating new infrastructure from scratch is time-consuming and poses significant challenges due to cost, power sourcing, real estate permissions, regulatory approvals, and backhaul availability. In this context, wireless backhaul emerges as a feasible solution for enabling flexible and dense network deployments.\nWireless backhaul has been extensively studied, with the Long Term Evolution (LTE) relay playing a role in standardization efforts during LTE Release 10. However, due to the limited performance boost and the complexity of relay features, few operators adopted LTE relay. With the rise of dense 5G New Radio (NR) networks and advancements in beamforming technology, there is renewed interest in developing an Integrated Access and Backhaul (IAB) solution. This solution offers a faster and more cost-efficient alternative to fiber backhaul. Figure 1 ###reference_### illustrates the 3GPP\u2019s roadmap for IAB development from 5G to 5G-Advanced. Studies on IAB architectures and radio protocols began with a Study Item (SI) within 3GPP Release 15 (Rel-15). Release 16 (Rel-16), detailed in Technical Specification TS 38.401 [1 ###reference_1###], officially introduced a multi-hop NR-based IAB solution. Its objective was to use the existing 5G radio air interface for wireless backhaul while meeting specified electromagnetic (EM) compatibility requirements. 
Standardization efforts continued in Release 17 (Rel-17), with a focus on enhancing IAB network performance through topology adaptation, duplexing, and efficiency enhancements. Furthermore, Release 18 (Rel-18) introduced a Work Item (WI) to explore architectural and system-level enhancements for 5G networks with mobile BS relays mounted on vehicles, termed Vehicle Mounted Relay (VMR).\nApart from IAB, utilizing the decode-and-forward operation, employing amplify-and-forward RF repeaters is a simpler solution for improving network coverage. RF repeaters have been widely deployed in commercial 2G, 3G, and 4G networks. As 5G NR technology migrates to higher frequencies, signal propagation conditions may deteriorate, necessitating the critical implementation of RF repeaters. To accommodate this increasing demand, 3GPP RAN4 introduced specifications for RF repeaters in Rel-17 [2 ###reference_2###]. These specifications lay out the RF and Electromagnetic Compatibility requirements for FR1 and FR2, ensuring compatibility in typical commercial environments.111In 3GPP, Frequency Range 1 (FR1) includes 410-7125 MHz, and Frequency Range 2 (FR2) includes 24.25-52.6 GHz.\nAlthough RF repeaters are cost-effective for expanding network coverage, their ability to support network performance-enhancing features such as adaptive spatial beamforming, dynamic gain and power adjustments, flexible downlink (DL)/uplink (UL) configurations, and ON-OFF stages is limited. To address these limitations, 3GPP initiated a SI on Network-Controlled Repeaters (NCRs) in Rel-18. NCRs inherit the amplify-and-forward operation of RF repeaters but also receive side control information from the 5G gNB to function more efficiently. This SI was finalized in August 2022 with TR38.867 [3 ###reference_3###], where RAN1 pinpointed the essential side control information necessary for NCR functionality. 
This was followed by the NCR WI [4 ###reference_4###], detailing side control information for beamforming, UL-DL Time Division Duplex (TDD) operations, and ON-OFF protocols. Figure 1 ###reference_### provides an overview of 3GPP\u2019s NCR roadmap.\nReconfigurable Intelligent Surfaces (RIS) are an emerging technology that leverages reconfigurable surface technology to adapt to the propagation environment. Primarily using passive components and eliminating the need for expensive active components like power amplifiers, RIS provides benefits such as lower hardware costs, reduced energy consumption, and versatile deployment options on various structures like walls, buildings, and lamp posts. Compared to IAB and NCR, RIS stands out due to its cost-effectiveness and adaptability.\nAs for standardization, RIS has not yet been studied in 3GPP. While some companies proposed including RIS as a SI in 3GPP for Rel-18, most considered it premature and suggested exploring it for 6G technology instead. Consequently, the proposal was not approved for Rel-18. To address standardization, the ETSI Industry Specification Group (ISG) for RIS was established in September 2021. It serves as the pre-standardizing group for RIS, aiming to define use cases, deployment scenarios, and requirements to establish global standardization. Their goal is to enable dynamic control over radio signals, effectively transforming the wireless environment into an adaptable service. The group\u2019s progress includes the release of its first Group Report (ETSI GR RIS-001 [5 ###reference_5###]) in April 2023, which identifies pertinent use cases for RIS, followed by ETSI GR RIS-003 [6 ###reference_6###] in June 2023, discussing communication and channel models, and ETSI GR RIS-002 [7 ###reference_7###] in August 2023, which delves into technological challenges, architectural considerations, and their implications for standardization. 
Figure 1 ###reference_### showcases ETSI\u2019s progress on RIS.\nWith the standardization of IAB, NCR, and RIS, future networks are expected to incorporate a variety of network nodes to optimize performance, reduce costs, and minimize power consumption [8 ###reference_8###]. 3GPP is currently advancing into the second phase of 5G standardization, known as 5G-Advanced, which builds upon the foundational 5G baseline established in 3GPP Releases 15, 16, and 17. The 3GPP Release 19 RAN workshop (Rel-19 WS), held on June 15-16, 2023, garnered significant global interest, with contributions from over 80 different companies and organizations. 3GPP Rel-19, which commenced in 2024, will study advanced network topology. In certain use cases, IAB, NCR, and RIS share similar deployment scenarios. Therefore, the progression of IAB and NCR within existing 5G and the ongoing 5G-Advanced developments provide mutual references. Additionally, their developments establish a foundational reference for the future standardization of RIS. This article aims to provide an overview of the 5G NR features pertinent to IAB and NCR, focusing on their anticipated evolution in architecture, operations, and control signals. Additionally, it offers insights into the current status and development of RIS as per the ETSI ISG RIS."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II IAB",
15
+ "text": "###figure_1### Rel-16 introduced a multi-hop NR-based IAB solution, aimed at reusing the existing 5G radio air interface. This section explores the architecture of IAB, covering its network topology, resource allocation mechanisms, and recent enhancements, along with an outlook on future developments."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A IAB Architecture",
21
+ "text": "The architecture of IAB, depicted in Figure 2 ###reference_###(a), defines two types of network nodes [1 ###reference_1###]."
22
+ },
23
+ {
24
+ "section_id": "2.1.1",
25
+ "parent_section_id": "2.1",
26
+ "section_name": "II-A1 IAB-donor",
27
+ "text": "In the IAB network, the IAB-donor acts as the central control node, consisting of a centralized unit (IAB-donor-CU) and a distributed unit (IAB-donor-DU). These units are connected through a wired F1 interface. The IAB-donor-DU handles the lower protocol layers (PHY, MAC, and RLC), while the IAB-donor-CU oversees the upper protocol layers (PDCP and SDAP/RRC).222Abbreviations: Physical (PHY), Medium Access Control (MAC), Radio Link Control (RLC), Packet Data Convergence Protocol (PDCP), and Service Data Adaptation Protocol (SDAP) / Radio Resource Control (RRC). This division permits time-sensitive functions to be conducted by the IAB-donor-DU, which is closer to the served nodes, and the remaining functions by the IAB-donor-CU, which has superior processing capacity [9 ###reference_9###, 10 ###reference_10###]."
28
+ },
29
+ {
30
+ "section_id": "2.1.2",
31
+ "parent_section_id": "2.1",
32
+ "section_name": "II-A2 IAB-node",
33
+ "text": "The IAB-node is a wireless relay node within the IAB network, tasked with initiating access to the parent node above it and providing access services to child nodes below it. As such, the IAB-node comprises two functional modules: the IAB Mobile Termination (IAB-MT) and the IAB-node-DU. The IAB-MT links an IAB-node with the DU of the parent/upstream node, which can either be an IAB-donor or another IAB-node. The IAB-node-DU (or IAB-donor-DU) caters to the user equipments (UEs) and potentially downstream IAB-nodes in instances of multi-hop wireless backhauling.\nIAB-nodes function as Layer-2 regenerative relays, decoding and re-encoding each received packet prior to retransmission. This mechanism applies to packets received from the IAB-donor, UEs, or other IAB-nodes."
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "II-B IAB Network Topology",
39
+ "text": "Figure 2 ###reference_###(a) depicts the topology of an IAB network. The network originates from an IAB-donor, which serves as the central control node and traffic convergence point between the access and core network. The network expands in a tree-like structure through wireless links, connecting multiple intermediate nodes or IAB-nodes. These IAB-nodes provide UE access services and enable multi-hop transmissions, directing backhaul traffic to their respective child or upstream nodes.\nThe IAB-donor-CU establishes connections with all DU components in the IAB topology using wired or wireless F1 interfaces. It manages traffic to and from the core network while coordinating the operations of the entire IAB topology. For its child IAB-nodes, the IAB-donor provides wireless backhaul links within its radio coverage. From the perspective of UEs, there is no distinction between IAB-nodes, IAB-donors, and regular NR BSs. They all provide access links to UEs within their radio coverage through the IAB-DU module. The IAB-MT module, similar to a UE with a subset of UE functions, enables the IAB-node to connect to its parent DU node through the NR air interface.\nDuring the initial power-up of an IAB-node, it needs to access the IAB network as a UE, acquire an IP address, and establish a wireless F1 interface between the IAB-node-DU and the IAB-donor-CU. Once the IAB-node-DU is configured, it can operate as a relay node within the network topology.\nIn a single-hop transmission scenario, the UE directly receives network services through the access link provided by the IAB-donor. However, in a multi-hop transmission scenario, the UE accesses the network via a neighboring IAB-node and utilizes the wireless backhaul function for multi-hop transmissions. The uplink transmission is relayed to the IAB-donor through multiple wireless backhaul links before reaching the core network, and the downlink transmission follows the reverse process."
40
+ },
41
+ {
42
+ "section_id": "2.3",
43
+ "parent_section_id": "2",
44
+ "section_name": "II-C IAB Resource Allocation Mechanism",
45
+ "text": "The IAB supports both out-of-band and in-band backhauling, with the latter using the same frequencies for NR backhaul and access links. In-band IAB may experience cross-link interference. To address this issue, various half-duplex (HD) multiplexing schemes have been developed. 3GPP Rel-16 primarily adopts Time-Division Multiplexing (TDM) for wireless resource allocation in IAB networks. In TDM, the IAB-node\u2019s parent link is allocated different time slots compared to its child links, thereby avoiding simultaneous transmissions and receptions by co-located MT and DU to prevent interference. 3GPP Rel-17 introduces other HD schemes like Frequency-Division Multiplexing (FDM) and Space-Division Multiplexing (SDM).\nFDM assigns different frequencies to backhaul and access links, while SDM uses beamforming with multiple antennas for spatial separation. However, SDM does not always eliminate cross-link interference due to non-narrow beam widths and is often combined with TDM or FDM for optimal performance. Despite their utility, these HD schemes have limitations. TDM can cause relay delays, and FDM demands more spectrum. To improve spectral efficiency and reduce latency, 3GPP Rel-17 proposes a full-duplex (FD) mode. FD allows simultaneous transmission and reception on the same frequency in IAB-nodes. This, however, introduces significant self-interference, necessitating a strong self-interference cancellation mechanism for effective full-duplex operation in IAB networks [10 ###reference_10###]."
46
+ },
47
+ {
48
+ "section_id": "2.4",
49
+ "parent_section_id": "2",
50
+ "section_name": "II-D Outlook",
51
+ "text": "Rel-16 ratified IAB, with enhancements continuing through Releases 17\u201318. Broadly, wireless backhaul enables the deployment of mobile cells, with Mobile Base Station Relays (MBSRs) often installed in vehicles like buses or trains, known as VMRs, to provide coverage for UE within or near the vehicle. 3GPP Rel-18 focuses on architecture enhancements for MBSRs [11 ###reference_11###]. MBSRs operate within a single-hop topology framework, where the lower IAB-node aligns with the MBSR. An intermediate IAB-node may exist, provided it does not function as an MBSR, as shown in Figure 2 ###reference_###(b). Due to the mobility of an MBSR over a wide area, it may need to switch its IAB-donor. This can disrupt the upper protocol layer connections of the served UEs, even when the UEs are stationary inside the vehicle. To mitigate these disruptions, the introduction of a dedicated mobile control unit (m-CU) is proposed. The m-CU has an Xn connection to the donor gNB. By ensuring that the MBSR\u2019s DU is under the service of the m-CU, the MBSR can move across a larger RAN coverage area without requiring a change in the m-CU. This mobility allows the MBSR between IAB-donors to remain transparent to the connected UEs, as long as the control remains within the same m-CU. As illustrated in Figure 2 ###reference_###(b), during the MBSR\u2019s mobility between IAB-donors, the F1 interface between the MBSR and m-CU is preserved by transferring the F1 connection from the source to the target IAB-donors.\nRel-18 also addresses other pertinent challenges associated with MBSRs, such as implementing location services for UEs accessing the network through mobile or roaming MBSRs, ensuring accurate Cell ID/Tracking Area Code information despite MBSR movements, and developing efficient controls for managing UE access to the 5G network via MBSRs. Geographic constraints and legacy UE support are also factored in. 
The extension of IAB application scenarios to various domains is expected, including non-terrestrial network (NTN)-based backhauling, urban air mobility, public safety, and disaster recovery scenarios. These applications triggered a study of Wireless Access Backhaul (WAB) at Rel-19."
52
+ },
53
+ {
54
+ "section_id": "3",
55
+ "parent_section_id": null,
56
+ "section_name": "III NCR",
57
+ "text": "As another network node to improve coverage in 3GPP, the scope of NCRs, as detailed in the NCR WI [4 ###reference_4###], is more narrowly defined compared to IAB. According to TR 38.867 [3 ###reference_3###], NCRs primarily focus on the following scenarios and assumptions:\nNCRs are in-band RF repeaters used to extend network coverage on FR1 and FR2 bands.\nOnly single-hop stationary NCRs are considered.\nThe NCR is transparent to the UE.\nThe NCR can maintain the gNB-repeater link and the repeater-UE link simultaneously.\nTable I ###reference_### provides a summary of the comparisons between IAB and NCR. In the subsequent subsections, we explore the architecture and functionalities of NCRs in more detail."
58
+ },
59
+ {
60
+ "section_id": "3.1",
61
+ "parent_section_id": "3",
62
+ "section_name": "III-A NCR Architecture",
63
+ "text": "###figure_2### As shown in Figure 3 ###reference_###, the NCR consists of two functional entities [3 ###reference_3###, Sec. 5]: the NCR-MT and the NCR-Forwarding (NCR-Fwd).\nThe NCR-MT, functioning similarly to the IAB-MT, connects to the gNB via a Control link (C-link) using the NR Uu interface. It supports a subset of UE functions to link with its parent gNB as a standard device while being identified as an NCR within the network. Furthermore, it manages side control information exchange to oversee the NCR-Fwd operations. Conversely, the NCR-Fwd serves purely as an RF repeater, relaying UL/DL RF signals between the gNB and UEs across the backhaul and access links. Its operation is guided by side control information relayed by the NCR-MT from the gNB. In summary, compared to the IAB-node, the NCR solely deciphers control information pertinent to itself and ignores all control, data, or signals meant for UEs.\nThe NCR-Fwd can be implemented with two sets of panel antennas (one for the backhaul link and the other for the access link) and one RF amplifier (see Figure 3 ###reference_###). It amplifies and beamforms the signal before forwarding it in the DL or UL direction. Beamforming techniques allow adjusting the reception and transmission directions. The NCR-Fwd module primarily focuses on signal amplification and (analog) beamforming, eliminating the need for advanced digital receiver or transmitter chains. The performance requirements for beamforming antennas in the NCR-Fwd are not as high as those for macro BS or IAB-node antennas. Cost-effectiveness and ease of manufacturing are prioritized.\nThe NCR-MT and NCR-Fwd can operate in the same or different frequency bands. However, at least one carrier used by the NCR-MT must operate within the frequency band being forwarded by the NCR-Fwd, which serves as the baseline."
64
+ },
65
+ {
66
+ "section_id": "3.2",
67
+ "parent_section_id": "3",
68
+ "section_name": "III-B Side Control Information",
69
+ "text": "TR 38.867 [3 ###reference_3###, Sec. 6] explores various forms of side control information, including but not limited to beam information, timing information, UL-DL TDD configuration, and ON-OFF information. We detail their signaling protocols in the subsequent subsections."
70
+ },
71
+ {
72
+ "section_id": "3.2.1",
73
+ "parent_section_id": "3.2",
74
+ "section_name": "III-B1 Beam Information",
75
+ "text": "Deploying 5G NR in high-frequency bands necessitates beamforming capabilities in NCRs for optimal performance. The distinct characteristics of backhaul and access links allow for tailored beamforming mechanisms in the side control information:\nBackhaul link and C-link\u2014Given the NCR\u2019s stationary nature, it can utilize fixed or adaptive beamforming on the backhaul and C-links to cope with varying conditions.\nIn the baseline scenario where the NCR-MT and NCR-Fwd operate within the same frequency band, it is expected that the C-link and backhaul link will experience similar large-scale channel characteristics. Therefore, the same transmission configuration indicator states used for the C-link can also be applied to the NCR-Fwd for beamforming in the backhaul link.\nFurthermore, if adaptive beams are employed for both the C-link and backhaul link, the determination and indication of the backhaul link beams can be accomplished through new signaling provided by the gNB. In the absence of indication via the new signaling, the backhaul link beam can be determined based on a predefined rule.\n###figure_3### Access link\u2014In the context of the access link, dynamic beam steering towards users leads to improved SINR performance compared to fixed-beam solutions, particularly benefiting cell edge users [3 ###reference_3###, Sec. 9]. Thus, providing beam information to guide the NCR\u2019s access link behavior is advisable.\nThe access link for NCR-Fwd is identified by a beam index, supporting both dynamic and semi-static indications. The dynamic indication allows for rapid adaptation to changing conditions, such as user mobility, whereas the semi-static indication holds a stable configuration with infrequent adjustments. Time domain resources must be explicitly linked to these beam indications. 
Specifically, the gNB specifies which time slots or symbols are to be allocated for a particular beam or set of beams to manage NCR-Fwd operations.\nFor the DL/UL of the access link in NCR-Fwd, beam correspondence is assumed. That is, the DL and UL beams on the access side that are paired with each other are assigned the same beam index.\nTo illustrate a beam indication mechanism for the access link, Figure 4 ###reference_### serves as an example. Here, the gNB guides the NCR through beam control information with Beam Index (BI) to perform periodic beam sweeping during the beam training phase. The gNB may not have full knowledge of the NCR\u2019s beam characteristics but knows the NCR\u2019s capability regarding the number of supported beams; meanwhile, the UE cannot differentiate between beams originating from the gNB or the NCR. Based solely on the received Reference Signals (RSs), specifically Channel State Information RS (CSI-RS), for beam quality measurement, the UE reports its preferred RS, such as RS=5. From the UE\u2019s perspective, it cannot know which gNB\u2019s Tx beam and which NCR\u2019s Tx beam are applied for the transmission of each RS. In contrast, the gNB has full control over which gNB\u2019s Tx beam and which NCR\u2019s beam index are adopted for the transmission of RS=5. Then, for the subsequent data transmission, as depicted in Figure 4, the gNB indicates BI=5 as the NCR\u2019s Tx beamforming for forwarding."
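The RS-to-beam bookkeeping described in this subsection can be sketched in a few lines. This is a toy illustration only: the dictionary, function names, and the one-gNB-beam/BI=i-per-RS=i assignment are assumptions, not 3GPP signaling.

```python
# During beam sweeping, the gNB records which (gNB Tx beam, NCR beam index)
# pair carried each CSI-RS. Here we assume one gNB beam and NCR BI=i for RS=i.
rs_to_beams = {rs: ("gnb_beam_0", rs) for rs in range(8)}

def ue_preferred_rs(rs_quality):
    # The UE only measures RS quality; it cannot tell gNB and NCR beams apart.
    return max(rs_quality, key=rs_quality.get)

# Hypothetical measured qualities for RS 0..7 (RS=5 happens to be best).
quality = {0: 3.0, 1: 1.0, 2: 4.0, 3: 1.0, 4: 5.0, 5: 9.0, 6: 2.0, 7: 6.0}
best_rs = ue_preferred_rs(quality)       # UE reports RS=5
gnb_beam, ncr_bi = rs_to_beams[best_rs]  # gNB looks up the pair, indicates BI=5
print(best_rs, ncr_bi)                   # 5 5
```

The key point the sketch captures is the asymmetry of knowledge: the mapping `rs_to_beams` exists only at the gNB, while the UE's report is expressed purely in terms of RS indices.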
76
+ },
77
+ {
78
+ "section_id": "3.2.2",
79
+ "parent_section_id": "3.2",
80
+ "section_name": "III-B2 Timing Information",
81
+ "text": "The NCR\u2019s timing is assumed to follow these guidelines:\nWhen internal delay is not considered: The NCR-Fwd\u2019s DL reception timing is synchronized with the NCR-MT\u2019s DL reception timing. Likewise, the NCR-Fwd\u2019s UL transmission timing is synchronized with the NCR-MT\u2019s UL transmission timing.\nWhen internal delay is considered: The NCR-Fwd\u2019s DL transmission timing occurs after an internal delay subsequent to the NCR-MT\u2019s DL reception timing. Conversely, the NCR-Fwd\u2019s UL reception timing precedes the NCR-MT\u2019s UL transmission timing by an internal delay."
82
+ },
83
+ {
84
+ "section_id": "3.2.3",
85
+ "parent_section_id": "3.2",
86
+ "section_name": "III-B3 UL-DL TDD Configuration Information",
87
+ "text": "To avoid introducing significant signaling overhead in NCR, a semi-static TDD UL/DL configuration is supported for the C-link, backhaul link, and access link. To mitigate cross-link interference, the NCR-Fwd has its default behavior set to OFF on the flexible symbols/slots in the semi-static configuration. Additionally, for simplicity, the same TDD UL/DL configuration is assumed to be used for both the backhaul link and access link."
88
+ },
89
+ {
90
+ "section_id": "3.2.4",
91
+ "parent_section_id": "3.2",
92
+ "section_name": "III-B4 ON-OFF Information",
93
+ "text": "Using ON-OFF information allows NCRs to be turned off when not needed, resulting in power savings. TR 38.867 [3 ###reference_3###, Sec. 9] highlights that ON-OFF information not only saves power but also helps NCRs mitigate interference for high SINR users while maintaining performance for low SINR users, leading to an improved network experience. Therefore, ON-OFF information is recommended for NCRs to control the behavior of NCR-Fwd, providing benefits for network performance. By default, the NCR-Fwd is assumed to be OFF unless explicitly or implicitly indicated by the gNB."
94
+ },
95
+ {
96
+ "section_id": "3.3",
97
+ "parent_section_id": "3",
98
+ "section_name": "III-C Outlook",
99
+ "text": "As Table I ###reference_### illustrates, NCR demonstrates a narrower scope compared to IAB, primarily due to its early stage of application and specific design objectives. Further enhancements in the side control information for NCR are anticipated. For instance, at the Rel-19 WS, some companies highlighted the importance of power control in enhancing NCR performance. Additionally, strategies such as beamforming improvements, refined scheduling techniques, and the use of diverse frequencies for the backhaul link, may also be explored.\nA significant trend observed in Rel-18 is the growing transition towards a more UE-centric architecture within the evolution of massive MIMO. One such example is the coherent joint transmission for multi-transmission/reception point. This shift is motivated by the increasing quantity of UEs, which induces complexities in network management. Consistent with this trend, the concept of a UE-controlled repeater (UCR) has been proposed at the Rel-19 WS [12 ###reference_12###]. The UCR employs an amplify-and-forward Layer-1 forwarding with frequency translation mechanism, which relays received signals to the UE using a different frequency band. This method enables end-user-centric collaborative MIMO, as described in [13 ###reference_13###]. Here, multiple fixed or portable devices cooperate to create a rich array of antennas, thereby resulting in substantial performance enhancements."
100
+ },
101
+ {
102
+ "section_id": "4",
103
+ "parent_section_id": null,
104
+ "section_name": "IV RIS",
105
+ "text": "RIS has not yet been designated as a SI by 3GPP. According to ETSI GR RIS-001 [5 ###reference_5###], RIS is defined as:\n\u201cRIS is a new type of system node with reconfigurable\nsurface technology, which can adapt its response according to\nthe status of the propagation environment through\ncontrol signaling.\u201d\nIn particular, RIS manipulates incoming wireless signals through techniques like reflection, refraction, absorption, and backscattering. It includes active, passive, and hybrid designs.333Active RIS uses powered RF circuits, passive RIS employs cost-effective elements to modify EM fields, and hybrid RIS merges reflection with signal sensing for communication enhancement while retaining passive RIS\u2019s energy efficiency.\n###figure_4###"
106
+ },
107
+ {
108
+ "section_id": "4.1",
109
+ "parent_section_id": "4",
110
+ "section_name": "IV-A RIS Architecture",
111
+ "text": "RIS can be modelled as a combination of a RIS controller and a RIS panel, as detailed in ETSI GR RIS-002 [7 ###reference_7###, Fig. 5.1-1]. The panel is equipped with elements capable of altering the characteristics of incoming radio waves, either reflecting or redirecting them based on the panel\u2019s design. The RIS controller not only adjusts these elements to manipulate the waves but also processes control signals from other network nodes. Functionally, RIS could resemble NCR, leading to suggestions to adapt the existing Rel-18 NCR architecture for RIS applications. For example, the adoption of an NCR-like architecture with separate RFs for control and reflection to increase design flexibility has been proposed [14 ###reference_14###], as depicted in Figure 5 ###reference_###. Unlike NCR and IAB, RIS may not include a MT unit. Instead, it features a streamlined control unit, connected via a wired or wireless control link, and uses a single reflective panel instead of separate receive and transmit antenna panels.\nWhile the NCR-like architecture for RIS shows promise, it is also important to consider alternative architectures due to RIS\u2019s diverse deployment scenarios. The ETSI GR RIS-001 [5 ###reference_5###] highlights key environments for RIS deployment, including indoor, outdoor, and hybrid settings. RIS can be deployed in either fixed or nomadic manners. In a fixed deployment, the RIS is attached to a static structure, such as a building wall, creating a largely static radio channel with a stationary BS. On the other hand, the nomadic deployment model permits mounting the RIS on moving platforms like trains or vehicles, allowing dynamic changes in its location or orientation. Therefore, network operators and service providers need to devise specific control strategies for each type of deployment, considering that NCR-like approaches might not always be applicable. 
As expounded in [7 ###reference_7###], RIS control methodologies include:\nNetwork-controlled RIS: The network directly provides configuration commands based on measurements from UEs and/or RIS, with the RIS controller under network jurisdiction.\nNetwork-assisted RIS: The RIS controller, which could be part of the network or a third party, uses UE and/or RIS measurements with network input to adjust the RIS settings.\nStandalone RIS: The RIS controller independently dictates RIS configurations based on UE and/or RIS measurements.\nUE-controlled RIS: Configuration control lies with the UE, which instructs the RIS through its controller.\nHybrid-controlled RIS: The RIS controller is bifurcated into remote and local segments. The remote segment is part of the network or a third party, while the local segment is embedded in the RIS."
112
+ },
113
+ {
114
+ "section_id": "4.2",
115
+ "parent_section_id": "4",
116
+ "section_name": "IV-B RIS Requirements",
117
+ "text": "RIS distinguishes itself from IAB and NCR in terms of cost efficiency and energy conservation.\nSpecifically, according to ETSI GR RIS-001 [5 ###reference_5###], RIS requirements include:\nHardware Cost: RIS should be more cost-effective than IAB and NCR, factoring in production and component costs.\nEase of Deployment and Maintenance: Its deployment should be straightforward, regardless of whether fixed or wireless backhaul is used. Maintenance including fault detection and software updates should be easy.\nSignal Power Boosting: The RIS\u2019s ability to enhance signal power is influenced by its size, number of elements, and configurability.\nReconfigurability: RIS should rapidly adapt to changes, with attention to the quantity of elements that can be reconfigured simultaneously.\nInteroperability and Regulatory Compliance: It must integrate seamlessly with existing networks and adhere to EM exposure regulations.\nBy meeting these requirements, RIS facilitates integration into various applications, extending beyond the coverage-focused roles of IAB and NCR. Its uses range from improving coverage to enabling wireless power transfer, supporting ambient backscatter communications, enhancing positioning accuracy, and strengthening secure communication. This contributes to the development of a ubiquitous intelligent network."
118
+ },
119
+ {
120
+ "section_id": "4.3",
121
+ "parent_section_id": "4",
122
+ "section_name": "IV-C Outlook",
123
+ "text": "The ETSI ISG RIS completed its first phase, spanning two years, concentrating on exploring technological potential, validation, and standardization requirements. This phase concluded with the publication of three GRs. The organization will now enter the second phase, targeting the initial specification of the functional architecture.\nDuring the Rel-19 WS, various companies and institutes proposed a study or feasibility phase for RIS, which can also reflect a guide for this specification.\nThe shared perspectives include:\nChannel Modeling: Unlike IAB and NCR, where the channel models between two nodes can be modeled like a transmit-receive pair using the TR 38.901 [15 ###reference_15###] channel model, RIS is seen as a reconfigurable cluster, leading to a more complex channel model. Industry proposals highlight the need to examine channel modeling for RIS, taking into account a range of propagation effects such as reflection, refraction, absorption, and scattering. Models should represent diverse environments\u2014indoor, outdoor, outdoor-to-indoor, and line-of-sight (LoS)/Non-LoS situations\u2014as well as fluctuations at both large and small scales, radiation characteristics, and behaviors in both near and far fields. ETSI GR RIS-003 [6 ###reference_6###] highlights the necessity of creating models that strike a balance between detailed complexity and practical accuracy.\nUse Cases and Deployment Scenarios: The companies generally proposed studying various use cases, deployment scenarios, and operation modes of RIS, with an emphasis on their potential for improving network coverage and communication performance. This includes investigating RIS\u2019s role in both FR1 and FR2 frequency bands and exploring the potential for cooperative transmission and interference mitigation.\nRIS Architecture and Control Signaling: The companies proposed advanced control information and signaling for RIS, including power control, signal operation, and beam management. 
This highlights the need to study how RIS integrates with and affects existing technological frameworks. Similar points are also discussed in ETSI GR RIS-002 [7 ###reference_7###, Sec. 7]."
124
+ },
125
+ {
126
+ "section_id": "5",
127
+ "parent_section_id": null,
128
+ "section_name": "Conclusion",
129
+ "text": "This article provided a comprehensive overview of the 3GPP standardization efforts for IAB, NCR, and RIS. IAB enhances coverage and capacity using Layer-2 capable decode-and-forward relays, while NCR extends coverage through simpler amplify-and-forward repeaters. RIS, on the other hand, manipulates signals using cost-effective, energy-efficient reconfigurable unit-cells. As these network nodes standardize and evolve within 3GPP, as depicted in Figure 1 ###reference_###, future wireless networks are expected to incorporate a mix of gNBs and IABs, creating a network of macro and small cells. NCR and RIS will improve coverage and UE connectivity, leading to a sophisticated wireless communication landscape."
130
+ }
131
+ ],
132
+ "appendix": [],
133
+ "tables": {
134
+ "1": {
135
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Comparisons of IAB and NCR.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.2.1\">IAB</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.1.1.1.3.1\">NCR</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.2.1.1\">3GPP Stage</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.2.1.2\">Part of Rel-16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.2.1.3\">WI concluded in Rel-18</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.3.2.1\">A&amp;B Links</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.3.2.2\">Out-of-band/In-band</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.3.2.3\">In-band</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.4.3.1\">Architecture</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.4.3.2\">Multi-hop</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.4.3.3\">Single-hop</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.5.4.1\">Deployment</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.5.4.2\">Stationary/Mobile</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.5.4.3\">Stationary</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.6.5.1\">Operation</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.6.5.2\">Decode-and-forward</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.6.5.3\">Amplify-and-forward</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.7.6.1\">Duplex Mode</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.7.6.2\">HD/FD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.7.6.3\">FD</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.8.7.1\">UE Transparency</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.8.7.2\">Not transparent</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.8.7.3\">Transparent</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S2.T1.1.9.8.1\">Power Saving</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.1.9.8.2\">Specifics vary</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_b ltx_border_r ltx_border_t\" id=\"S2.T1.1.9.8.3\">Uses ON-OFF information</td>\n</tr>\n</tbody>\n</table>\n</figure>",
136
+ "capture": "TABLE I: Comparisons of IAB and NCR."
137
+ }
138
+ },
139
+ "image_paths": {
140
+ "1": {
141
+ "figure_path": "2308.03016v3_figure_1.png",
142
+ "caption": "Figure 1: 5G standardization for IAB, NCR, and RIS.",
143
+ "url": "http://arxiv.org/html/2308.03016v3/x1.png"
144
+ },
145
+ "2": {
146
+ "figure_path": "2308.03016v3_figure_2.png",
147
+ "caption": "Figure 2: IAB architectures for 5G system and MBSR.",
148
+ "url": "http://arxiv.org/html/2308.03016v3/x2.png"
149
+ },
150
+ "3": {
151
+ "figure_path": "2308.03016v3_figure_3.png",
152
+ "caption": "Figure 3: NCR architecture.",
153
+ "url": "http://arxiv.org/html/2308.03016v3/x3.png"
154
+ },
155
+ "4": {
156
+ "figure_path": "2308.03016v3_figure_4.png",
157
+ "caption": "Figure 4: Beam indication for access link.",
158
+ "url": "http://arxiv.org/html/2308.03016v3/x4.png"
159
+ },
160
+ "5": {
161
+ "figure_path": "2308.03016v3_figure_5.png",
162
+ "caption": "Figure 5: Potential RIS architecture [14].",
163
+ "url": "http://arxiv.org/html/2308.03016v3/x5.png"
164
+ }
165
+ },
166
+ "validation": true,
167
+ "references": [],
168
+ "url": "http://arxiv.org/html/2308.03016v3"
169
+ }
20240119/2308.03279v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2309.07988v3.json ADDED
@@ -0,0 +1,333 @@
1
+ {
2
+ "title": "Folding Attention: Memory and Power Optimization for On-device Transformer-based Streaming Speech Recognition",
3
+ "abstract": "Transformer-based models excel in speech recognition. Existing efforts to optimize Transformer inference, typically for long-context applications, center on simplifying attention score calculations. However, streaming speech recognition models usually process a limited number of tokens each time, making attention score calculation less of a bottleneck. Instead, the bottleneck lies in the linear projection layers of multi-head attention and feedforward networks, constituting a substantial portion of the model size and contributing significantly to computation, memory, and power usage.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Transformer-based architectures [1 ###reference_1###] have demonstrated notable effectiveness in automatic speech recognition (ASR), spanning various modeling paradigms, including sequence-to-sequence models [2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###, 5 ###reference_5###, 6 ###reference_6###], neural transducers [7 ###reference_7###, 8 ###reference_8###, 9 ###reference_9###, 10 ###reference_10###], Connectionist Temporal Classification [11 ###reference_11###, 9 ###reference_9###], and hybrid models [7 ###reference_7###, 12 ###reference_12###, 13 ###reference_13###].\nThe core mechanism of Transformers, known as attention, involves projecting tokens into queries, keys, and values and then comparing queries with keys to calculate attention scores. This attention score calculation exhibits quadratic complexity relative to the number of tokens, which becomes the computation bottleneck for tasks involving long contexts, such as non-streaming full context speech recognition [8 ###reference_8###, 9 ###reference_9###, 10 ###reference_10###]. Consequently, many approaches have been focusing on mitigating the complexity of attention score calculations. These methods include exploiting the sparsity [14 ###reference_14###, 15 ###reference_15###] or low rank [16 ###reference_16###] of the attention score matrix or modifying and converting the attention score calculation into a recurrent procedure [17 ###reference_17###].\nHowever, streaming ASR using limited context such as [18 ###reference_18###, 7 ###reference_7###, 19 ###reference_19###] faces a different computation bottleneck \u2014 the linear layers of self-attention and feedforward networks. For low-latency voice assistant or voice search scenarios, streaming ASR models need to process short audio segments within 100 ms at a time, sometimes even downsampling to fewer tokens. 
With N as the context window size in self-attention and d as the embedding dimension, calculating attention scores has a complexity of O(N^2 d), while the linear projection layers of multi-head attention and feedforward networks lead to a complexity of O(N d^2). Due to fewer tokens (N << d), the former is not a bottleneck anymore; instead, the latter becomes the computation bottleneck.\nIn addition to the computation overhead, these linear layers serve as the memory and power bottleneck for streaming ASR. Storing their weights requires O(d^2) memory, significantly exceeding the O(h N^2) consumption for attention scores (h is the number of heads). This heightened memory requirement leads to substantially increased power consumption. While current hardware excels in computation energy efficiency, it demonstrates comparatively lower energy efficiency in memory operations [20 ###reference_20###, 21 ###reference_21###, 22 ###reference_22###]. Consequently, these linear layers, responsible for the majority of memory read/write traffic, emerge as the power bottleneck in streaming ASR.\nWe propose folding attention to reduce memory and power consumption in streaming ASR. Folding attention trades minimal attention score computation overhead for significant reductions in memory and power usage in linear layers. Each input token of dimension d is split into F sub-tokens with dimension d/F, effectively increasing the token count by a factor of F. These sub-tokens pass through multi-head attention and a feedforward network, then concatenate into an output token with the original dimension d. Each weight matrix dimension is reduced by a factor of F, shrinking each weight matrix from d x d to (d/F) x (d/F). Compared to standard attention, folding attention offers 1/F the linear-layer computation cost, 1/F^2 the size, and correspondingly lower memory and power consumption. The increased computation cost for attention score calculation is negligible. Using F folding attention layers to substitute a standard attention layer reduces model size and memory/power overhead substantially without increasing computation. 
Folding attention applied to multiple Emformer Transducer models shows reductions of 12-24% in model size (and related memory) and 11-23% in power consumption on LibriSpeech [23 ###reference_23###], and reductions of 14-23% in model size (and related memory) and 13-21% in power consumption on our in-house dataset, all without sacrificing model accuracy or increasing computation overhead.\nThis paper presents the following contributions:\nWe comprehensively analyze the compute, memory, and power overhead in streaming ASR. We identify the bottleneck in the linear layers of self-attention and feedforward networks, rather than in calculating attention scores.\nOn-device applications have strict memory and power budgets. We introduce the technique of folding attention as a means to reduce model size and minimize memory and power consumption in streaming ASR.\nThrough extensive experiments conducted on LibriSpeech and our in-house dataset, we demonstrate the substantial reduction in model size, memory, and power achieved by employing folding attention. This improvement is achieved while maintaining comparable model accuracy and computation cost.\n###figure_1###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Overhead Analysis of Standard Attention",
+ "text": "In standard self-attention (refer to Figure 1 ###reference_###), input tokens are projected into queries, keys, and values using three projection layers: W_Q, W_K, and W_V. Output tokens are obtained through another projection layer, W_O. With an embedding dimension of d and a context window size (i.e., token count) of n, each of these projection layers has d^2 parameters, with a computational complexity of O(n d^2) and a memory overhead of O(d^2).\nTo calculate attention scores, we perform the inner product of each query and key, followed by a softmax operation to obtain the attention score for each query-key pair. With n^2 pairs of query and key, and the need to store the attention score for each pair under each attention head, the computational complexity for calculating attention scores is O(n^2 d) with a memory overhead of O(h n^2) (where h is the number of heads).\nAs discussed earlier, in streaming ASR, the context length n is much smaller than the embedding dimension d. This leads to the computation and memory overhead of linear layers (O(n d^2) and O(d^2), respectively) significantly surpassing that of attention score calculation (O(n^2 d) and O(h n^2)). Instead of focusing on reducing the overhead of attention score calculation, as other Transformer optimization techniques [16 ###reference_16###, 14 ###reference_14###, 15 ###reference_15###, 17 ###reference_17###] do, it becomes more imperative to optimize the linear layers."
+ },
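The overhead comparison above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch, not the paper's benchmark: the values n = 8, d = 512, h = 8 are assumed for a typical low-latency streaming configuration, and only the dominant multiply counts are tallied.

```python
# Illustrative operation/memory counts for one self-attention layer.
# Assumed values: n = tokens per streaming segment, d = embedding dim, h = heads.
n, d, h = 8, 512, 8

score_compute = n * n * d          # O(n^2 d): query-key inner products
score_memory = h * n * n           # O(h n^2): stored attention scores

linear_compute = 4 * n * d * d     # O(n d^2): W_Q, W_K, W_V, W_O projections
linear_memory = 4 * d * d          # O(d^2): projection weights

# With n << d, the linear layers dominate both compute and memory.
print(linear_compute / score_compute)  # 256.0
print(linear_memory / score_memory)    # 2048.0
```

With these assumed values, the projection layers cost hundreds of times more compute, and thousands of times more memory, than the score calculation, matching the bottleneck argument in the section above.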
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Folding Attention",
+ "text": "Folding attention is designed to decrease the number of parameters in linear layers within self-attention and feedforward networks, thereby mitigating their memory and power consumption.\n###figure_2### Design: The concept of folding attention is depicted in Figure 2 ###reference_###. In folding attention, a folding operator and an unfolding operator are introduced respectively before and after the standard self-attention. This folding operator divides an input token into F sub-tokens, where F is the folding factor. This division occurs by assigning the first d/F channels of an original token to the initial sub-token, the subsequent d/F to the second sub-token, and so forth. Consequently, the original n input tokens transform into a sequence of nF sub-tokens, each with a dimension of d/F. These sub-tokens then proceed through the standard self-attention layer, akin to regular tokens. The standard self-attention layer subsequently yields nF new sub-tokens, each with a dimension of d/F. Finally, the unfolding operator concatenates every F of them, generating the final n output tokens, each with the original embedding dimension d. Folding attention does not change the internal mechanics of self-attention and thus is naturally compatible with any self-attention design or optimization such as multi-head [1 ###reference_1###] and multi-query attentions [24 ###reference_24###].\nOverhead Analysis: With folding attention, the self-attention layer operates on F times more tokens, yet each token possesses 1/F as many channels. This leads to the linear layers within the self-attention having 1/F the computation overhead and 1/F^2 the number of parameters. The attention scores\u2019 computation overhead increases by a factor of F, and their memory overhead increases by F^2 times. However, this increase remains negligible as their original overhead is minimal when the token count is small (n << d). 
This way, folding attention trades a negligible increase in the cost of computing attention scores for a significant reduction in memory and power overhead associated with linear layers. Overall, for streaming ASR, folding attention reduces attention layers\u2019 computation by almost F times and reduces their size and corresponding memory consumption by almost F^2 times. Given that the power of streaming ASR is primarily influenced by memory read/write operations [22 ###reference_22###], which scale with model size, folding attention also significantly reduces attention layers\u2019 power.\nExpressiveness: Compared to standard self-attention, folding attention maintains an equivalent total number of elements for token embeddings (nF token embeddings, each with d/F elements; nd elements in total), ensuring similar expressiveness in this regard. In folding attention, although linear projection layers do not establish dependencies between sub-tokens from the same original token, the inner product of these sub-tokens\u2019 query and key embeddings effectively introduces interdependencies, keeping similar expressiveness as standard self-attention in this aspect. While the linear projection layers in folding attention have fewer parameters and thus reduced expressiveness, incorporating additional folding attention layers can compensate for this. Using F folding attention layers to replace one standard self-attention layer reduces the number of parameters (and related memory) by a factor of F and significantly decreases power consumption while maintaining similar computation overhead. Our experiments demonstrate this can also maintain model accuracy.\nRelation to Prior Work: FoldedCNN [25 ###reference_25###] enhances CNN throughput and GPU utilization by unfolding input images, transforming f images with c channels each into a single image with fc channels. Our work, in contrast, focuses on reducing memory and power in streaming ASR using Transformers. 
We use a folding operator on input tokens, splitting a d-channel token into F sub-tokens with d/F channels each. Similarly, depthwise separable convolution [26 ###reference_26###] divides channels into groups, paralleling our method of dividing tokens into sub-tokens."
+ },
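The fold/unfold mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration with assumed names and values (fold via reshape, single-head attention for clarity, F = 2, n = 8, d = 512), not the paper's implementation; it checks that folding preserves the layer's input/output shape while each projection matrix shrinks by F^2:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x, wq, wk, wv, wo):
    """Single-head self-attention on x of shape (tokens, dim)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(x.shape[1]))
    return (scores @ v) @ wo

def folding_attention(x, weights, F):
    n, d = x.shape
    sub = x.reshape(n * F, d // F)       # fold: F sub-tokens of d/F channels each
    out = self_attention(sub, *weights)  # standard attention over n*F sub-tokens
    return out.reshape(n, d)             # unfold: concatenate every F sub-tokens

n, d, F = 8, 512, 2
x = rng.standard_normal((n, d))
std_w = [0.01 * rng.standard_normal((d, d)) for _ in range(4)]            # W_Q, W_K, W_V, W_O
fold_w = [0.01 * rng.standard_normal((d // F, d // F)) for _ in range(4)]

y_std = self_attention(x, *std_w)
y_fold = folding_attention(x, fold_w, F)
assert y_std.shape == y_fold.shape == (n, d)   # same interface as standard attention

params = lambda ws: sum(w.size for w in ws)
assert params(std_w) == F * F * params(fold_w)  # F^2 fewer linear-layer parameters
```

Because a row-major reshape splits each token's d channels into F consecutive chunks, the fold assigns the first d/F channels to the first sub-token and so on, exactly as the section describes.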
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Evaluation",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Datasets",
+ "text": "We conducted experiments on two datasets: LibriSpeech [23 ###reference_23###] and an in-house dataset.\nLibriSpeech: We used 960 hours of its training dataset, extracting 80-dimensional log Mel-filterbank features over a 25-millisecond sliding window with a 10-millisecond stride. Following the Emformer Transducer work [7 ###reference_7###], we used a pre-trained sentence piece model [27 ###reference_27###] to produce 4096 sentence pieces, plus an additional \u201cblank\u201d symbol, for our Transducer models to predict. We used test-clean and test-other, the test sets of LibriSpeech, for evaluating the model accuracy.\nIn-house dataset: It consists of 23k hours of data sampled from English public videos. The audio was de-identified and aggregated, with personally identifiable information (PII) removed. We distorted the collected audio using simulated reverberation and added randomly sampled additive background noise extracted from publicly available videos. We applied speed perturbations [28 ###reference_28###] to create two additional copies of the training datasets at 0.9 and 1.1 times the original speed. We further applied distortion and additive noise to the speed-perturbed data. This resulted in a total of 127.8k hours of training data. For evaluating the accuracy of models trained on this dataset, we use the following two test sets:\ndictation: 5.8k hand-transcribed, de-identified, and aggregated utterances from vendor-collected data where speakers were asked to record unscripted open-domain dictation conversations. The audio was recorded in a variety of noise conditions and speaking volumes.\nmessaging: 13.4k hand-transcribed, de-identified, and aggregated utterances from vendor-collected data where speakers were asked to record audio messages for an unspecified person based on a scripted scenario. These utterances are generally shorter and have a higher signal-to-noise ratio (SNR) than the dictation dataset.\n###figure_3### ###figure_4###"
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Results on LibriSpeech",
+ "text": "We trained multiple Emformer Transducers [7 ###reference_7###] on LibriSpeech [23 ###reference_23###], with model details provided in Table 1 ###reference_###. Our initial set comprised six baseline models, labeled A1\u2013A6, featuring 6\u201318 standard attention layers in their encoders (for simplicity, we refer to an attention layer in this section as a combination of a multi-head attention layer followed by a feedforward network). Additionally, we developed six models, tagged B1\u2013B6, which incorporate 8\u201312 folding attention layers followed by 2\u201312 standard attention layers in their encoders. In both baseline and folding attention models, every standard attention layer deploys eight attention heads, and every folding attention layer deploys four attention heads.\nWe benchmarked these models on a Google Pixel-6 Pro, measuring their Real-Time Factor (RTF) and other critical runtime characteristics. To assess their inference power on a device with a 16 MB cache, we utilized key runtime statistics and state-of-the-art hardware energy efficiency parameters [20 ###reference_20###, 21 ###reference_21###]. The results, including model size, word error rate, power consumption, compute overhead in GOPS (Giga Operations Per Second: the average number of operations from a model that the device must execute per second during streaming speech recognition, which measures the model\u2019s computational overhead), and RTF, are summarized in Table 2 ###reference_###.\nFigure 3 ###reference_### provides a visual representation of the impact of model size on word error rate. Notably, when comparing models with similar word error rates (B1 vs. A1, B2 vs. A2, B3 vs. A3, B4 vs. A4, and B5 vs. A6), folding attention models demonstrate a 12\u201324% reduction in model size compared to their baseline counterparts, while maintaining similar computation overhead. 
The RTF of folding attention models, being well below 1, comfortably meets the streaming ASR requirement. The marginal increase (0.01\u20130.06) in their RTF is attributed to additional layers, resulting in a slightly higher interpretive overhead [27]. With an enhanced interpreter, this marginal rise will become even more negligible.\nFigure 4 ###reference_### illustrates the relation between power consumption and word error rate. When examining models with comparable accuracy, it becomes evident that folding attention models exhibit an 11\u201323% power reduction compared to the baseline models.\n###figure_5### ###figure_6###"
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Results on the In-House Dataset",
+ "text": "We conducted training on multiple Emformer Transducers using our in-house dataset. Key hyperparameters of these models are summarized in Table 3 ###reference_###. The baseline models (C1\u2013C6) incorporate between 6 and 18 standard attention layers in their encoders. In contrast, the folding attention models (D1\u2013D6) feature 8 to 12 folding attention layers followed by 2 to 12 standard attention layers in their encoders. In both baseline and folding attention models, every standard attention layer deploys four attention heads, and every folding attention layer deploys two attention heads.\nTable 4 ###reference_### presents a comprehensive overview of these models, detailing their size, word error rate, power, GOPS, and RTF.\nFigure 5 ###reference_### provides a graphical representation of the relationship between model size and word error rate. Notably, under similar word error rates, folding attention models exhibit a 14\u201323% reduction in size compared to their baseline counterparts (comparing models D1 vs. C1, \u2026, D6 vs. C6) while maintaining similar compute GOPS. This reduction in size translates to a noteworthy 13\u201321% decrease in power consumption, as illustrated in Figure 6 ###reference_###. It is important to note that in ASR inference, power consumption is primarily influenced by memory read/write operations rather than computational operations. With a smaller model size, we can substantially diminish memory read/write operations, consequently reducing power consumption."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusions",
+ "text": "On-device AI applications operate within stringent memory and power constraints, highlighting the critical need for optimization in these domains. We investigate the memory and power challenges associated with Transformer-based streaming ASR. Our analysis reveals a distinctive bottleneck in Transformer-based streaming ASR, predominantly situated within the linear projection layers of self-attention and feedforward networks, as opposed to the attention score calculation, which is the customary focal point of general Transformer optimization strategies.\nTo address this bottleneck, we introduce the concept of folding attention. The essence of this approach lies in a deliberate trade-off: We accept a negligible increase in computation overhead during attention score calculation in exchange for a noteworthy reduction in memory and power consumption within the linear layers. Upon applying folding attention to on-device streaming ASR, we observed a reduction in model size (and the corresponding memory usage) of up to 24%, coupled with a decrease in power of up to 23%, while maintaining similar model accuracy and computation overhead.\nThese substantial reductions in memory and power consumption are expected to enhance the feasibility of on-device streaming speech recognition, ultimately improving the overall user experience."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.2\" style=\"width:433.6pt;height:244.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(121.0pt,-68.2pt) scale(2.26255426246636,2.26255426246636) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.1\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.1.1\" style=\"font-size:90%;\">model (baseline)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.2\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.2.1\" style=\"font-size:90%;\">\u2009A1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.3\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.3.1\" style=\"font-size:90%;\">\u2009A2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.4\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.4.1\" style=\"font-size:90%;\">\u2009A3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.5.1\" style=\"font-size:90%;\">\u2009A4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.6\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.6.1\" style=\"font-size:90%;\">\u2009A5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.1.7\"><span class=\"ltx_text\" id=\"S4.T1.2.1.1.7.1\" style=\"font-size:90%;\">\u2009A6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.1\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.1.1\" style=\"font-size:90%;\"># folding attention layers</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.2\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.2.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.3\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.3.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.4\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.4.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.5.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.6\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.6.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.2.7\"><span class=\"ltx_text\" id=\"S4.T1.2.1.2.7.1\" style=\"font-size:90%;\">0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.1\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.1.1\" style=\"font-size:90%;\"># standard attention layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.2\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.2.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.3\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.3.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.4\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.4.1\" style=\"font-size:90%;\">10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.5.1\" style=\"font-size:90%;\">12</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.6\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.6.1\" style=\"font-size:90%;\">15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.3.7\"><span class=\"ltx_text\" id=\"S4.T1.2.1.3.7.1\" style=\"font-size:90%;\">18</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.1\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.1.1\" style=\"font-size:90%;\">model (folding attention)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.2\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.2.1\" style=\"font-size:90%;\">B1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.3\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.3.1\" style=\"font-size:90%;\">B2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.4\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.4.1\" style=\"font-size:90%;\">B3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.5.1\" style=\"font-size:90%;\">B4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.6\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.6.1\" style=\"font-size:90%;\">B5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T1.2.1.4.7\"><span class=\"ltx_text\" id=\"S4.T1.2.1.4.7.1\" style=\"font-size:90%;\">B6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.5.1\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.1.1\" style=\"font-size:90%;\"># folding attention layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T1.2.1.5.2\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.2.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.5.3\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.3.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.5.4\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.4.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.5.5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.5.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.5.6\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.6.1\" style=\"font-size:90%;\">10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.5.7\"><span class=\"ltx_text\" id=\"S4.T1.2.1.5.7.1\" style=\"font-size:90%;\">12</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.1\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.1.1\" style=\"font-size:90%;\"># standard attention layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.2\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.2.1\" style=\"font-size:90%;\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.3\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.3.1\" style=\"font-size:90%;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.4\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.4.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.5\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.5.1\" style=\"font-size:90%;\">8</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.6\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.6.1\" style=\"font-size:90%;\">10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.1.6.7\"><span class=\"ltx_text\" id=\"S4.T1.2.1.6.7.1\" style=\"font-size:90%;\">12</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.5.1.1\">Table 1</span>: </span>Models on LibriSpeech: number of folding / standard attention layers in their encoders. Folding factor is 2.</figcaption>\n</figure>",
+ "capture": "Table 1: Models on LibriSpeech: number of folding / standard attention layers in their encoders. Folding factor is 2."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.2\" style=\"width:433.6pt;height:510.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(109.7pt,-129.2pt) scale(2.02507618873742,2.02507618873742) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.1.1\" style=\"font-size:90%;\">model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.2.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T2.2.1.1.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.1.1.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.2.1.1.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.1.1.2.1.2.1.1.1\">size</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.2.1.1.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.1.1.2.1.2.1.2.1\">(M)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T2.2.1.1.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T2.2.1.1.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.3.1\" style=\"font-size:90%;\">word error rate (%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.1.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.4.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.4.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T2.2.1.1.4.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.1.1.4.1.2.1\">\n<span class=\"ltx_tr\" 
id=\"S4.T2.2.1.1.4.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.1.1.4.1.2.1.1.1\">power</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.2.1.1.4.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.1.1.4.1.2.1.2.1\">(mW)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T2.2.1.1.4.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.1.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.5.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.5.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T2.2.1.1.5.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.2.1.1.5.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.2.1.1.5.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.1.1.5.1.2.1.1.1\">compute</span></span>\n<span class=\"ltx_tr\" id=\"S4.T2.2.1.1.5.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.1.1.5.1.2.1.2.1\">GOPS</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T2.2.1.1.5.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.1.6\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.1.6.1\" style=\"font-size:90%;\">RTF</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.2.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.2.1.1\" style=\"font-size:90%;\">test-clean</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.2.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.2.2.1\" style=\"font-size:90%;\">test-other</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.3.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.1.1\" style=\"font-size:90%;\">A1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" 
id=\"S4.T2.2.1.3.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.2.1\" style=\"font-size:90%;\">33.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.3.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.3.1\" style=\"font-size:90%;\">5.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.3.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.4.1\" style=\"font-size:90%;\">13.75</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.3.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.5.1\" style=\"font-size:90%;\">27.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.3.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.6.1\" style=\"font-size:90%;\">2.58</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.3.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.3.7.1\" style=\"font-size:90%;\">0.33</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.1.1\" style=\"font-size:90%;\">A2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.2.1\" style=\"font-size:90%;\">40.29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.3.1\" style=\"font-size:90%;\">5.11</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.4.1\" style=\"font-size:90%;\">12.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.5.1\" style=\"font-size:90%;\">31.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S4.T2.2.1.4.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.6.1\" style=\"font-size:90%;\">2.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.4.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.4.7.1\" style=\"font-size:90%;\">0.36</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.1.1\" style=\"font-size:90%;\">A3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.2.1\" style=\"font-size:90%;\">46.59</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.3.1\" style=\"font-size:90%;\">4.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.4.1\" style=\"font-size:90%;\">11.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.5.1\" style=\"font-size:90%;\">36.64</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.6.1\" style=\"font-size:90%;\">3.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.5.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.5.7.1\" style=\"font-size:90%;\">0.38</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.1.1\" style=\"font-size:90%;\">A4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.2.1\" 
style=\"font-size:90%;\">52.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.3.1\" style=\"font-size:90%;\">4.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.4.1\" style=\"font-size:90%;\">11.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.5.1\" style=\"font-size:90%;\">41.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.6.1\" style=\"font-size:90%;\">3.41</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.6.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.6.7.1\" style=\"font-size:90%;\">0.39</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.7.1.1\" style=\"font-size:90%;\">A5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.7.2.1\" style=\"font-size:90%;\">62.36</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.7.3.1\" style=\"font-size:90%;\">4.06</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.7.4.1\" style=\"font-size:90%;\">10.30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.7.5.1\" style=\"font-size:90%;\">48.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.6\"><span class=\"ltx_text\" 
id=\"S4.T2.2.1.7.6.1\" style=\"font-size:90%;\">3.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.7.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.7.7.1\" style=\"font-size:90%;\">0.41</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.1.1\" style=\"font-size:90%;\">A6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.2.1\" style=\"font-size:90%;\">71.82</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.3.1\" style=\"font-size:90%;\">4.07</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.4.1\" style=\"font-size:90%;\">10.05</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.5.1\" style=\"font-size:90%;\">55.74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.6.1\" style=\"font-size:90%;\">4.33</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.8.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.8.7.1\" style=\"font-size:90%;\">0.44</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.1.1\" style=\"font-size:90%;\">B1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.2.1\" style=\"font-size:90%;\">27.69</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.3.1\" style=\"font-size:90%;\">5.55</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.4.1\" style=\"font-size:90%;\">13.25</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.5.1\" style=\"font-size:90%;\">22.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.6.1\" style=\"font-size:90%;\">2.59</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T2.2.1.9.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.9.7.1\" style=\"font-size:90%;\">0.39</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.1.1\" style=\"font-size:90%;\">B2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.2.1\" style=\"font-size:90%;\">34.00</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.3.1\" style=\"font-size:90%;\">4.98</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.4.1\" style=\"font-size:90%;\">12.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.5.1\" style=\"font-size:90%;\">27.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.6.1\" style=\"font-size:90%;\">2.90</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.10.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.10.7.1\" style=\"font-size:90%;\">0.42</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.1.1\" style=\"font-size:90%;\">B3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.2.1\" style=\"font-size:90%;\">40.30</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.3.1\" style=\"font-size:90%;\">4.68</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.4.1\" style=\"font-size:90%;\">11.05</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.5.1\" style=\"font-size:90%;\">31.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.6.1\" style=\"font-size:90%;\">3.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.11.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.11.7.1\" style=\"font-size:90%;\">0.43</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.12.1.1\" style=\"font-size:90%;\">B4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.12.2.1\" style=\"font-size:90%;\">46.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.3\"><span 
class=\"ltx_text\" id=\"S4.T2.2.1.12.3.1\" style=\"font-size:90%;\">4.35</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.12.4.1\" style=\"font-size:90%;\">10.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.12.5.1\" style=\"font-size:90%;\">36.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.12.6.1\" style=\"font-size:90%;\">3.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.12.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.12.7.1\" style=\"font-size:90%;\">0.43</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.13.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.1.1\" style=\"font-size:90%;\">B5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.13.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.2.1\" style=\"font-size:90%;\">54.50</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.13.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.3.1\" style=\"font-size:90%;\">4.09</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.13.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.4.1\" style=\"font-size:90%;\">10.03</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.13.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.5.1\" style=\"font-size:90%;\">42.68</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.13.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.6.1\" style=\"font-size:90%;\">3.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T2.2.1.13.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.13.7.1\" style=\"font-size:90%;\">0.45</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.1.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.1\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.1.1\" style=\"font-size:90%;\">B6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.2\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.2.1\" style=\"font-size:90%;\">62.38</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.3\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.3.1\" style=\"font-size:90%;\">4.09</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.4\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.4.1\" style=\"font-size:90%;\">10.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.5\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.5.1\" style=\"font-size:90%;\">48.69</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.6\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.6.1\" style=\"font-size:90%;\">4.38</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T2.2.1.14.7\"><span class=\"ltx_text\" id=\"S4.T2.2.1.14.7.1\" style=\"font-size:90%;\">0.50</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.5.1.1\">Table 2</span>: </span>Results on LibriSpeech: model size, word error rate, power, compute overhead, and RTF. Models A1\u2013A6 are baseline models; B1\u2013B6 are folding attention models.</figcaption>\n</figure>",
62
+ "capture": "Table 2: Results on LibriSpeech: model size, word error rate, power, compute overhead, and RTF. Models A1\u2013A6 are baseline models; B1\u2013B6 are folding attention models."
63
+ },
64
+ "3": {
65
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.2\" style=\"width:433.6pt;height:246.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(121.7pt,-69.1pt) scale(2.28040238127776,2.28040238127776) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.1.1\" style=\"font-size:90%;\">model (baseline)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.2.1\" style=\"font-size:90%;\">\u2009C1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.3\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.3.1\" style=\"font-size:90%;\">\u2009C2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.4\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.4.1\" style=\"font-size:90%;\">\u2009C3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.5\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.5.1\" style=\"font-size:90%;\">\u2009C4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.6\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.6.1\" style=\"font-size:90%;\">\u2009C5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.1.7\"><span class=\"ltx_text\" id=\"S4.T3.2.1.1.7.1\" style=\"font-size:90%;\">\u2009C6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.1.1\" style=\"font-size:90%;\"># folding attention layers</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.2.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.3\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.3.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.4\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.4.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.5\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.5.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.6\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.6.1\" style=\"font-size:90%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.2.7\"><span class=\"ltx_text\" id=\"S4.T3.2.1.2.7.1\" style=\"font-size:90%;\">0</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.1.1\" style=\"font-size:90%;\"># standard attention layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.2.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.3\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.3.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.4\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.4.1\" style=\"font-size:90%;\">10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.5\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.5.1\" style=\"font-size:90%;\">12</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.6\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.6.1\" style=\"font-size:90%;\">15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.3.7\"><span class=\"ltx_text\" id=\"S4.T3.2.1.3.7.1\" style=\"font-size:90%;\">18</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.1.1\" style=\"font-size:90%;\">model (folding attention)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.2.1\" style=\"font-size:90%;\">D1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.3\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.3.1\" style=\"font-size:90%;\">D2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.4\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.4.1\" style=\"font-size:90%;\">D3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.5\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.5.1\" style=\"font-size:90%;\">D4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.6\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.6.1\" style=\"font-size:90%;\">D5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T3.2.1.4.7\"><span class=\"ltx_text\" id=\"S4.T3.2.1.4.7.1\" style=\"font-size:90%;\">D6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.5.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.1.1\" style=\"font-size:90%;\"># folding attention layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T3.2.1.5.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.2.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.5.3\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.3.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.5.4\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.4.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.5.5\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.5.1\" style=\"font-size:90%;\">8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.5.6\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.6.1\" style=\"font-size:90%;\">12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.5.7\"><span class=\"ltx_text\" id=\"S4.T3.2.1.5.7.1\" style=\"font-size:90%;\">12</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.1\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.1.1\" style=\"font-size:90%;\"># standard attention layers</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.2\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.2.1\" style=\"font-size:90%;\">2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.3\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.3.1\" style=\"font-size:90%;\">4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.4\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.4.1\" style=\"font-size:90%;\">6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.5\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.5.1\" style=\"font-size:90%;\">8</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.6\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.6.1\" style=\"font-size:90%;\">9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.2.1.6.7\"><span class=\"ltx_text\" id=\"S4.T3.2.1.6.7.1\" style=\"font-size:90%;\">12</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.5.1.1\">Table 3</span>: </span>Models on the in-house dataset: number of folding / standard attention layers in their encoders. Folding factor is 2.</figcaption>\n</figure>",
66
+ "capture": "Table 3: Models on the in-house dataset: number of folding / standard attention layers in their encoders. Folding factor is 2."
67
+ },
68
+ "4": {
69
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.2\" style=\"width:433.6pt;height:512.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(110.3pt,-130.4pt) scale(2.03481604703623,2.03481604703623) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.1\">\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.1.1\" style=\"font-size:90%;\">model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.2\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.2.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.2.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T4.2.1.1.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.1.1.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.1.1.2.1.2.1.1.1\">size</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.2.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.1.1.2.1.2.1.2.1\">(M)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T4.2.1.1.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T4.2.1.1.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.3.1\" style=\"font-size:90%;\">word error rate (%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.4\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.4.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.4.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T4.2.1.1.4.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.1.1.4.1.2.1\">\n<span class=\"ltx_tr\" 
id=\"S4.T4.2.1.1.4.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.1.1.4.1.2.1.1.1\">power</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.4.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.1.1.4.1.2.1.2.1\">(mW)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T4.2.1.1.4.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.5\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.5.1\" style=\"font-size:90%;\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.5.1.1\"></span> <span class=\"ltx_text\" id=\"S4.T4.2.1.1.5.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.2.1.1.5.1.2.1\">\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.5.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.1.1.5.1.2.1.1.1\">compute</span></span>\n<span class=\"ltx_tr\" id=\"S4.T4.2.1.1.5.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T4.2.1.1.5.1.2.1.2.1\">GOPS</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T4.2.1.1.5.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.1.6\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.1.6.1\" style=\"font-size:90%;\">RTF</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.2.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.2.1.1\" style=\"font-size:90%;\">dictation</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.2.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.2.2.1\" style=\"font-size:90%;\">messaging</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.3.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.1.1\" style=\"font-size:90%;\">C1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" 
id=\"S4.T4.2.1.3.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.2.1\" style=\"font-size:90%;\">17.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.3.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.3.1\" style=\"font-size:90%;\">23.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.3.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.4.1\" style=\"font-size:90%;\">8.08</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.3.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.5.1\" style=\"font-size:90%;\">\u20047.54</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.3.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.6.1\" style=\"font-size:90%;\">1.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.3.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.3.7.1\" style=\"font-size:90%;\">0.18</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.1.1\" style=\"font-size:90%;\">C2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.2.1\" style=\"font-size:90%;\">21.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.3.1\" style=\"font-size:90%;\">20.86</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.4.1\" style=\"font-size:90%;\">6.67</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.5.1\" style=\"font-size:90%;\">\u20049.16</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.6.1\" style=\"font-size:90%;\">1.15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.4.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.4.7.1\" style=\"font-size:90%;\">0.20</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.1.1\" style=\"font-size:90%;\">C3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.2.1\" style=\"font-size:90%;\">25.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.3.1\" style=\"font-size:90%;\">19.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.4.1\" style=\"font-size:90%;\">5.90</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.5.1\" style=\"font-size:90%;\">10.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.6.1\" style=\"font-size:90%;\">1.28</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.5.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.5.7.1\" style=\"font-size:90%;\">0.21</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.1.1\" style=\"font-size:90%;\">C4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.2.1\" 
style=\"font-size:90%;\">29.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.3.1\" style=\"font-size:90%;\">18.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.4.1\" style=\"font-size:90%;\">5.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.5.1\" style=\"font-size:90%;\">12.39</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.6.1\" style=\"font-size:90%;\">1.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.6.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.6.7.1\" style=\"font-size:90%;\">0.22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.7.1.1\" style=\"font-size:90%;\">C5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.7.2.1\" style=\"font-size:90%;\">35.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.7.3.1\" style=\"font-size:90%;\">17.72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.7.4.1\" style=\"font-size:90%;\">5.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.7.5.1\" style=\"font-size:90%;\">14.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.6\"><span class=\"ltx_text\" 
id=\"S4.T4.2.1.7.6.1\" style=\"font-size:90%;\">1.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.7.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.7.7.1\" style=\"font-size:90%;\">0.23</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.1.1\" style=\"font-size:90%;\">C6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.2.1\" style=\"font-size:90%;\">41.19</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.3.1\" style=\"font-size:90%;\">17.07</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.4.1\" style=\"font-size:90%;\">4.76</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.5.1\" style=\"font-size:90%;\">17.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.6.1\" style=\"font-size:90%;\">1.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.8.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.8.7.1\" style=\"font-size:90%;\">0.25</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.1.1\" style=\"font-size:90%;\">D1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.2.1\" style=\"font-size:90%;\">13.22</span></td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.3.1\" style=\"font-size:90%;\">22.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.4.1\" style=\"font-size:90%;\">7.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.5.1\" style=\"font-size:90%;\">\u20045.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.6.1\" style=\"font-size:90%;\">1.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S4.T4.2.1.9.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.9.7.1\" style=\"font-size:90%;\">0.21</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.1.1\" style=\"font-size:90%;\">D2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.2.1\" style=\"font-size:90%;\">17.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.3.1\" style=\"font-size:90%;\">20.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.4.1\" style=\"font-size:90%;\">6.57</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.5.1\" style=\"font-size:90%;\">\u20047.56</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.6.1\" style=\"font-size:90%;\">1.15</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.10.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.10.7.1\" style=\"font-size:90%;\">0.22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.1.1\" style=\"font-size:90%;\">D3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.2.1\" style=\"font-size:90%;\">21.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.3.1\" style=\"font-size:90%;\">18.94</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.4.1\" style=\"font-size:90%;\">5.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.5.1\" style=\"font-size:90%;\">\u20049.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.6.1\" style=\"font-size:90%;\">1.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.11.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.11.7.1\" style=\"font-size:90%;\">0.22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.12.1.1\" style=\"font-size:90%;\">D4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.12.2.1\" style=\"font-size:90%;\">25.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.3\"><span 
class=\"ltx_text\" id=\"S4.T4.2.1.12.3.1\" style=\"font-size:90%;\">18.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.12.4.1\" style=\"font-size:90%;\">5.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.12.5.1\" style=\"font-size:90%;\">10.81</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.12.6.1\" style=\"font-size:90%;\">1.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.12.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.12.7.1\" style=\"font-size:90%;\">0.23</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.13.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.1.1\" style=\"font-size:90%;\">D5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.13.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.2.1\" style=\"font-size:90%;\">29.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.13.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.3.1\" style=\"font-size:90%;\">17.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.13.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.4.1\" style=\"font-size:90%;\">5.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.13.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.5.1\" style=\"font-size:90%;\">12.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.13.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.6.1\" style=\"font-size:90%;\">1.60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T4.2.1.13.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.13.7.1\" style=\"font-size:90%;\">0.25</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.2.1.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.1\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.1.1\" style=\"font-size:90%;\">D6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.2\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.2.1\" style=\"font-size:90%;\">35.21</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.3\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.3.1\" style=\"font-size:90%;\">17.04</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.4\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.4.1\" style=\"font-size:90%;\">4.78</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.5\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.5.1\" style=\"font-size:90%;\">14.88</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.6\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.6.1\" style=\"font-size:90%;\">1.80</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T4.2.1.14.7\"><span class=\"ltx_text\" id=\"S4.T4.2.1.14.7.1\" style=\"font-size:90%;\">0.27</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.5.1.1\">Table 4</span>: </span>Results on the in-house dataset: model size, word error rate, power, compute overhead, and RTF. Models C1\u2013C6 are baseline models; D1\u2013D6 are folding attention models.</figcaption>\n</figure>",
+ "capture": "Table 4: Results on the in-house dataset: model size, word error rate, power, compute overhead, and RTF. Models C1\u2013C6 are baseline models; D1\u2013D6 are folding attention models."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.07988v3_figure_1.png",
+ "caption": "Fig. 1: Standard self-attention (single-head as an example).",
+ "url": "http://arxiv.org/html/2309.07988v3/x1.png"
+ },
+ "2": {
+ "figure_path": "2309.07988v3_figure_2.png",
+ "caption": "Fig. 2: Folding attention (folding factor of 2 and single-head as an example). The subsequent feedforward networks (not shown in the diagram) can be repositioned prior to the unfolding operator to achieve a size reduction (fourfold in this example).",
+ "url": "http://arxiv.org/html/2309.07988v3/x2.png"
+ },
+ "3": {
+ "figure_path": "2309.07988v3_figure_3.png",
+ "caption": "Fig. 3: Model size vs. word error rate on LibriSpeech.",
+ "url": "http://arxiv.org/html/2309.07988v3/x3.png"
+ },
+ "4": {
+ "figure_path": "2309.07988v3_figure_4.png",
+ "caption": "Fig. 4: Power vs. word error rate on LibriSpeech.",
+ "url": "http://arxiv.org/html/2309.07988v3/x4.png"
+ },
+ "5": {
+ "figure_path": "2309.07988v3_figure_5.png",
+ "caption": "Fig. 5: Model size vs. word error rate on the in-house dataset.",
+ "url": "http://arxiv.org/html/2309.07988v3/x5.png"
+ },
+ "6": {
+ "figure_path": "2309.07988v3_figure_6.png",
+ "caption": "Fig. 6: Power vs. word error rate on the in-house dataset.",
+ "url": "http://arxiv.org/html/2309.07988v3/x6.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cAttention Is All You Need,\u201d",
+ "author": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,\nAidan N Gomez, Lukasz Kaiser, and Illia Polosukhin,",
+ "venue": "in NeurIPS, 2017.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cSpeech-Transformer: A No-Recurrence Sequence-to-Sequence Model for\nSpeech Recognition,\u201d",
+ "author": "Linhao Dong, Shuang Xu, and Bo Xu,",
+ "venue": "in ICASSP, 2018.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cA Comparative Study on Transformer vs RNN in Speech\nApplications,\u201d",
+ "author": "Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma,\nZiyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto,\nXiaofei Wang, Shinji Watanabe, Takenori Yoshimura, and Wangyou Zhang,",
+ "venue": "in ASRU, 2019.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cSelf-Attentional Acoustic Models,\u201d",
+ "author": "Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian St\u00fcker, and Alex\nWaibel,",
+ "venue": "in INTERSPEECH, 2018.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cSyllable-Based Sequence-to-Sequence Speech Recognition with the\nTransformer in Mandarin Chinese,\u201d",
+ "author": "Shiyu Zhou, Linhao Dong, Shuang Xu, and Bo Xu,",
+ "venue": "in INTERSPEECH, 2018.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cLow Latency End-to-End Streaming Speech Recognition with a Scout\nNetwork,\u201d",
+ "author": "Chengyi Wang, Yu Wu, Liang Lu, Shujie Liu, Jinyu Li, Guoli Ye, and Ming Zhou,",
+ "venue": "in INTERSPEECH, 2020.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "\u201cEmformer: Efficient Memory Transformer Based Acoustic Model for\nLow Latency Streaming Speech Recognition,\u201d",
+ "author": "Yangyang Shi, Yongqiang Wang, Chunyang Wu, Ching-Feng Yeh, Julian Chan, Frank\nZhang, Duc Le, and Mike Seltzer,",
+ "venue": "in ICASSP, 2021.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cConformer: Convolution-augmented Transformer for Speech\nRecognition,\u201d",
+ "author": "Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu,\nWei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang,",
+ "venue": "in INTERSPEECH, 2020.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cFaster, Simpler and More Accurate Hybrid ASR Systems Using\nWordpieces,\u201d",
+ "author": "Frank Zhang, Yongqiang Wang, Xiaohui Zhang, Chunxi Liu, Yatharth Saraf, and\nGeoffrey Zweig,",
+ "venue": "in INTERSPEECH, 2020.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cTransformer-Transducer: End-to-End Speech Recognition with\nSelf-Attention,\u201d",
+ "author": "Ching-Feng Yeh, Jay Mahadeokar, Kaustubh Kalgaonkar, Yongqiang Wang, Duc Le,\nMahaveer Jain, Kjell Schubert, Christian Fuegen, and Michael L Seltzer,",
+ "venue": "arXiv preprint arXiv:1910.12977, 2019.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cSelf-Attention Networks for Connectionist Temporal Classification\nin Speech Recognition,\u201d",
+ "author": "Julian Salazar, Katrin Kirchhoff, and Zhiheng Huang,",
+ "venue": "in ICASSP, 2019.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cA Time-Restricted Self-Attention Layer for ASR,\u201d",
+ "author": "Daniel Povey, Hossein Hadian, Pegah Ghahremani, Ke Li, and Sanjeev Khudanpur,",
+ "venue": "in ICASSP, 2018.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "\u201cTransformer-based Acoustic Modeling for Hybrid Speech\nRecognition,\u201d",
+ "author": "Yongqiang Wang, Abdelrahman Mohamed, Due Le, Chunxi Liu, Alex Xiao, Jay\nMahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang,\nChristian Fuegen, Geoffrey Zweig, and Michael L. Seltzer,",
+ "venue": "in ICASSP, 2020.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cGenerating Long Sequences with Sparse Transformers,\u201d",
+ "author": "Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever,",
+ "venue": "arXiv preprint arXiv:1904.10509, 2019.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cBlockwise Self-Attention for Long Document Understanding,\u201d",
+ "author": "Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, and Jie Tang,",
+ "venue": "in Findings of the Association for Computational Linguistics:\nEMNLP, 2020.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cLinformer: Self-Attention with Linear Complexity,\u201d",
+ "author": "Sinong Wang, Belinda Z Li, Madian Khabsa, Han Fang, and Hao Ma,",
+ "venue": "arXiv preprint arXiv:2006.04768, 2020.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cRetentive Network: A Successor to Transformer for Large Language\nModels,\u201d",
+ "author": "Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong\nWang, and Furu Wei,",
+ "venue": "arXiv preprint arXiv:2307.08621, 2023.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cSelf-Attention Aligner: A Latency-Control End-to-End Model for ASR\nUsing Self-Attention Network and Chunk-Hopping,\u201d",
+ "author": "Linhao Dong, Feng Wang, and Bo Xu,",
+ "venue": "ICASSP, 2019.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cStreaming Automatic Speech Recognition with the Transformer\nModel,\u201d",
+ "author": "Niko Moritz, Takaaki Hori, and Jonathan Le Roux,",
+ "venue": "arXiv preprint arXiv:2001.02674, 2020.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "\u201cOn-Chip Memory Technology Design Space Explorations for Mobile\nDeep Neural Network Accelerators,\u201d",
+ "author": "Haitong Li, Mudit Bhargav, Paul N. Whatmough, and H.-S. Philip Wong,",
+ "venue": "in DAC, 2019.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "\u201cUNPU: A 50.6 TOPS/W Unified Deep Neural Network Accelerator with\n1b-to-16b Fully-Variable Weight Bit-Precision,\u201d",
+ "author": "Jinmook Lee, Changhyeon Kim, Sanghoon Kang, Dongjoo Shin, Sangyeob Kim, and\nHoi-Jun Yoo,",
+ "venue": "in ISSCC, 2018.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "\u201cFactorized Blank Thresholding for Improved Runtime Efficiency of\nNeural Transducers,\u201d",
+ "author": "Duc Le, Frank Seide, Yuhao Wang, Yang Li, Kjell Schubert, Ozlem Kalinli, and\nMichael L. Seltzer,",
+ "venue": "in ICASSP, 2023.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "\u201cLibriSpeech: An ASR Corpus based on Public Domain Audio Books,\u201d",
+ "author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur,",
+ "venue": "in ICASSP, 2015.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "\u201cFast Transformer Decoding: One Write-Head Is All You Need,\u201d",
+ "author": "Noam Shazeer,",
+ "venue": "arXiv preprint arXiv:1911.02150, 2019.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "\u201cBoosting the Throughput and Accelerator Utilization of Specialized\nCNN Inference beyond Increasing Batch Size,\u201d",
+ "author": "Jack Kosaian, Amar Phanishayee, Matthai Philipose, Debadeepta Dey, and Rashmi\nVinayak,",
+ "venue": "in ICML, 2021.",
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "\u201cXception: Deep Learning with Depthwise Separable Convolutions,\u201d",
+ "author": "Fran\u00e7ois Chollet,",
+ "venue": "in CVPR, 2017.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "\u201cSentencePiece: A Simple and Language Independent Subword Tokenizer\nand Detokenizer for Neural Text Processing,\u201d",
+ "author": "Taku Kudo and John Richardson,",
+ "venue": "in EMNLP, 2018.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "\u201cAudio Augmentation for Speech Recognition,\u201d",
+ "author": "Tom Ko, Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur,",
+ "venue": "in INTERSPEECH, 2015.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2309.07988v3"
+ }
20240119/2309.09466v2.json ADDED
@@ -0,0 +1,458 @@
+ {
+ "title": "Progressive Text-to-Image Diffusion with Soft Latent Direction",
+ "abstract": "In spite of the rapidly evolving landscape of text-to-image generation, the synthesis and manipulation of multiple entities while adhering to specific relational constraints pose enduring challenges. This paper introduces an innovative progressive synthesis and editing operation that systematically incorporates entities into the target image, ensuring their adherence to spatial and relational constraints at each sequential step.\nOur key insight stems from the observation that while a pre-trained text-to-image diffusion model adeptly handles one or two entities, it often falters when dealing with a greater number. To address this limitation, we propose harnessing the capabilities of a Large Language Model (LLM) to decompose intricate and protracted text descriptions into coherent directives adhering to stringent formats.\nTo facilitate the execution of directives involving distinct semantic operations\u2014namely insertion, editing, and erasing\u2014we formulate the Stimulus, Response, and Fusion (SRF) framework. Within this framework, latent regions are gently stimulated in alignment with each operation, followed by the fusion of the responsive latent components to achieve cohesive entity manipulation.\nOur proposed framework yields notable advancements in object synthesis, particularly when confronted with intricate and lengthy textual inputs. Consequently, it establishes a new benchmark for text-to-image generation tasks, further elevating the field\u2019s performance standards.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Text-to-image generation is a vital and rapidly evolving field in computer vision that has attracted unprecedented attention from both researchers and the general public.\nThe remarkable advances in this area are driven by the application of state-of-the-art image-generative models, such as auto-regressive (Ramesh et al. 2021 ###reference_23###; Wang et al. 2022 ###reference_31###) and diffusion models (Ramesh et al. 2022 ###reference_22###; Saharia et al. 2022 ###reference_26###; Rombach et al. 2022 ###reference_25###), as well as the availability of large-scale language-image datasets (Sharma et al. 2018 ###reference_29###; Schuhmann et al. 2022 ###reference_28###). However, existing methods face challenges in synthesizing or editing multiple subjects with specific relational and attributive constraints from textual prompts (Chefer et al. 2023 ###reference_7###).\nThe typical defects that occur in the synthesis results are missing entities, and inaccurate inter-object relations, as shown in LABEL:fig:teaser. Existing work improves the compositional skills of text-to-image synthesis models by incorporating linguistic structures (Feng et al. 2022 ###reference_9###), and attention controls (Hertz et al. 2022 ###reference_11###; Chefer et al. 2023 ###reference_7###) within the diffusion guidance process.\nNotably, Structured Diffusion (Feng et al. 2022 ###reference_9###) parse a text to extract numerous noun phrases, Attend-and-Excite (Chefer et al. 2023 ###reference_7###) strength attention activations associated with the most marginalized subject token. 
Yet, these remedies still face difficulties when the text description is long and complex, especially when it involves two and more subjects.\nFurthermore, users may find it necessary to perform subtle modifications to the unsatisfactory regions of the generated image, while preserving the remaining areas.\nIn this paper, we propose a novel progressive synthesizing/editing operation that successively incorporates entities, that conform to the spatial and relational constraint defined in the text prompt, while preserving the structure and aesthetics in each step. Our intuition is based on the observation that text-to-image models tend to better handle short-sentence prompts with a limited number of entities (1 or 2) than long descriptions with more entities.\nTherefore, we can parse the long descriptions into short-text prompts and craft the image progressively via a diffusion model to prevent the leakage and missing of semantics.\nHowever, applying such a progressive operation to diffusion models faces two major challenges:\nThe absence of a unified method for converting the integrated text-to-image process into a progressive procedure that can handle both synthesis and editing simultaneously. Current strategies can either synthesize (Chefer et al. 2023 ###reference_7###; Ma et al. 2023 ###reference_17###) or edit (Kawar et al. 2023 ###reference_15###; Goel et al. 2023 ###reference_10###; Xie et al. 2022 ###reference_35###; Avrahami, Fried, and Lischinski 2022 ###reference_1###; Yang et al. 2023 ###reference_36###), leaving a gap in the collective integration of these functions.\nThe need for precise positioning and relational entity placement. Existing solutions either rely on user-supplied masks for entity insertion, necessitating manual intervention (Avrahami, Fried, and Lischinski 2022 ###reference_1###; Nichol et al. 2021 ###reference_19###), or introduce supplementary phrases to determine the entity editing direction (Hertz et al. 
2022 ###reference_11###; Brooks, Holynski, and Efros 2023 ###reference_5###), which inadequately addressing spatial and relational dynamics.\nTo overcome these hurdles, we present the Stimulus, Response, and Fusion (SRF) framework, assimilating a stimulus-response generation mechanism along with a latent fusion module into the diffusion process. Our methodology involves employing a fine-tuned GPT model to deconstruct complex texts into structured prompts, including synthesis, editing, and erasing operations governed by a unified SRF framework.\nOur progressive process begins with a real image or synthesized background, accompanied by the text prompt, and applies the SRF method in a step-by-step approach. Unlike previous strategies that aggressively manipulate the cross-attention map (Wu et al. 2023 ###reference_32###; Ma et al. 2023 ###reference_17###), our operation guides the attention map via a soft direction, avoiding brusque modifications that may lead to discordant synthesis.\nAdditionally, when addressing relationships like \u201cwearing\u201d and \u201cplaying with\u201d, we begin by parsing the positions of the objects, after which we incorporate the relational description into the diffusion process to enable object interactions.\nIn summary, we unveil a novel, progressive text-to-image diffusion framework that leverages the capabilities of a Language Model (LLM) to simplify language description, offering a unified solution for handling synthesis and editing patterns concurrently. This represents an advancement in text-to-image generation and provides a new platform for future research.\n###figure_1### ###figure_2### ###figure_3###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Image Manipulation",
+ "text": "Image manipulating refers to the process of digitally manipulating images to modify or enhance their visual appearance. Various techniques can be employed to achieve this end, such as the use of spatial masks or natural language descriptions to guide the editing process towards specific goals. One promising line of inquiry involves the application of generative adversarial networks (GANs) for image domain transfer (Isola et al. 2017 ###reference_14###; Sangkloy et al. 2017 ###reference_27###; Zhu et al. 2017 ###reference_39###; Choi et al. 2018 ###reference_8###; Wang et al. 2018 ###reference_30###; Huang et al. 2018 ###reference_12###; Park et al. 2019 ###reference_21###; Liu, Breuel, and Kautz 2017 ###reference_16###; Baek et al. 2021 ###reference_3###) or the manipulation of latent space (Zhu et al. 2016 ###reference_38###; Huh et al. 2020 ###reference_13###; Richardson et al. 2021 ###reference_24###; Zhu et al. 2020 ###reference_37###; Wulff and Torralba 2020 ###reference_33###; Bau et al. 2021 ###reference_4###).\nRecently, diffusion models have emerged as the mainstream. GLIDE (Nichol et al. 2021 ###reference_19###), Blended diffusion (Avrahami, Fried, and Lischinski 2022 ###reference_1###) and SmartBrush (Xie et al. 2022 ###reference_35###) replace masked image regions with predefined objects while preserving the inherent image structure. Additionally, techniques such as prompt-to-prompt (Hertz et al. 2022 ###reference_11###) and instructpix2pix (Brooks, Holynski, and Efros 2023 ###reference_5###) enable the modification of image-level objects through text alterations.\nContrasting previous methods that solely cater to either synthesis or editing, we construct a unified framework that accommodates both."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Cross Attention Control",
+ "text": "Objects and positional relationships are manifested within the cross attention map of the diffusion model. Inspired by this observation (Feng et al. 2022 ###reference_9###), techniques have been devised to manipulate the cross attention map for image synthesis or editing.\nPrompt-to-Prompt approach (Hertz et al. 2022 ###reference_11###) aims at regulating spatial arrangement and geometry through the manipulation of attention maps derived from textual prompts.\nStructured Diffusion (Feng et al. 2022 ###reference_9###) utilizes a text parsing mechanism to isolate numerous noun phrases, enhancing the corresponding attention space channels.\nThe Attend-and-Excite approach (Chefer et al. 2023 ###reference_7###) amplifies attention activations linked to the most marginalized subject tokens.\nDirected Diffusion (Ma et al. 2023 ###reference_17###) proposes an attention refinement strategy through the utilization of a weak and strong activation approach.\nThe main difference between our layout generation and the layout prediction approaches is that our method enables precise increment generation and intermediate modifications, i.e., we gradually change the layout instead of generating one layout at once. As for background fusion, we use a soft mask to ensure the object\u2019s integrity."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Method",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Problem Formulation",
+ "text": "we elaborate upon our innovative progressive text-to-image framework. Given a multifaceted text description and a real or generated background , our primary goal is to synthesize an image that meticulously adheres to the modifications delineated by in alignment with .\nThe principal challenge emerges from the necessity to decode the intricacy of , manifesting across three complex dimensions:\nThe presence of multiple entities and attributes escalates the complexity of the scene, imposing stringent demands on the model to generate representations that are not only accurate but also internally coherent and contextually aligned.\nThe integration of diverse positional and relational descriptions calls for the model to exhibit an advanced level of understanding and to employ sophisticated techniques to ascertain precise spatial configuration, reflecting both explicit commands and implied semantic relations.\nThe concurrent introduction of synthesis, editing, and erasing operations introduces additional layers of complexity to the task. Managing these intricate operations within a unified model presents a formidable challenge, requiring a robust and carefully designed approach to ensure seamless integration and execution.\nWe address these challenges through a unified progressive text-to-image framework that: (1) employs a fine-tuned GPT model to distill complex texts into short prompts, categorizing each as synthesis, editing, or erasing mode, and accordingly generating the object mask; (2) sequentially processes these prompts within the same framework, utilizing attention-guided generation to capture position-aware features with soft latent direction, and subsequently integrates them with the previous stage\u2019s outcomes in a subtle manner. This approach synthesizes the intricacies of text-to-image transformation into a coherent, positionally aware procedure."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Text Decomposition",
+ "text": "may involve multiple objects and relations, we decompose into a set of short prompts, which produces an image accurately representing when executed sequentially.\nAs illustrated in fig. 1 ###reference_###, we fine-tune a GPT with OpenAI API (OpenAI 2023 ###reference_20###) to decompose into multiple structured prompts, denoted as .\nEach falls into one of the three distinct modes:\nSynthesis mode: \u201c[object 1] [relation] [object 2] [position] [object 3]\u201d,\nEditing mode: \u201cchange [object 1] to [object 2]\u201d,\nand Erasing mode: \u201cdelete [object]\u201d.\nIn pursuit of this aim, we start by collecting full texts using ChatGPT (Brown et al. 2020 ###reference_6###) and then manually deconstruct them into atomic prompts. Each prompt has a minimal number of relations and is labeled with synthesis/editing/erasing mode. Using these prompts and their corresponding modes for model supervision, we fine-tune the GPT model to enhance its decomposition and generalization ability.\nOperational Layouts.\nFor the synthesis operation, as shown in fig. 2 ###reference_###, we feed both the prompt and a reference bounding box into a frozen GPT-4 API. This procedure produces bounding boxes for the target entity that will be used in the subsequent phase. We exploit GPT-4\u2019s ability to extract information from positional and relational text descriptors. For example, the phrase \u201ccat and dog play together\u201d indicates a close spatial relationship between the \u201ccat\u201d and \u201cdog\u201d. Meanwhile, \u201con the right side\u201d suggests that both animals are positioned to the right of the \u201cyard\u201d.\nFor the editing and erasing operations, we employ Diffusion Inversion (Mokady et al. 2023 ###reference_18###) to obtain the cross-attention map of the target object, which serves as the layout mask. For example, when changing \u201capples\u201d to \u201coranges\u201d, we draw upon the attention corresponding to \u201capples\u201d. 
On the other hand, to \u201cdelete the oranges\u201d, we focus on the attention related to \u201coranges\u201d. Notably, this approach avoids the need to retrain the diffusion model and is proficient in managing open vocabularies.\nwe denote generated layout mask as for all operations in following sections for convention.\nIn the following section, we provide a complete introduction to the synthesis operation. At last, we exhibit that the editing and erasing operations only differ from the synthesis operation in parameter settings."
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "Stimulus & Response",
+ "text": "Given the synthesis prompt to be executed and its mask configuration , the goal of Latent Stimulus & Response is to enhance the positional feature representation on . As illustrated in fig. 3 ###reference_###, this is achieved by guided cross-attention generation.\nDiffering from the approaches (Ma et al. 2023 ###reference_17###; Wu et al. 2023 ###reference_32###), which manipulate attention through numerical replacement, we softly modulate the attention within the mask regions associated with the entity in . Rather than directly altering the attention, we introduce a stimulus to ensure that the object attention converges to the desired scores.\nSpecifically, we formulate a stimulus loss function between the object mask and the corresponding attention as:\nwhere signifies the cross-attention map of the -th object at the -th timestep, denotes the mask of the -th object, and represents the stimulus weights.\nThe stimulated attention steers the generation process toward the desired spatial layout. This is achieved by backpropagating the gradient of the stimulus loss function, as defined in Eq. 1 ###reference_###, to update the latent code. This process serves as a latent response to the stimulated attention, which can be formally expressed as:\nIn the above equation, represents the updated latent code and denotes the learning rate. Finally, we execute another forward pass of the stable diffusion model using the updated latent code to compute for the subsequent denoising step.\nBased on eq. 1 ###reference_### and eq. 2 ###reference_###, we observe consistent spatial behavior in both the cross-attention and latent spaces. For a more detailed analysis, we refer to fig. 4 ###reference_### and find that this property contributes to producing faithful and position-aware image representations.\n###figure_4###"
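The stimulus-loss-plus-gradient-step idea can be sketched numerically. This is a minimal sketch under stated assumptions: the exact form of Eq. 1 is not reproduced here, so the loss below is one plausible variant that rewards attention mass falling inside the object's mask; in the real pipeline the gradient of the loss with respect to the latent comes from backpropagating through the diffusion U-Net (e.g. with autograd), not from a hand-supplied list.

```python
def stimulus_loss(attn, mask, weight=0.8):
    """Stimulus loss between an object's cross-attention map `attn` and
    its layout mask `mask` (both flattened to lists). Returns 0 when all
    attention mass lies inside the mask region."""
    inside = sum(a * m for a, m in zip(attn, mask))
    total = sum(attn)
    return weight * (1.0 - inside / total)

def latent_response(latent, grad, lr):
    """Latent response (cf. Eq. 2): one gradient-descent step on the
    latent code; `grad` stands in for d(loss)/d(latent)."""
    return [z - lr * g for z, g in zip(latent, grad)]
```

After this update, another forward pass of the diffusion model is run on the updated latent, matching the response step described in the text.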
+ },
+ {
+ "section_id": "3.4",
+ "parent_section_id": "3",
+ "section_name": "Latent Fusion",
+ "text": "Recalling that denotes the latent feature of the target object, our next task is to integrate it seamlessly with the image from the preceding stage. For this purpose, we first convert the previous image into a latent code by DDIM inversion, denoted as . Then, for timestep t, we apply a latent fusion strategy (Avrahami, Lischinski, and Fried 2022 ###reference_2###) between and , which is formulated as:\nwhere acts as a latent mask to blend the features of target objects with the background. In the synthesis operation, employing a uniform mask across all steps can be too restrictive, potentially destroying the object\u2019s semantic continuity. To mitigate this, we introduce a softer mask, ensuring both object integrity and spatial consistency. Specifically, during the initial steps of diffusion denoising, we use the layout mask to provide spatial guidance. Later, we shift to an attention mask , generated by averaging and thresholding the cross-attention map, to maintain object cohesion. This process is denoted as:\nHere, serves as a tuning parameter balancing object integrity with spatial coherence.\nThe above response and fusion process is repeated for a subset of the diffusion timesteps, and the final output serves as the image for the next round of generation.\nEditing and Erasing Specifications. Our editing and erasing operations differ only in parameter settings: we set in eq. 1 ###reference_### as the editing/erasing reference attention, and we set in eq. 3 ###reference_### as the editing/erasing mask in all diffusion steps for detailed, shape-specific modifications."
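The fusion rule and the mask switch can be sketched as follows. Assumptions for illustration: latents are flat lists, masks are per-element weights in [0, 1], and `tau` plays the role of the tuning parameter that switches from the layout mask (early steps, spatial guidance) to the thresholded attention mask (later steps, object cohesion); all names are illustrative, not from the paper's code.

```python
def blend_latents(z_obj, z_bg, mask):
    """Blend the object latent with the DDIM-inverted background latent:
    mask * z_obj + (1 - mask) * z_bg, element-wise."""
    return [m * f + (1.0 - m) * b for f, b, m in zip(z_obj, z_bg, mask)]

def mask_schedule(step, tau, layout_mask, attn_mask):
    """Early denoising steps use the layout box; once `step` reaches the
    tuning parameter `tau`, switch to the attention-derived mask."""
    return layout_mask if step < tau else attn_mask
```

For editing and erasing, the text above notes that the same machinery runs with the editing/erasing mask used at every step, i.e. `mask_schedule` would always return that mask.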
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Experiment",
+ "text": "Baselines and Evaluation.\nOur experimental comparison primarily concentrates on Single-Stage Generation and Progressive Generation baselines.\n(1) We refer to Single-Stage Generation methods as those that directly generate images from input text in a single step. Current methods include Stable Diffusion (Rombach et al. 2022 ###reference_25###), Attend-and-excite (Chefer et al. 2023 ###reference_7###), and Structured Diffusion (Feng et al. 2022 ###reference_9###). We compare these methods to analyze the efficacy of our progressive synthesis operation. We employ GPT to construct 500 text prompts that contain diverse objects and relationship types.\nFor evaluation, we follow (Wu et al. 2023 ###reference_32###) to compute Object Recall, which quantifies the percentage of objects successfully synthesized. Moreover, we measure Relation Accuracy as the percentage of spatial or relational text descriptions that are correctly identified, based on 8 human evaluations.\n(2) We define Progressive Generation as a multi-turn synthesis and editing process that builds on images from preceding rounds. Our comparison encompasses our comprehensive progressive framework against other progressive methods, which include Instruct-based Diffusion models (Brooks, Holynski, and Efros 2023 ###reference_5###) and mask-based diffusion models (Rombach et al. 2022 ###reference_25###; Avrahami, Fried, and Lischinski 2022 ###reference_1###).\nTo maintain a balanced comparison, we source the same input images from SUN (Xiao et al. 2016 ###reference_34###) and text descriptions via the GPT API (OpenAI 2023 ###reference_20###). Specifically, we collate five scenarios totaling 25 images from SUN, a dataset that showcases real-world landscapes. Each image is paired with a text description, which ensures: 1. Integration of synthesis, editing, and erasing paradigms; 2. Incorporation of a diverse assortment of synthesized objects; 3. 
Representation of spatial relations (e.g., top, bottom, left, right) and interactional relations (e.g., \u201cplaying with\u201d, \u201cwearing\u201d).\nFor evaluation, we utilize Amazon Mechanical Turk (AMT) to assess image fidelity. Each image is evaluated based on the fidelity of the generated objects, their relationships, the execution of editing instructions, and the alignment of erasures with the text descriptions.\nImages are rated on a fidelity scale from 0 to 2, where 0 represents the lowest quality and 2 signifies the highest. With two evaluators assessing each generated image, the cumulative score for each aspect can reach a maximum of 100.\n###figure_5### ###figure_6### ###figure_7### Implementation Details.\nOur framework builds upon Stable Diffusion (SD) V-1.4. During the Stimulus & Response stage, we assign a weight of 0.8 in eq. 1 ###reference_###, and set the two parameters in eq. 2 ###reference_### to 25 and 40, respectively. We implement the stimulus procedure over the 16 \u00d7 16 attention units and integrate the Iterative Latent Refinement design (Chefer et al. 2023 ###reference_7###). In the latent fusion stage, the parameter is set to a value of 40."
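The two scoring schemes above can be sketched in a few lines. Helper names are assumptions for illustration; in the paper both Relation Accuracy and the fidelity ratings come from human judgments rather than from code.

```python
def object_recall(expected_objects, synthesized_objects):
    """Percentage of prompt objects that actually appear in the image."""
    hits = sum(1 for o in expected_objects if o in synthesized_objects)
    return 100.0 * hits / len(expected_objects)

def aspect_score(ratings):
    """Sum of 0-2 fidelity ratings for one aspect. With 25 images and
    two evaluators each, the maximum is 25 * 2 * 2 = 100."""
    assert all(0 <= r <= 2 for r in ratings)
    return sum(ratings)
```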
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Qualitative and Quantitative Results",
+ "text": "Qualitative and Quantitative Comparisons with Single-Stage Generation Baselines. fig. 5 ###reference_### reveals that traditional baseline methods often struggle with object omissions and with maintaining spatial and interactional relations. In contrast, our progressive generation process offers enhanced image fidelity and controllability. Additionally, we maintain finer details in the generated images, such as the shadows of the \u201cbeach chair\u201d. The results in table 1 ###reference_### indicate that our method outperforms the baselines in both object recall and relation accuracy.\nQualitative and Quantitative Comparisons with Progressive Generation Baselines. In fig. 7 ###reference_###, baseline methods often fail to synthesize full objects and may not represent relationships as described in the provided text. Moreover, during editing and erasing operations, these methods tend to produce outputs with compromised quality, showcasing unnatural characteristics. It\u2019s worth noting that any missteps or inaccuracies in the initial stages, such as those seen in InstructPix2Pix, can cascade into subsequent stages, exacerbating the degradation of results. In contrast, our proposed method consistently yields superior results through every phase. The results in table 2 ###reference_### further confirm our method\u2019s strong performance in synthesis, editing, and erasing operations, as reflected in the rating scores."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Ablation Study",
+ "text": "Ablation study of method components is shown in table 3 ###reference_###. Without latent fusion, we lose continuity from prior generation stages, leading to inconsistencies in object synthesis and placement. On the other hand, omitting the Stimulus & Response process results in a lack of positional awareness, making the synthesis less precise. Both omissions manifest as significant drops in relation and entity accuracies, emphasizing the synergistic importance of these components in our approach.\nThe analysis of Stimulus & Response in the editing operation is highlighted in fig. 6 ###reference_###. Compared to Stable Diffusion, Stimulus & Response not only enhances object completeness and fidelity but also demonstrates a broader diversity in editing capabilities. The loss curve indicates that Stimulus & Response aligns more closely with the reference cross-attention, emphasizing its adeptness in preserving the original structure."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "In this study, we addressed the prevailing challenges in the rapidly advancing field of text-to-image generation, particularly the synthesis and manipulation of multiple entities under specific constraints. Our innovative progressive synthesis and editing methodology ensures precise spatial and relational representations. Recognizing the limitations of existing diffusion models with increasing entities, we integrated the capabilities of a Large Language Model (LLM) to dissect complex text into structured directives. Our Stimulus, Response, and Fusion (SRF) framework, which enables seamless entity manipulation, represents a major stride in object synthesis from intricate text inputs.\nOne major limitation of our approach is that not all text can be decomposed into a sequence of short prompts. For instance, our approach finds it challenging to sequentially parse text such as \u201ca horse under a car and between a cat and a dog.\u201d We plan to gather more training data and labels of this nature to improve the parsing capabilities of GPT."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "Acknowledgments",
+ "text": "This work is supported by the National Natural Science Foundation of China (NSFC No. 62272184). The computation is completed in the HPC Platform of Huazhong University of Science and Technology."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T1.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.2.2.3.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T1.1.1.1\">Object Recall\u00a0\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T1.2.2.2\">Relation Accuracy\u00a0\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.2.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.3.1.1\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.3.1.2\">40.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.3.1.3\">19.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.2.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.4.2.1\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.4.2.2\">43.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.4.2.3\">21.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.2.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.5.3.1\">Attend-and-excite</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.5.3.2\">50.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.2.5.3.3\">23.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.2.6.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T1.2.6.4.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T1.2.6.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.2.6.4.2.1\">64.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b 
ltx_border_t\" id=\"Sx4.T1.2.6.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.2.6.4.3.1\">50.8</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Quantitative comparison with Single-Stage Generation baselines.</figcaption>\n</figure>",
+ "capture": "Table 1: Quantitative comparison with Single-Stage Generation baselines."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"Sx4.T2.1.1.1.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.1.1.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"2\" id=\"Sx4.T2.1.1.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.1.2.1\">Synthesis</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.1.3.1\">Editing</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.1.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.1.1.4.1\">Erasing</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.2.1\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"Sx4.T2.1.2.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.2.1.2\">Object</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T2.1.2.1.3\">Relation</th>\n<th class=\"ltx_td ltx_th ltx_th_column\" id=\"Sx4.T2.1.2.1.4\"></th>\n<td class=\"ltx_td\" id=\"Sx4.T2.1.2.1.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T2.1.3.2.1\">InstructPix2Pix</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.2.2\">19</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.2.3\">24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.2.4\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.3.2.5\">29</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx4.T2.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T2.1.4.3.1\">Stable-inpainting</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.4.3.2\">64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.4.3.3\">54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.4.3.4\">65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.4.3.5\">45</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx4.T2.1.5.4.1\">Blended Latent</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.5.4.2\">67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.5.4.3\">52</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.5.4.4\">67</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T2.1.5.4.5\">46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T2.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.6.5.1\">Ours</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.6.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.6.5.2.1\">74</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.6.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.6.5.3.1\">60</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.6.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.6.5.4.1\">72</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T2.1.6.5.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T2.1.6.5.5.1\">50</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Quantitative comparison of our method against Progressive Generation baselines, using rating 
scores.</figcaption>\n</figure>",
+ "capture": "Table 2: Quantitative comparison of our method against Progressive Generation baselines, using rating scores."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T3.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T3.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.2.2.3.1\">Method Variant</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T3.1.1.1\">Object Recall\u00a0\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"Sx4.T3.2.2.2\">Relation Accuracy\u00a0\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T3.2.3.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.2.3.1.1\">w/o LF</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.2.3.1.2\">38.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.2.3.1.3\">21.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.2.4.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.2.4.2.1\">w/o S&amp;R</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.2.4.2.2\">58.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T3.2.4.2.3\">45.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T3.2.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T3.2.5.3.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T3.2.5.3.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.2.5.3.2.1\">64.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"Sx4.T3.2.5.3.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T3.2.5.3.3.1\">50.8</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Ablation study. 
LF and S&amp;R represent Latent Fusion and Stimulus &amp; Response respectively.</figcaption>\n</figure>",
+ "capture": "Table 3: Ablation study. LF and S&R represent Latent Fusion and Stimulus & Response respectively."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.09466v2_figure_1.png",
+ "caption": "Figure 1: We employ a fine-tuned GPT model to deconstruct a comprehensive text into structured prompts, each classified under synthesis, editing, and erasing operations.",
+ "url": "http://arxiv.org/html/2309.09466v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2309.09466v2_figure_2.png",
+ "caption": "Figure 2: For the synthesis operation, we generate the layout indicated in the prompt from a frozen GPT-4 model, which subsequently yields the new bounding box coordinates for object insertion.",
+ "url": "http://arxiv.org/html/2309.09466v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2309.09466v2_figure_3.png",
+ "caption": "Figure 3: Overview of our unified framework emphasizes progressive synthesis, editing, and erasing. In each progressive step, a random latent z_t is directed through the cross-attention map in inverse diffusion. Specifically, we design a soft stimulus loss that evaluates the positional difference between entity attention and the target mask region, leading to a gradient for updating the latent to z_t^* as a latent response. Subsequently, another forward diffusion pass is applied to denoise z_t^*, yielding z_{t-1}^*. In the latent fusion phase, we transform the previous i-th image into a latent code z_{t-1}^{bg} using DDIM inversion. The blending of z_{t-1}^* with z_{t-1}^{bg} incorporates a dynamically evolving mask, which starts with a layout box and gradually shifts to cross-attention. Finally, z_{t-1}^* undergoes multiple diffusion reverse steps and results in the (i+1)-th image.",
+ "url": "http://arxiv.org/html/2309.09466v2/x3.png"
+ },
+ "4": {
+ "figure_path": "2309.09466v2_figure_4.png",
+ "caption": "Figure 4: Visual results generated by Stable Diffusion and Stimulus & Response. Stable Diffusion shows noticeable problems in positional generation (top), semantic and attribute coupling (middle), and object omission (bottom), while ours delivers precise outcomes.",
+ "url": "http://arxiv.org/html/2309.09466v2/x4.png"
+ },
+ "5": {
+ "figure_path": "2309.09466v2_figure_5.png",
+ "caption": "Figure 5: Qualitative comparison with Single-Stage baselines. Common errors in the baselines include missing objects and mismatched relations. Our method demonstrates the progressive generation process.",
+ "url": "http://arxiv.org/html/2309.09466v2/x5.png"
+ },
+ "6": {
+ "figure_path": "2309.09466v2_figure_6.png",
+ "caption": "Figure 6: The analysis of Stimulus & Response in the editing operation. The left side shows a visual comparison between SD (Stable Diffusion) and S&R (Stimulus & Response). The right side presents the convergence curve of cross-attention loss during diffusion sampling steps. The loss is computed as the difference between reference attention and model-generated attention. In the right figure, red, blue, and green colors represent the objects \u201cjaguar\u201d, \u201ccat\u201d, and \u201cmonkey\u201d respectively. Solid lines indicate SD loss, while dashed lines represent S&R loss.",
+ "url": "http://arxiv.org/html/2309.09466v2/x6.png"
+ },
+ "7": {
+ "figure_path": "2309.09466v2_figure_7.png",
+ "caption": "Figure 7: Qualitative comparison with Progressive Generation baselines. The first two phases illustrate object synthesis operation, where target objects are color-coded in both the text and layout. Subsequent phases depict object editing and erasing processes, wherein a cat is first transformed into a rabbit and then the rabbit is removed.",
+ "url": "http://arxiv.org/html/2309.09466v2/x7.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Blended latent diffusion.",
+ "author": "Avrahami, O.; Fried, O.; and Lischinski, D. 2022.",
+ "venue": "arXiv preprint arXiv:2206.02779.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Blended diffusion for text-driven editing of natural images.",
+ "author": "Avrahami, O.; Lischinski, D.; and Fried, O. 2022.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 18208\u201318218.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Rethinking the truly unsupervised image-to-image translation.",
+ "author": "Baek, K.; Choi, Y.; Uh, Y.; Yoo, J.; and Shim, H. 2021.",
+ "venue": "In Proceedings of the IEEE/CVF International Conference on\nComputer Vision, 14154\u201314163.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Paint by word.",
+ "author": "Bau, D.; Andonian, A.; Cui, A.; Park, Y.; Jahanian, A.; Oliva, A.; and\nTorralba, A. 2021.",
+ "venue": "arXiv preprint arXiv:2103.10951.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Instructpix2pix: Learning to follow image editing instructions.",
+ "author": "Brooks, T.; Holynski, A.; and Efros, A. A. 2023.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 18392\u201318402.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Language models are few-shot learners.",
+ "author": "Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.;\nNeelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020.",
+ "venue": "Advances in neural information processing systems, 33:\n1877\u20131901.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Attend-and-excite: Attention-based semantic guidance for\ntext-to-image diffusion models.",
+ "author": "Chefer, H.; Alaluf, Y.; Vinker, Y.; Wolf, L.; and Cohen-Or, D. 2023.",
+ "venue": "arXiv preprint arXiv:2301.13826.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Stargan: Unified generative adversarial networks for multi-domain\nimage-to-image translation.",
+ "author": "Choi, Y.; Choi, M.; Kim, M.; Ha, J.-W.; Kim, S.; and Choo, J. 2018.",
+ "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition, 8789\u20138797.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Training-Free Structured Diffusion Guidance for Compositional\nText-to-Image Synthesis.",
+ "author": "Feng, W.; He, X.; Fu, T.-J.; Jampani, V.; Akula, A.; Narayana, P.; Basu, S.;\nWang, X. E.; and Wang, W. Y. 2022.",
+ "venue": "arXiv preprint arXiv:2212.05032.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Pair-diffusion: Object-level image editing with\nstructure-and-appearance paired diffusion models.",
+ "author": "Goel, V.; Peruzzo, E.; Jiang, Y.; Xu, D.; Sebe, N.; Darrell, T.; Wang, Z.; and\nShi, H. 2023.",
+ "venue": "arXiv preprint arXiv:2303.17546.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Prompt-to-prompt image editing with cross attention control.",
+ "author": "Hertz, A.; Mokady, R.; Tenenbaum, J.; Aberman, K.; Pritch, Y.; and Cohen-Or, D.\n2022.",
+ "venue": "arXiv preprint arXiv:2208.01626.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Multimodal unsupervised image-to-image translation.",
+ "author": "Huang, X.; Liu, M.-Y.; Belongie, S.; and Kautz, J. 2018.",
+ "venue": "In Proceedings of the European conference on computer vision\n(ECCV), 172\u2013189.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Transforming and projecting images into class-conditional generative\nnetworks.",
+ "author": "Huh, M.; Zhang, R.; Zhu, J.-Y.; Paris, S.; and Hertzmann, A. 2020.",
+ "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference,\nGlasgow, UK, August 23\u201328, 2020, Proceedings, Part II 16, 17\u201334. Springer.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Image-to-image translation with conditional adversarial networks.",
+ "author": "Isola, P.; Zhu, J.-Y.; Zhou, T.; and Efros, A. A. 2017.",
+ "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition, 1125\u20131134.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Imagic: Text-based real image editing with diffusion models.",
+ "author": "Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; and\nIrani, M. 2023.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 6007\u20136017.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Unsupervised image-to-image translation networks.",
+ "author": "Liu, M.-Y.; Breuel, T.; and Kautz, J. 2017.",
+ "venue": "Advances in neural information processing systems, 30.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Directed Diffusion: Direct Control of Object Placement through\nAttention Guidance.",
+ "author": "Ma, W.-D. K.; Lewis, J.; Kleijn, W. B.; and Leung, T. 2023.",
+ "venue": "arXiv preprint arXiv:2302.13153.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Null-text inversion for editing real images using guided diffusion\nmodels.",
+ "author": "Mokady, R.; Hertz, A.; Aberman, K.; Pritch, Y.; and Cohen-Or, D. 2023.",
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 6038\u20136047.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Glide: Towards photorealistic image generation and editing with\ntext-guided diffusion models.",
+ "author": "Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.;\nSutskever, I.; and Chen, M. 2021.",
+ "venue": "arXiv preprint arXiv:2112.10741.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "GPT-4 Technical Report.",
+ "author": "OpenAI. 2023.",
+ "venue": "arXiv:2303.08774.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Semantic image synthesis with spatially-adaptive normalization.",
+ "author": "Park, T.; Liu, M.-Y.; Wang, T.-C.; and Zhu, J.-Y. 2019.",
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision\nand pattern recognition, 2337\u20132346.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Hierarchical text-conditional image generation with clip latents.",
+ "author": "Ramesh, A.; Dhariwal, P.; Nichol, A.; Chu, C.; and Chen, M. 2022.",
+ "venue": "arXiv preprint arXiv:2204.06125.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Zero-shot text-to-image generation.",
+ "author": "Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and\nSutskever, I. 2021.",
+ "venue": "In International Conference on Machine Learning, 8821\u20138831.\nPMLR.",
+ "url": null
+ }
+ },
+ {
329
+ "24": {
330
+ "title": "Encoding in style: a stylegan encoder for image-to-image translation.",
331
+ "author": "Richardson, E.; Alaluf, Y.; Patashnik, O.; Nitzan, Y.; Azar, Y.; Shapiro, S.;\nand Cohen-Or, D. 2021.",
332
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision\nand pattern recognition, 2287\u20132296.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "25": {
338
+ "title": "High-resolution image synthesis with latent diffusion models.",
339
+ "author": "Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; and Ommer, B. 2022.",
340
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 10684\u201310695.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "26": {
346
+ "title": "Photorealistic text-to-image diffusion models with deep language\nunderstanding.",
347
+ "author": "Saharia, C.; Chan, W.; Saxena, S.; Li, L.; Whang, J.; Denton, E.; Ghasemipour,\nS. K. S.; Ayan, B. K.; Mahdavi, S. S.; Lopes, R. G.; et al. 2022.",
348
+ "venue": "arXiv preprint arXiv:2205.11487.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "27": {
354
+ "title": "Scribbler: Controlling deep image synthesis with sketch and color.",
355
+ "author": "Sangkloy, P.; Lu, J.; Fang, C.; Yu, F.; and Hays, J. 2017.",
356
+ "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition, 5400\u20135409.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "28": {
362
+ "title": "Laion-5b: An open large-scale dataset for training next generation\nimage-text models.",
363
+ "author": "Schuhmann, C.; Beaumont, R.; Vencu, R.; Gordon, C.; Wightman, R.; Cherti, M.;\nCoombes, T.; Katta, A.; Mullis, C.; Wortsman, M.; et al. 2022.",
364
+ "venue": "arXiv preprint arXiv:2210.08402.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "29": {
370
+ "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset\nfor automatic image captioning.",
371
+ "author": "Sharma, P.; Ding, N.; Goodman, S.; and Soricut, R. 2018.",
372
+ "venue": "In Proceedings of the 56th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), 2556\u20132565.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "30": {
378
+ "title": "High-resolution image synthesis and semantic manipulation with\nconditional gans.",
379
+ "author": "Wang, T.-C.; Liu, M.-Y.; Zhu, J.-Y.; Tao, A.; Kautz, J.; and Catanzaro, B.\n2018.",
380
+ "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition, 8798\u20138807.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "31": {
386
+ "title": "Clip-gen: Language-free training of a text-to-image generator with\nclip.",
387
+ "author": "Wang, Z.; Liu, W.; He, Q.; Wu, X.; and Yi, Z. 2022.",
388
+ "venue": "arXiv preprint arXiv:2203.00386.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "32": {
394
+ "title": "Harnessing the spatial-temporal attention of diffusion models for\nhigh-fidelity text-to-image synthesis.",
395
+ "author": "Wu, Q.; Liu, Y.; Zhao, H.; Bui, T.; Lin, Z.; Zhang, Y.; and Chang, S. 2023.",
396
+ "venue": "arXiv preprint arXiv:2304.03869.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "33": {
402
+ "title": "Improving inversion and generation diversity in stylegan using a\ngaussianized latent space.",
403
+ "author": "Wulff, J.; and Torralba, A. 2020.",
404
+ "venue": "arXiv preprint arXiv:2009.06529.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "34": {
410
+ "title": "Sun database: Exploring a large collection of scene categories.",
411
+ "author": "Xiao, J.; Ehinger, K. A.; Hays, J.; Torralba, A.; and Oliva, A. 2016.",
412
+ "venue": "International Journal of Computer Vision, 119: 3\u201322.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "35": {
418
+ "title": "SmartBrush: Text and Shape Guided Object Inpainting with Diffusion\nModel.",
419
+ "author": "Xie, S.; Zhang, Z.; Lin, Z.; Hinz, T.; and Zhang, K. 2022.",
420
+ "venue": "arXiv preprint arXiv:2212.05034.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "36": {
426
+ "title": "Paint by example: Exemplar-based image editing with diffusion models.",
427
+ "author": "Yang, B.; Gu, S.; Zhang, B.; Zhang, T.; Chen, X.; Sun, X.; Chen, D.; and Wen,\nF. 2023.",
428
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision\nand Pattern Recognition, 18381\u201318391.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "37": {
434
+ "title": "In-domain gan inversion for real image editing.",
435
+ "author": "Zhu, J.; Shen, Y.; Zhao, D.; and Zhou, B. 2020.",
436
+ "venue": "In Computer Vision\u2013ECCV 2020: 16th European Conference,\nGlasgow, UK, August 23\u201328, 2020, Proceedings, Part XVII 16, 592\u2013608.\nSpringer.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "38": {
442
+ "title": "Generative visual manipulation on the natural image manifold.",
443
+ "author": "Zhu, J.-Y.; Kr\u00e4henb\u00fchl, P.; Shechtman, E.; and Efros, A. A. 2016.",
444
+ "venue": "In Computer Vision\u2013ECCV 2016: 14th European Conference,\nAmsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14,\n597\u2013613. Springer.",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "39": {
450
+ "title": "Unpaired image-to-image translation using cycle-consistent\nadversarial networks.",
451
+ "author": "Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017.",
452
+ "venue": "In Proceedings of the IEEE international conference on computer\nvision, 2223\u20132232.",
453
+ "url": null
454
+ }
455
+ }
456
+ ],
457
+ "url": "http://arxiv.org/html/2309.09466v2"
458
+ }
20240119/2309.14393v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240119/2309.16284v2.json ADDED
@@ -0,0 +1,346 @@
1
+ {
2
+ "title": "NOMAD: Unsupervised Learning of Perceptual Embeddings for Speech Enhancement and Non-matching Reference Audio Quality Assessment",
3
+ "abstract": "This paper presents NOMAD (Non-Matching Audio Distance), a differentiable perceptual similarity metric that measures the distance of a degraded signal against non-matching references. The proposed method is based on learning deep feature embeddings via a triplet loss guided by the Neurogram Similarity Index Measure (NSIM) to capture degradation intensity. During inference, the similarity score between any two audio samples is computed through Euclidean distance of their embeddings. NOMAD is fully unsupervised and can be used in general perceptual audio tasks for audio analysis e.g. quality assessment and generative tasks such as speech enhancement and speech synthesis.\nThe proposed method is evaluated with 3 tasks. Ranking degradation intensity, predicting speech quality, and as a loss function for speech enhancement. Results indicate NOMAD outperforms other non-matching reference approaches in both ranking degradation intensity and quality assessment, exhibiting competitive performance with full-reference audio metrics. NOMAD demonstrates a promising technique that mimics human capabilities in assessing audio quality with non-matching references to learn perceptual embeddings without the need for human-generated labels.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Objective speech and audio quality assessment techniques include full-reference metrics [1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###], using both degraded and clean signals, and no-reference metrics [5 ###reference_5###, 6 ###reference_6###, 7 ###reference_7###, 8 ###reference_8###, 9 ###reference_9###] that predict mean opinion scores (MOS) from the degraded signal only. No-reference metrics overcome issues of full-reference metrics, like sensitivity to imperceptible differences between degraded and reference signals [4 ###reference_4###], as well as the lack of need for a reference signal. However, no-reference metrics assume absolute quality, as MOS is given without a reference, using the absolute category rating (ACR) scale [10 ###reference_10###], which is calibrated with anchors. Yet, MOS distributions remain relative due to biases [11 ###reference_11###] and stimulus dependence. We observe that merging MOS databases for no-reference metrics is uncommon due to label space differences; a MOS of has different meanings across databases.\nIn [12 ###reference_12###], it was noted that no-reference models would need to learn the hidden references used by raters when judging quality, which can be very challenging. To solve this, [12 ###reference_12###] proposed NORESQA, which measures the perceived quality of a degraded signal against non-matching references, i.e., using any clean speech signal, not necessarily the clean counterpart of the degraded signal. The advantage of non-matching references is twofold: the clean counterpart is not required and quality can be measured relative to any other signal. If any clean speech is used as a non-matching reference, then absolute quality is measured. 
This approach reflects the higher capacity of humans in sensory judgement when comparing stimuli instead of absolute quality [13 ###reference_13###].\nIn this work, we introduce NOMAD (Non-Matching Audio Distance), a perceptual differentiable audio metric that operates with any non-matching reference. Our method creates an embedding space where signals with similar degradation intensity are close. We employ the triplet loss [14 ###reference_14###], a popular contrastive approach in computer vision for metric learning, to achieve this. We use degradation intensity as a label, which is linked to quality, and programmatically controlled without relying on human labels. However, the challenge with degradation intensity parameters is their lack of comparability across different degradations. To address this, we propose to use the Neurogram Similarity Index Measure (NSIM) [15 ###reference_15###], a spectro-temporal similarity between degraded and clean signals ranging from to . A non-matching reference metric must be reference-invariant, consistently producing the same score for a degraded signal regardless of the clean signal used for comparison. We attain this by training a feature space invariant to speaker and sentence characteristics, using the self-supervised learning (SSL) model wav2vec 2.0 [16 ###reference_16###], which has proven efficacy across diverse downstream tasks with distinct variational factors [17 ###reference_17###, 18 ###reference_18###, 19 ###reference_19###].\nNOMAD can be used in diverse applications: quality prediction, perceptual audio retrieval, parallel and non-parallel speech enhancement, and waveform synthesis like text-to-speech. We evaluate NOMAD\u2019s performance in three tasks: ranking degradation intensity, speech quality assessment, and speech enhancement training loss. 
The PyTorch code, pip package, and dataset generation code for training and validation are available on GitHub: https://github.com/alessandroragano/nomad ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Proposed Method",
15
+ "text": "Our approach relies on the assumption that audio quality is linked to degradation intensity. We aim to develop a similarity metric that captures degradation intensity in degraded audio, irrespective of factors like speaker or sentence attributes in speech. Let us start by considering a scenario with a single degradation type, such as background noise. We can model degradation using a function denoted as . When applied to clean speech , this function generates a signal with degradation intensity depending on a scalar parameter, e.g., SNR.\nGiven two values and , along with the corresponding degraded samples and , the degradation parameter can be used as a label (the direction of depends on the type of degradation) to learn a similarity function that follows the constraint:\nThe idea is to induce a semantic order in the feature space based on the level of degradation. Using only one degradation is limiting for generalization. A perceptual audio similarity metric should be able to capture information from multiple degradations. Let us consider the case where the clean speech is perturbed with two different degradations producing two signals and . In this scenario, each degradation is controlled by a single scalar parameter and it is not possible to establish an order between the two parameters such that\nIn order to establish cross-degradation similarity with respect to the clean reference, we propose to use the NSIM [15 ###reference_15###], which is a spectro-temporal measure of similarity between a degraded signal and its clean counterpart and has been proven to model human speech quality perception [1 ###reference_1###]. The NSIM is a score between and relative to the reference signal defined as:\nHere, represents the clean speech spectrogram, and denotes the degraded spectrogram. 
The NSIM relies on statistical measures: and are the mean and standard deviation of the reference spectrogram, while and are the mean and standard deviation of the degraded spectrogram. Additionally, denotes the cross-correlation between the reference and degraded spectrogram. The constant values and are determined based on the intensity range of the reference spectrogram and used for boundary conditions [1 ###reference_1###].\nBy employing the NSIM, we gain the capability to compare multiple degradations, leading to the formulation of Equation 3 ###reference_###:\nIn this equation, denotes the signal obtained using degradation , while is obtained from another degradation . The constraint implies that must be closer to than since the NSIM of degradation is higher.\nIn the following section, we illustrate how the NSIM can be leveraged to learn a perceptual distance function that is cross-reference, even though the NSIM is a score relative to the same reference signal."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Loss Function",
21
+ "text": "To model a perceptual similarity metric as formulated in Equation 3 ###reference_###, we employ the triplet margin loss function [14 ###reference_14###], supervised by the order of the NSIM scores. The triplet loss function is commonly used for deep metric learning and is formulated as follows:\nHere, , , and represent anchor, positive, and negative samples respectively, and is the neural network producing the embeddings.\nThe goal of the triplet loss is to make the squared Euclidean distance between the anchor-positive pair smaller and increase the distance between the anchor-negative pair by a specified margin .\nThe training process includes triplet sampling. As we lack categorical labels like and for relationships, we observe that the concept of \u201ccloseness\u201d is relative in regression problems. For instance, someone cm tall is closer to cm than cm. Similarly, an NSIM of is closer to than to .\nOur method employs the NSIM space to represent \u201ccloseness\u201d between speech samples. This develops feature embeddings capturing similarities among speech samples with similar NSIM and thus close degradation intensity. At test time, we measure the similarity score between embeddings using the Euclidean distance. The challenge of triplet sampling, crucial for the triplet loss to work [14 ###reference_14###], is addressed in the next section.\nOur approach does not aim to predict exact NSIM scores as they are relative to the reference signal used. Predicting NSIM would prevent comparing signals from different sources.\n###figure_1###"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Sampling Strategy",
27
+ "text": "Large batch sizes are required to find harder triplets in the embedding space, e.g., 1800 [14 ###reference_14###].\nTo avoid memory issues with large deep models, we combine two different sampling strategies, called easy and hard sampling, which do not require large batch sizes since we do not use the embedding space. Initial experiments yielded better results than using only one of them. Our strategy is based on the idea that harder triplets can be identified with the NSIM. Intuitively, the larger the NSIM between two samples, the easier the task, since the same reference is used in the triplet.\nFor a clean speech file , we consider the sample set which includes degraded versions of perturbed by degradations at levels and their corresponding NSIM value . We sample a clean file and an anchor from . The positive is the sample with the closest NSIM score to the anchor: . Easy and hard sampling differ in negative selection. In the easy strategy, the negative is sampled from . The set of negative samples includes all the samples of where the NSIM scores are more distant from the anchor than the positive by at least a margin .\nThe hard approach picks the closest sample to the anchor after the positive: , which is the hardest negative to contrast. See Figure 1 ###reference_### for easy and hard sampling illustrations."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Architecture",
33
+ "text": "In the method outlined, each triplet uses a distinct reference, but within each triplet, the anchor, positive, and negative samples all come from the same reference (Figure 2 ###reference_###). Contrasting degraded samples from the same reference rather than dissimilar ones during training helps create an embedding space that captures degradation levels, facilitating the use of non-matching references.\nTo illustrate this, consider a triplet with the same degradation, like adding background noise linearly to clean speech with intensity , yielding noisy speech .\nHere, has three distinct scalar parameters: for the anchor, for the positive, and for the negative example. The goal is to obtain an embedding space where content and degradation are disentangled as they are in the waveform space, making Equation 4 ###reference_###:\n\nThis objective is facilitated if we use the same clean speech . Indeed, during training the model is forced to cancel out the clean component which is the common part between the 3 signals and to rely on the residual between both pairs anchor-positive and anchor-negative respectively to minimize the loss.\nTo achieve this, a feature representation model is needed that can disentangle factors like content and degradation, ensuring that samples with similar degradation levels are close in the embedding space. Attenuating the clean component is not trivial since degradations are usually more complex than a sum between two signals e.g. convolution in reverberated speech.\nTo this end, we propose using the pre-trained BASE wav2vec 2.0 model [16 ###reference_16###]. It consists of 7 convolutional layers followed by 12 transformer layers, yielding a -dimensional feature vector per time frame. We take the average over the time dimension at the final transformer layer, followed by a ReLU + -dimensional embedding layer. 
Embeddings are L2 normalized as described in [14 ###reference_14###].\nWe emphasize that constructing triplets from the same reference and utilizing the pre-trained wav2vec 2.0 model are crucial to achieving the results shown below. We tested models built from scratch and triplets with negative examples from different references, but both led to decreased performance."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Usage",
39
+ "text": "NOMAD embeddings can be used as follows. Given a degraded recording and a non-matching clean reference , we calculate the Euclidean distance between the embeddings as\n where is the model producing perceptual embeddings. NOMAD scores may vary based on the reference used. To minimize variability, we calculate the mean on a large set of non-matching references as follows: .\n###figure_2###"
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Performance Evaluation",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "Experimental Setup",
51
+ "text": "Training and validation sets of NOMAD are created from the Librispeech [20 ###reference_20###] partition train-clean-100 which consists of hours of English clean speech spoken by female speakers and male speakers and recorded at kHz. We choose perturbations: speech clipping, background noise, Opus, and mp3 codecs. Each perturbation is generated at levels. Speech clipping is generated by choosing the percentage of samples to clip in the waveforms with 5%, 10%, 25%, 40%, 60%. Background noise is controlled with the amount of noise injected in the clean signal with 0, 8, 15, 25, and 40 db SNR. Noise files are randomly extracted from the training set of the MS-SNSD dataset [21 ###reference_21###]. Speech codecs mp3 and Opus are generated with the following conditions: 8, 16, 32, 64, and 128 kbps.\nBoth easy and hard sampling subsets are created using triplets which are split into training and validation. Training and validation do not overlap in terms of clean speech sources and they include the same conditions.\nFor the easy sampling we set the hyperparameter to avoid negative samples that are too close to the positive as illustrated in Figure 1 ###reference_###. The margin in the triplet loss is set to .\nThe NSIM values are calculated from the ViSQOL v3 model [22 ###reference_22###] which outputs patchwise scores. To get utterance-level scores, the average of all patch NSIM scores is computed.\nDuring training we freeze the convolutional layers, finetune the transformer layers with a learning rate equal to 0.00001 and the embedding layer with a learning rate set to 0.0001. Both learning rates decay exponentially with a decay factor of every epochs without improvement. The batch size is set to 8.\nTraining is stopped when the triplet loss does not decrease on the validation set for 200 epochs.\n###figure_3### All the non-matching reference model scores are calculated using a sample of clean speech sources from the TSP database [23 ###reference_23###]. 
Recordings are downsampled to kHz and the speakers that are used in the TCD-VoIP database [24 ###reference_24###] are excluded since it is one of the test databases that we use below."
52
+ },
53
+ {
54
+ "section_id": "3.2",
55
+ "parent_section_id": "3",
56
+ "section_name": "Degradation Monotonicity",
57
+ "text": "A descriptive examination is done on the validation set, depicted in Figure 3 ###reference_###. Here, we display averaged NOMAD scores, ordered from low (closer to non-matching clean speech) to high (more distorted). NOMAD ranks all validation conditions except clipping, demonstrating its performance and adaptability to non-matching references.\nWe investigate unseen conditions and degradations. We assess degradation monotonicity concerning intensity and quality using Spearman\u2019s rank correlation coefficient (SC). For ranking degradation parameters, we create an artificial test set from the Librispeech partition test-clean. This includes 26 unseen conditions for mp3 and Opus, 20 for clip, and 25 for background noise, drawn from the MS-SNSD [21 ###reference_21###] test set. To assess out-of-domain degradations, we create 30 conditions of reverberated speech using the SoX audio effects library [25 ###reference_25###] and 6 conditions using the Vorbis codec. Every degraded sample is created from a distinct clean source file. Scores are calculated using clean speech sources from the TSP database as non-matching references.\nWe compare NOMAD with 2 baselines; the average over the last transformer layer of the pre-trained BASE model wav2vec 2.0 and NORESQA, summarized in Table 1 ###reference_###. Results show NOMAD outperforms in most conditions, except clipping, where wav2vec 2.0 ranking is better. This highlights wav2vec 2.0\u2019s suitability as a pre-trained model for NOMAD, with our approach contributing to NOMAD\u2019s superior performance.\nWhile ranking by degradation intensity has its limitations, as it may not always reflect perception, we conduct a degradation-wise evaluation against MOS using the TCD-VoIP database, which includes both seen and unseen degradations. 
Table 1 ###reference_### confirms NOMAD\u2019s perceptual ranking ability, surpassing wav2vec 2.0 and NORESQA in all degradations except background noise.\n\nSC per condition (left: Ranking Intensity; right: Ranking Quality, TCD-VoIP):\nModel | NOISE | OPUS | MP3 | CLIP | VORB. | REV. || CLIP | NOISE | ECHO | CHOP | CSPKR\nNOMAD | -0.74 | -0.68 | -0.73 | 0.89 | -0.83 | 0.89 || -0.98 | -0.70 | -0.84 | -0.86 | -0.82\nw2v | -0.73 | -0.42 | -0.54 | 0.92 | 0.03 | 0.87 || -0.93 | -0.79 | -0.76 | -0.33 | -0.66\nNORESQA | -0.41 | -0.20 | -0.45 | 0.64 | -0.77 | 0.81 || -0.52 | -0.18 | -0.01 | -0.37 | -0.52"
58
+ },
59
+ {
60
+ "section_id": "3.3",
61
+ "parent_section_id": "3",
62
+ "section_name": "Speech Quality Assessment",
63
+ "text": "We evaluate NOMAD for speech quality assessment using Pearson\u2019s correlation coefficient (PC) and SC of the NOMAD score against MOS. We consider 4 different speech MOS databases that cover a broad range of degradations. The ITU-T Supplement 23 to the P series of the ITU-T Recommendations Experiment 1 (P23 EXP1) and Experiment 3 (P23 EXP3) [26 ###reference_26###] are used to evaluate various codecs and an 8 kbps codec under different channel degradations respectively. The TCD-VoIP database is used to test typical degradations occurring in VoIP communications [24 ###reference_24###]. The Genspeech database includes parametric and generative codecs presenting differences such as slight pitch shift and microalignments which are imperceptible but penalized by full-reference metrics (ViSQOL, PESQ) [4 ###reference_4###]. The results aggregated per condition (Table 2 ###reference_###) show that NOMAD outperforms both NORESQA and the wav2vec 2.0 features and exhibits competitive results with full-reference metrics. Our method shows high invariance to clean speech demonstrated by the very close correlation scores between the non-matching reference NOMAD version and the full-reference mode (NOMAD FR) where we only used the clean counterpart as a reference.\n\nPC / SC per database (NMR: non-matching reference, FR: full reference):\nType | Model | P23 EXP 1 | P23 EXP 3 | TCD-VoIP | GENSPEECH\nNMR | NOMAD | -0.85 / -0.88 | -0.85 / -0.75 | -0.64 / -0.64 | -0.94 / -0.90\nNMR | w2v | -0.26 / -0.27 | -0.38 / -0.36 | -0.39 / -0.54 | -0.67 / -0.90\nNMR | NORESQA | -0.24 / -0.20 | -0.46 / -0.23 | -0.11 / -0.14 | -0.69 / -0.69\nFR | NOMAD FR | -0.86 / -0.87 | -0.86 / -0.73 | -0.63 / -0.65 | -0.96 / -0.90\nFR | CDPAM | -0.48 / -0.35 | -0.39 / -0.37 | -0.76 / -0.79 | -0.93 / -0.90\nFR | ViSQOL | 0.87 / 0.89 | 0.78 / 0.67 | 0.74 / 0.76 | 0.64 / 0.74\nFR | WARP-Q | -0.88 / -0.92 | -0.87 / -0.79 | -0.90 / -0.92 | -0.89 / -0.90\nFR | PESQ | 0.91 / 0.96 | 0.87 / 0.87 | 0.91 / 0.91 | 0.49 / 0.52"
64
+ },
65
+ {
66
+ "section_id": "3.4",
67
+ "parent_section_id": "3",
68
+ "section_name": "Speech Enhancement",
69
+ "text": "We evaluate NOMAD loss for the speech enhancement task using the model DEMUCS [27 ###reference_27###] following a similar approach of [3 ###reference_3###]. We train three models using the Valentini speech dataset (28 speakers) [28 ###reference_28###]; (1) The original DEMUCS trained from scratch using L1 loss between waveforms and multi-resolution STFT [27 ###reference_27###]; (2) MT NOMAD combines the losses of DEMUCS with the NOMAD loss in a multitask fashion; (3) FT NOMAD is based on finetuning the pretrained DEMUCS model in (1) using the NOMAD loss only. The NOMAD loss is computed as the sum of the L1 distance between clean speech and the estimated speech of each transformer layer and the embedding layer for every time frame. The frame-wise approach is preferred for this task to encourage a local reconstruction that might be lost in the final embedding layer.\nEvery model is trained for 110 epochs with batch size set to 8. For testing, the best model is taken as the one with the lowest validation loss. The validation partition is created by leaving out 2 speakers from the Valentini training set as mentioned in the DEMUCS repo [27 ###reference_27###]. Results are evaluated on the Valentini test set with PESQ and a listening test. PESQ is computed on the entire test set which includes 824 noisy speech samples at four SNR values 2.5, 7.5, 12.5, 17.5 dB, 1 male and 1 female speaker, and 5 noise types.\nA MUSHRA test is conducted with 12 samples, distributed in 4 recordings for 3 SNR values. For each SNR we take 2 male speaker and 2 female speaker samples and 4 noise types. In each MUSHRA session, we use 5 stimuli: noisy sample (anchor), clean (hidden reference), and three enhanced versions from DEMUCS, FT NOMAD, and MT NOMAD respectively. Listeners could also play the clean reference to compare.\nWe recruited 16 people for the listening test using the online platform Go listen [29 ###reference_29###]. 
We asked raters to indicate their knowledge of audio as follows: 80% as professionals working in the area of audio, 20% as audio enthusiasts, and 0% rarely paying attention to audio quality. Post-screening was done as indicated in the MUSHRA guidelines [30 ###reference_30###]. We removed 3 participants who judged the hidden reference under 90 for more than 15% of samples. Further, we removed another participant (audio enthusiast) who scored 0 on all enhanced models.\nIn Table 3 ###reference_###, for each SNR value we report the average PESQ, and the median and interquartile range for the MUSHRA test as recommended in [30 ###reference_30###]. Results indicate that both approaches improve over the baseline for both metrics. An inconsistency can be noted between subjective and objective scores, i.e., MT NOMAD exhibits the highest MUSHRA scores while FT NOMAD shows the highest PESQ scores. The MOS predictions from PESQ for the samples used in the subjective study had a high correlation (SC=0.89) with the subjective scores.\nThis speech enhancement study further demonstrates that NOMAD embeddings encode perceptual similarity and that they can also be applied to generative tasks. A potential future application to further showcase the versatility of NOMAD embeddings is in non-parallel speech enhancement, where any clean signal can serve as the ground truth.\n\nModel | PESQ at SNR 2.5 / 7.5 / 12.5 / 17.5 dB | MUSHRA median (IQR) at SNR 2.5 / 7.5 / 12.5 dB\nNoisy | 1.42 / 1.76 / 2.10 / 2.60 | 20 (10,52) / 30 (19,65) / 46 (20,82)\nDemucs (Baseline) | 2.40 / 2.83 / 3.06 / 3.31 | 58 (39,78) / 78 (51,90) / 84 (58,90)\nFT Nomad (Ours) | 2.43 / 2.88 / 3.14 / 3.42 | 70 (50,82) / 80 (50,90) / 88 (57,91)\nMT Nomad (Ours) | 2.42 / 2.84 / 3.10 / 3.36 | 72 (58,84) / 90 (63,94) / 90 (71,95)"
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusions",
75
+ "text": "We proposed NOMAD, a non-matching reference perceptual similarity metric that can be used for perceptual audio tasks. Future work will further analyse the role of wav2vec 2.0 in NOMAD. Its use is supported by its capacity to disentangle variational factors in speech and its superior performance compared to a model we trained from scratch. NOMAD outperforms other models in the task of ranking degradations and audio quality prediction with non-matching clean references. We observe that the fixed dimension of NOMAD embeddings helps in solving issues of microalignment of generative neural codecs, which is a known problem of full-reference metrics (ViSQOL, PESQ). Objective and subjective experiments show that NOMAD can be used as a perceptual loss for speech enhancement to further improve speech quality. Beyond the evaluated tasks, we believe that the proposed model could be used for many other generative tasks such as text-to-speech, as a feature extractor for no-reference quality metrics and to measure quality relative to any reference chosen."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.4.1.1\">Table 1</span>: </span>SC using degradation intensity and quality.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.5\" style=\"width:248.2pt;height:35.9pt;vertical-align:-0.2pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-416.3pt,59.8pt) scale(0.22963,0.22963) ;\">\n<p class=\"ltx_p\" id=\"S3.T1.5.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1\" style=\"font-size:298%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T1.5.1.1.1\" style=\"width:1080.7pt;height:156.3pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S3.T1.5.1.1.1.1\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1.1.1.1\" style=\"color:#000000;\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.5.1.1.1.1.1.1\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T1.5.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.1.1.1\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_colspan ltx_colspan_6\" id=\"S3.T1.5.1.1.1.1.1.1.1.1.2\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.1.1.2.1\">Ranking Intensity</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_6\" id=\"S3.T1.5.1.1.1.1.1.1.1.1.3\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.1.1.3.1\">Ranking Quality, TCD-VoIP</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.5.1.1.1.1.1.1.2.2\">\n<span class=\"ltx_td ltx_th 
ltx_th_row\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.1\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.2\" style=\"padding-top:3pt;padding-bottom:3pt;\">NOISE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.3\" style=\"padding-top:3pt;padding-bottom:3pt;\">OPUS</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.4\" style=\"padding-top:3pt;padding-bottom:3pt;\">MP3</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.5\" style=\"padding-top:3pt;padding-bottom:3pt;\">CLIP</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.6\" style=\"padding-top:3pt;padding-bottom:3pt;\">VORB.</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.7\" style=\"padding-top:3pt;padding-bottom:3pt;\">REV.</span>\n<span class=\"ltx_td\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.8\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.9\" style=\"padding-top:3pt;padding-bottom:3pt;\">CLIP</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.10\" style=\"padding-top:3pt;padding-bottom:3pt;\">NOISE</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.11\" style=\"padding-top:3pt;padding-bottom:3pt;\">ECHO</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.12\" style=\"padding-top:3pt;padding-bottom:3pt;\">CHOP</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.2.2.13\" style=\"padding-top:3pt;padding-bottom:3pt;\">CSPKR</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.5.1.1.1.1.1.1.3.3\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.1\" style=\"padding-top:3pt;padding-bottom:3pt;\">NOMAD</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.2\" 
style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.2.1\">-0.74</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.3\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.3.1\">-0.68</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.4\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.4.1\">-0.73</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.5\" style=\"padding-top:3pt;padding-bottom:3pt;\">0.89</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.6\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.6.1\">-0.83</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.7\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.7.1\">0.89</span></span>\n<span class=\"ltx_td ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.8\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.9\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.9.1\">-0.98</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.10\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.70</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.11\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.11.1\">-0.84</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T1.5.1.1.1.1.1.1.3.3.12\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.12.1\">-0.86</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.13\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.3.3.13.1\">-0.82</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.5.1.1.1.1.1.1.4.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.1\" style=\"padding-top:3pt;padding-bottom:3pt;\">w2v</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.2\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.73</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.3\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.42</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.4\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.54</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.5\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.5.1\">0.92</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.6\" style=\"padding-top:3pt;padding-bottom:3pt;\">0.03</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.7\" style=\"padding-top:3pt;padding-bottom:3pt;\">0.87</span>\n<span class=\"ltx_td\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.8\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.9\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.93</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.10\" style=\"padding-top:3pt;padding-bottom:3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.10.1\">-0.79</span></span>\n<span class=\"ltx_td 
ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.11\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.76</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.12\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.33</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.4.4.13\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.66</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.5.1.1.1.1.1.1.5.5\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.1\" style=\"padding-top:3pt;padding-bottom:3pt;\">NORESQA</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.2\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.41</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.3\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.20</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.4\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.45</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.5\" style=\"padding-top:3pt;padding-bottom:3pt;\">0.64</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.6\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.77</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.7\" style=\"padding-top:3pt;padding-bottom:3pt;\">0.81</span>\n<span class=\"ltx_td\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.8\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.9\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.52</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.10\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.18</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.11\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.01</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.12\" 
style=\"padding-top:3pt;padding-bottom:3pt;\">-0.37</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.5.1.1.1.1.1.1.5.5.13\" style=\"padding-top:3pt;padding-bottom:3pt;\">-0.52</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.5.1.1.1.1.1.1.6.6\">\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.1\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.2\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.3\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.4\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.5\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.6\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.7\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.8\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.9\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.10\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.11\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.12\" style=\"padding-top:3pt;padding-bottom:3pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T1.5.1.1.1.1.1.1.6.6.13\" 
style=\"padding-top:3pt;padding-bottom:3pt;\"></span></span>\n</span>\n</span></span></span>\n</span></span></span></p>\n</span></div>\n</figure>",
+ "capture": "Table 1: SC using degradation intensity and quality."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.4.1.1\">Table 2</span>: </span>PC and SC of non-matching reference (NMR) and full-reference (FR) models.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T2.5\" style=\"width:227.9pt;height:75pt;vertical-align:-0.3pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-321.7pt,105.4pt) scale(0.26156,0.26156) ;\">\n<p class=\"ltx_p\" id=\"S3.T2.5.1\"><span class=\"ltx_text\" id=\"S3.T2.5.1.1\" style=\"font-size:298%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T2.5.1.1.1\" style=\"width:871.3pt;height:286.6pt;vertical-align:-1.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S3.T2.5.1.1.1.1\"><span class=\"ltx_text\" id=\"S3.T2.5.1.1.1.1.1\" style=\"color:#000000;\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T2.5.1.1.1.1.1.1\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.2.1\">P23 EXP 1</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.3.1\">P23 EXP 3</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" 
id=\"S3.T2.5.1.1.1.1.1.1.1.1.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.4.1\">TCD-VoIP</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_2\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.1.1.5.1\">GENSPEECH</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.2.2\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.1.1\">Type</span></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.2.1\">Model</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">PC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">SC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">PC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">SC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">PC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">SC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.9\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">PC</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.2.2.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">SC</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.3.3\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">NMR</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">NOMAD</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.3.1\">-0.85</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.4.1\">-0.88</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.5.1\">-0.85</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.6.1\">-0.75</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.7.1\">-0.64</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.8.1\">-0.64</span></span>\n<span 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.9.1\">-0.94</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.3.3.10.1\">-0.90</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.4.4\">\n<span class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">w2v</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.26</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.27</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.38</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.36</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.39</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.54</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.67</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.4.4.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.90</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.5.5\">\n<span class=\"ltx_td ltx_th 
ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">NORESQA</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.24</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.20</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.46</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.23</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.11</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.14</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.69</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.5.5.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.69</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.6.6\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">FR</span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">NOMAD FR</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.86</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.4\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.87</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.86</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.73</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.63</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.65</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.9.1\">-0.96</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.6.6.10.1\">-0.90</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.7.7\">\n<span class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">CDPAM</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.48</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.35</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.39</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.6\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.37</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.76</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.79</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.93</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.7.7.10.1\">-0.90</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.8.8\">\n<span class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">ViSQOL</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.87</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.89</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.78</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.67</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.74</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.76</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.9\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.64</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.8.8.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.74</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.9.9\">\n<span class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">WARP-Q</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.88</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.92</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.5.1\">-0.87</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.79</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.90</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.8.1\">-0.92</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">-0.89</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.9.9.10.1\">-0.90</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.10.10\">\n<span class=\"ltx_td ltx_th ltx_th_row\" 
id=\"S3.T2.5.1.1.1.1.1.1.10.10.1\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">PESQ</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.3.1\">0.91</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.4.1\">0.96</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.5.1\">0.87</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.6.1\">0.87</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.7.1\">0.91</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.91</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.49</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T2.5.1.1.1.1.1.1.10.10.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\">0.52</span></span>\n<span class=\"ltx_tr\" id=\"S3.T2.5.1.1.1.1.1.1.11.11\">\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.1\" 
style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.2\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.3\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.4\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.5\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.6\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.7\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.8\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.9\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T2.5.1.1.1.1.1.1.11.11.10\" style=\"padding-top:1.5pt;padding-bottom:1.5pt;\"></span></span>\n</span>\n</span>\n</span></span>\n</span></span></span></p>\n</span></div>\n</figure>",
+ "capture": "Table 2: PC and SC of non-matching reference (NMR) and full-reference (FR) models."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.6.1.1\">Table 3</span>: </span>Speech enhancement performance evaluation.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T3.2\" style=\"width:248.2pt;height:120pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-6.2pt,3.0pt) scale(0.95238,0.95238) ;\">\n<p class=\"ltx_p\" id=\"S3.T3.2.2\"><span class=\"ltx_text\" id=\"S3.T3.2.2.2\" style=\"font-size:90%;\">\n<span class=\"ltx_inline-block ltx_transformed_outer\" id=\"S3.T3.2.2.2.2\" style=\"width:260.6pt;height:126pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1,1) ;\">\n<span class=\"ltx_p\" id=\"S3.T3.2.2.2.2.2\"><span class=\"ltx_text\" id=\"S3.T3.2.2.2.2.2.2\" style=\"color:#000000;\">\n<span class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T3.2.2.2.2.2.2.2\">\n<span class=\"ltx_tbody\">\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.2\">\n<span class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.2.3\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt ltx_colspan ltx_colspan_4\" id=\"S3.T3.1.1.1.1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.1.1.1.1.1.1.1.1.1.1\">PESQ</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_tt ltx_colspan ltx_colspan_4\" id=\"S3.T3.2.2.2.2.2.2.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.2.2.1\">MUSHRA</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.3.1\">\n<span class=\"ltx_td ltx_th ltx_th_row\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.1\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.2\">2.5</span>\n<span class=\"ltx_td ltx_align_center\" 
id=\"S3.T3.2.2.2.2.2.2.2.3.1.3\">7.5</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.4\">12.5</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.5\">17.5</span>\n<span class=\"ltx_td\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.6\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.7\">2.5</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.8\">7.5</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.3.1.9\">12.5</span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.4.2\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.1\">Noisy</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.2\">1.42</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.3\">1.76</span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.4\">2.10</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.5\">2.60</span>\n<span class=\"ltx_td ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.6\"></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.7\">20 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.7.1\">(10,52)</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.8\">30 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.8.1\">(19,65)</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.9\">46 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.4.2.9.1\">(20,82)</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.5.3\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.1\">Demucs (<span class=\"ltx_text ltx_font_italic\" 
id=\"S3.T3.2.2.2.2.2.2.2.5.3.1.1\">Baseline</span>)</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.2\">2.40</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.3\">2.83</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.4\">3.06</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.5\">3.31</span>\n<span class=\"ltx_td\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.6\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.7\">58 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.7.1\">(39,78)</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.8\">78 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.8.1\">(51,90)</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.9\">84 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.5.3.9.1\">(58,90)</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.6.4\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.1\">FT Nomad (<span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.1.1\">Ours</span>)</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.2.1\">2.43</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.3.1\">2.88</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.4.1\">3.14</span></span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.5.1\">3.42</span></span>\n<span class=\"ltx_td\" 
id=\"S3.T3.2.2.2.2.2.2.2.6.4.6\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.7\">70 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.7.1\">(50,82)</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.8\">80 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.8.1\">(50,90)</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.9\">88 <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.6.4.9.1\">(57,91)</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.7.5\">\n<span class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.1\">MT Nomad (<span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.1.1\">Ours</span>)</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.2\">2.42</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.3\">2.84</span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.4\">3.10</span>\n<span class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.5\">3.36</span>\n<span class=\"ltx_td\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.6\"></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.7.1\">72</span> <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.7.2\">(58,84)</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.8.1\">90</span> <span class=\"ltx_text ltx_font_italic\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.8.2\">(63,94)</span></span>\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.9\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T3.2.2.2.2.2.2.2.7.5.9.1\">90</span> <span class=\"ltx_text ltx_font_italic\" 
id=\"S3.T3.2.2.2.2.2.2.2.7.5.9.2\">(71,95)</span></span></span>\n<span class=\"ltx_tr\" id=\"S3.T3.2.2.2.2.2.2.2.8.6\">\n<span class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.1\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.2\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.3\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.4\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.5\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.6\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.7\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.8\"></span>\n<span class=\"ltx_td ltx_border_tt\" id=\"S3.T3.2.2.2.2.2.2.2.8.6.9\"></span></span>\n</span>\n</span></span></span>\n</span></span></span></p>\n</span></div>\n</figure>",
+ "capture": "Table 3: Speech enhancement performance evaluation."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.16284v2_figure_1.png",
+ "caption": "Fig. 1: Easy sampling strategy (above). The conditions that have distance |Q_{k,m} - Q^{a}| lower than |Q^{p} - Q^{a}| + s are excluded. Hard sampling strategy (below). The negative is the one with the shortest distance from the anchor after the positive.",
+ "url": "http://arxiv.org/html/2309.16284v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2309.16284v2_figure_2.png",
+ "caption": "Fig. 2: Overview of the proposed method NOMAD.",
+ "url": "http://arxiv.org/html/2309.16284v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2309.16284v2_figure_3.png",
+ "caption": "Fig. 3: Validation set conditions sorted by the NOMAD scores.",
+ "url": "http://arxiv.org/html/2309.16284v2/x3.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cViSQOL: an objective speech quality model,\u201d",
+ "author": "Andrew Hines, Jan Skoglund, Anil C Kokaram, and Naomi Harte,",
+ "venue": "EURASIP Journal on Audio, Speech, and Music Processing, vol.\n2015, no. 1, 2015.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cPerceptual evaluation of speech quality (PESQ)-a new method for\nspeech quality assessment of telephone networks and codecs,\u201d",
+ "author": "Antony W Rix, John G Beerends, Michael P Hollier, and Andries P Hekstra,",
+ "venue": "in International Conference on Acoustics, Speech, and Signal\nProcessing (ICASSP). IEEE, 2001, vol. 2, pp. 749\u2013752.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cCDPAM: Contrastive learning for perceptual audio similarity,\u201d",
+ "author": "Pranay Manocha, Zeyu Jin, Richard Zhang, and Adam Finkelstein,",
+ "venue": "in International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2021, pp. 196\u2013200.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cWARP-Q: Quality prediction for generative neural speech codecs,\u201d",
+ "author": "Wissam A Jassim, Jan Skoglund, Michael Chinen, and Andrew Hines,",
+ "venue": "in International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2021, pp. 401\u2013405.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cNon-intrusive speech quality assessment using neural networks,\u201d",
+ "author": "Anderson R Avila, Hannes Gamper, Chandan Reddy, Ross Cutler, Ivan Tashev, and\nJohannes Gehrke,",
+ "venue": "in International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2019, pp. 631\u2013635.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cMore for less: Non-intrusive speech quality assessment with\nlimited annotations,\u201d",
+ "author": "Alessandro Ragano, Emmanouil Benetos, and Andrew Hines,",
+ "venue": "in 2021 13th International Conference on Quality of Multimedia\nExperience (QoMEX). IEEE, 2021, pp. 103\u2013108.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "\u201cQuality-net: An end-to-end non-intrusive speech quality assessment\nmodel based on BLSTM,\u201d",
+ "author": "Szu-Wei Fu, Yu Tsao, Hsin-Te Hwang, and Hsin-Min Wang,",
+ "venue": "in Proc. Interspeech, 2018, pp. 1873\u20131877.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cSESQA: semi-supervised learning for speech quality assessment,\u201d",
+ "author": "Joan Serr\u00e0, Jordi Pons, and Santiago Pascual,",
+ "venue": "in International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2021, pp. 381\u2013385.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cWawenets: A no-reference convolutional waveform-based approach to\nestimating narrowband and wideband speech quality,\u201d",
+ "author": "Andrew A Catellier and Stephen D Voran,",
+ "venue": "in International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2020, pp. 331\u2013335.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cITU-T Recommendation P.800: Methods for subjective determination\nof transmission quality,\u201d 1996.",
+ "author": "International Telecommunication Union,",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cOn some biases encountered in modern audio quality listening tests\n- A review,\u201d",
+ "author": "Slawomir Zielinski, Francis Rumsey, and S\u00f8ren Bech,",
+ "venue": "Journal of the Audio Engineering Society, vol. 56, no. 6, pp.\n427\u2013451, 2008.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cNORESQA: A framework for speech quality assessment using\nnon-matching references,\u201d",
+ "author": "Pranay Manocha, Buye Xu, and Anurag Kumar,",
+ "venue": "Advances in Neural Information Processing Systems, vol. 34, pp.\n22363\u201322378, 2021.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Sensory evaluation of food: principles and practices, vol. 2,",
+ "author": "Harry T Lawless, Hildegarde Heymann, et al.,",
+ "venue": "Springer, 2010.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cFacenet: A unified embedding for face recognition and\nclustering,\u201d",
+ "author": "Florian Schroff, Dmitry Kalenichenko, and James Philbin,",
+ "venue": "in Proceedings of the IEEE conference on computer vision and\npattern recognition, 2015, pp. 815\u2013823.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cSpeech intelligibility prediction using a neurogram similarity\nindex measure,\u201d",
+ "author": "Andrew Hines and Naomi Harte,",
+ "venue": "Speech Communication, vol. 54, no. 2, pp. 306\u2013320, 2012.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cwav2vec 2.0: A framework for self-supervised learning of speech\nrepresentations,\u201d",
+ "author": "Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli,",
+ "venue": "Advances in neural information processing systems, vol. 33, pp.\n12449\u201312460, 2020.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cExploring the influence of fine-tuning data on wav2vec 2.0 model\nfor blind speech quality prediction,\u201d",
+ "author": "Helard Becerra, Alessandro Ragano, and Andrew Hines,",
+ "venue": "in Proc. Interspeech, 2022, pp. 4088\u20134092.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cMultimodal emotion recognition with high-level speech and text\nfeatures,\u201d",
+ "author": "Mariana Rodrigues Makiuchi, Kuniaki Uto, and Koichi Shinoda,",
+ "venue": "in 2021 IEEE Automatic Speech Recognition and Understanding\nWorkshop (ASRU). IEEE, 2021, pp. 350\u2013357.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cA Step Towards Preserving Speakers\u2019 Identity While Detecting\nDepression Via Speaker Disentanglement,\u201d",
+ "author": "Vijay Ravi, Jinhan Wang, Jonathan Flint, and Abeer Alwan,",
+ "venue": "in Proc. Interspeech, 2022, pp. 3338\u20133342.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "\u201cLibrispeech: an ASR corpus based on public domain audio books,\u201d",
+ "author": "Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur,",
+ "venue": "in International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2015, pp. 5206\u20135210.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "\u201cA Scalable Noisy Speech Dataset and Online Subjective Test\nFramework,\u201d",
+ "author": "Chandan KA Reddy, Ebrahim Beyrami, Jamie Pool, Ross Cutler, Sriram Srinivasan,\nand Johannes Gehrke,",
+ "venue": "in Proc. Interspeech, 2019, pp. 1816\u20131820.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "\u201cViSQOL v3: An open source production ready objective speech and\naudio metric,\u201d",
+ "author": "Michael Chinen, Felicia SC Lim, Jan Skoglund, Nikita Gureev, Feargus O\u2019Gorman,\nand Andrew Hines,",
+ "venue": "in 2020 Twelfth international conference on quality of\nmultimedia experience (QoMEX). IEEE, 2020.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "\u201cTSP speech database,\u201d",
+ "author": "Peter Kabal,",
+ "venue": "McGill University, Database Version, vol. 1, no. 0, pp. 09\u201302,\n2002.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "\u201cTCD-VoIP, a research database of degraded speech for assessing\nquality in VoIP applications,\u201d",
+ "author": "Naomi Harte, Eoin Gillen, and Andrew Hines,",
+ "venue": "in 2015 Seventh International Workshop on Quality of Multimedia\nExperience (QoMEX). IEEE, 2015.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "\u201cITU-T P. Supplement 23 coded-speech database,\u201d 1998.",
+ "author": "International Telecommunication Union,",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "\u201cReal time speech enhancement in the waveform domain,\u201d",
+ "author": "Alexandre D\u00e9fossez, Gabriel Synnaeve, and Yossi Adi,",
+ "venue": "Proc. Interspeech, pp. 3291\u20133295, 2020.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "\u201cNoisy speech database for training speech enhancement algorithms\nand TTS models,\u201d 2017.",
+ "author": "C. Valentini-Botinhao,",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "\u201cGo listen: an end-to-end online listening test platform,\u201d",
+ "author": "Dan Barry, Qijian Zhang, Pheobe Wenyi Sun, and Andrew Hines,",
+ "venue": "Journal of Open Research Software, vol. 9, no. 1, 2021.",
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "\u201cITU-R Recommendation BS.1534-3: Method for the subjective\nassessment of intermediate quality level of audio systems,\u201d 2015.",
+ "author": "International Telecommunication Union,",
+ "venue": null,
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2309.16284v2"
+ }