yilunzhao committed
Commit 2d18767 · verified · 1 Parent(s): 8f5b997

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 20240322/1912.07383v2.json +0 -0
  2. 20240322/2204.01368v3.json +0 -0
  3. 20240322/2210.06015v4.json +491 -0
  4. 20240322/2211.06003v2.json +0 -0
  5. 20240322/2212.06370v4.json +151 -0
  6. 20240322/2212.10744v2.json +55 -0
  7. 20240322/2302.05440v2.json +0 -0
  8. 20240322/2302.05951v2.json +559 -0
  9. 20240322/2302.07433v5.json +0 -0
  10. 20240322/2304.07696v2.json +182 -0
  11. 20240322/2305.10061v2.json +0 -0
  12. 20240322/2305.13802v3.json +175 -0
  13. 20240322/2306.03111v2.json +0 -0
  14. 20240322/2306.04337v2.json +237 -0
  15. 20240322/2306.04366v4.json +0 -0
  16. 20240322/2306.06721v3.json +479 -0
  17. 20240322/2306.13185v2.json +372 -0
  18. 20240322/2306.16973v2.json +344 -0
  19. 20240322/2307.05279v2.json +0 -0
  20. 20240322/2307.08080v2.json +272 -0
  21. 20240322/2307.08309v3.json +0 -0
  22. 20240322/2308.04025v3.json +0 -0
  23. 20240322/2308.13712v3.json +0 -0
  24. 20240322/2309.07139v2.json +126 -0
  25. 20240322/2309.07289v3.json +659 -0
  26. 20240322/2309.09510v2.json +0 -0
  27. 20240322/2309.11639v2.json +0 -0
  28. 20240322/2309.13456v2.json +298 -0
  29. 20240322/2309.13950v3.json +0 -0
  30. 20240322/2309.14913v2.json +106 -0
  31. 20240322/2309.15271v2.json +147 -0
  32. 20240322/2310.00354v3.json +0 -0
  33. 20240322/2310.10065v2.json +150 -0
  34. 20240322/2311.03821v3.json +595 -0
  35. 20240322/2311.04147v2.json +0 -0
  36. 20240322/2311.07440v2.json +459 -0
  37. 20240322/2311.10278v2.json +477 -0
  38. 20240322/2311.14033v2.json +0 -0
  39. 20240322/2312.01697v4.json +0 -0
  40. 20240322/2312.03408v4.json +0 -0
  41. 20240322/2312.04964v2.json +0 -0
  42. 20240322/2312.09016v2.json +281 -0
  43. 20240322/2312.10070v2.json +0 -0
  44. 20240322/2312.12973v2.json +570 -0
  45. 20240322/2312.17543v2.json +704 -0
  46. 20240322/2401.05224v2.json +0 -0
  47. 20240322/2401.05943v2.json +102 -0
  48. 20240322/2401.11170v2.json +0 -0
  49. 20240322/2402.00631v2.json +220 -0
  50. 20240322/2402.14704v3.json +0 -0
20240322/1912.07383v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2204.01368v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2210.06015v4.json ADDED
@@ -0,0 +1,491 @@
+ {
+ "title": "EC-NAS: Energy Consumption Aware Tabular Benchmarks for Neural Architecture Search",
+ "abstract": "Energy consumption from the selection, training, and deployment of deep learning models has seen a significant uptick recently. This work aims to facilitate the design of energy-efficient deep learning models that require fewer computational resources and prioritize environmental sustainability by focusing on energy consumption. Neural architecture search (NAS) benefits from tabular benchmarks, which evaluate NAS strategies cost-effectively through pre-computed performance statistics. We advocate for including energy efficiency as an additional performance criterion in NAS. To this end, we introduce an enhanced tabular benchmark encompassing data on energy consumption for varied architectures. The benchmark, designated as EC-NAS (source code is available at: https://github.com/saintslab/EC-NAS-Bench), has been made available in an open-source format to advance research in energy-conscious NAS.\nEC-NAS incorporates a surrogate model to predict energy consumption, aiding in diminishing the energy expenditure of dataset creation. Our findings emphasize the potential of EC-NAS by leveraging multi-objective optimization algorithms, revealing a balance between energy usage and accuracy. This suggests the feasibility of identifying energy-lean architectures with little or no compromise in performance.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "###figure_1### Neural Architecture Search (NAS) strategies, which explore model architectures based on training and evaluation metrics, have demonstrated their ability to reveal novel designs with state-of-the-art performance [1, 2, 3]. While promising, NAS comes with computational and energy-intensive demands, leading to significant environmental concerns due to the carbon footprint incurred by its energy consumption [4, 5, 6]. Given the rapidly increasing computational requirements of deep learning models [7], there is an imperative to address the balance between performance and resource efficiency.\nEfficient evaluation of NAS strategies has gained traction, using pre-computed performance statistics in tabular benchmarks as well as surrogate and one-shot models [8, 9, 10, 2, 11].\nNevertheless, the primary focus remains on performance, and the trade-off between performance and energy efficiency is often overlooked. This trade-off is visually represented in Figure 1, illustrating the potential to find energy-efficient models without compromising performance. Aligning with recent advancements in energy-aware NAS research, we advocate for integrating energy consumption as a pivotal metric in tabular NAS benchmarks. We aim to uncover inherently efficient deep learning models, leveraging pre-computed energy statistics for sustainable model discovery. This perspective is supported by recent works, such as EA-HAS-Bench [12], which emphasizes the trade-offs between performance and energy consumption. Furthermore, the diverse applications of NAS in areas like speech emotion recognition [13] and visual-inertial odometry [14] underscore its versatility and the need for efficiency."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Energy Awareness in NAS",
+ "text": "Building upon the foundational NAS-Bench-101 [10], we introduce our benchmark, EC-NAS, to accentuate the imperative of energy efficiency in NAS. Our adaptation of this dataset, whose original computation consumed the equivalent of an exorbitant 100 TPU years of compute time, serves our broader mission of steering NAS methodologies towards energy consumption awareness."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Architectural Design and Blueprint",
+ "text": "Central to our method are architectures tailored for CIFAR-10 image classification [15]. We introduce additional objectives to emphasize the significance of hardware-specific efficiency trends in deep learning models. The architectural space is confined to the topological space of cells, with each cell being a configurable feedforward network. In terms of cell encoding, these individual cells are represented as directed acyclic graphs (DAGs). Each DAG, $G$, has $V$ vertices (or nodes) and edges described in a binary adjacency matrix $A \\in \\{0, 1\\}^{V \\times V}$. The set of operations (labels) that each node can realise is given by $\\mathcal{O} = \\{3 \\times 3 \\text{ conv}, 1 \\times 1 \\text{ conv}, 3 \\times 3 \\text{ max-pool}\\}$. Two of the nodes are always fixed as the input and output of the network. The remaining $V - 2$ nodes can take up one of the labels in $\\mathcal{O}$. The connections between nodes of the DAG are encoded in the upper-triangular adjacency matrix $A$ with no self-connections (zero main diagonal entries). For a given architecture, every entry $A_{ij} = 1$ denotes an edge from node $i$ to node $j$; together with the node operations, this yields the labelled adjacency matrix of the architecture."
+ },
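To make the cell encoding above concrete, here is a minimal sketch in Python/NumPy. The operation names follow NAS-Bench-101; the specific adjacency matrix is an illustrative choice, not one drawn from the benchmark.

```python
import numpy as np

# One cell: an upper-triangular binary adjacency matrix over 7 nodes plus
# one operation label per interior node (NAS-Bench-101 operation names).
OPS = ["conv3x3-bn-relu", "conv1x1-bn-relu", "maxpool3x3"]

adjacency = np.array([
    [0, 1, 1, 0, 0, 0, 0],   # input feeds nodes 1 and 2
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 1],   # node 5 feeds the output
    [0, 0, 0, 0, 0, 0, 0],
], dtype=np.int8)

# First and last labels are fixed; interior nodes take labels from OPS.
labels = ["input", OPS[0], OPS[1], OPS[2], OPS[0], OPS[1], "output"]

# Validity checks implied by the construction: strictly upper-triangular
# (feed-forward, no self-loops) and one label per node.
assert np.all(np.tril(adjacency) == 0)
assert len(labels) == adjacency.shape[0]
```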
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Energy Measures in NAS",
+ "text": "Traditional benchmarks, while insightful, often fall short of providing a complete energy consumption profile. In EC-NAS, we bring the significance of energy measures to the forefront, crafting a comprehensive view that synthesizes both hardware and software intricacies. The mainstays of neural network training \u2013 GPUs and TPUs \u2013 are notorious for their high energy consumption [6, 16]. To capture these nuances, we adapt the Carbontracker tool [6] to our specific needs, allowing us to observe total energy costs, computational times, and aggregate carbon footprints."
+ },
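A minimal sketch of how training can be instrumented with Carbontracker [6] to log per-epoch energy and carbon costs. Note that EC-NAS uses an adapted version of the tool rather than this vanilla usage, and `train_one_epoch` is a hypothetical placeholder.

```python
from carbontracker.tracker import CarbonTracker

def train_one_epoch():
    """Placeholder for one epoch of model training."""
    pass

max_epochs = 108  # one of the epoch budgets used in the benchmark
tracker = CarbonTracker(epochs=max_epochs)

for epoch in range(max_epochs):
    tracker.epoch_start()
    train_one_epoch()
    tracker.epoch_end()

tracker.stop()  # logs total energy (kWh) and carbon footprint (CO2eq)
```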
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Surrogate Model for Energy Estimation",
+ "text": "The landscape of NAS has transformed to encompass a broader spectrum of metrics. Energy consumption, pivotal during model training, offers insights beyond the purview of traditional measures such as floating-point operations (FPOPs) and computational time. Given the variability in computational time, owing to diverse factors like parallel infrastructure, this metric can occasionally be misleading. Energy consumption, in contrast, lends itself as a more consistent and comprehensive measure, factoring in software and hardware variations. We measure the energy consumption of training the architectures on the CIFAR-10 dataset, following the protocols of NAS-Bench-101. The in-house SLURM cluster, powered by an NVIDIA Quadro RTX 6000 GPU and two Intel CPUs, provides an optimal environment.\nThe vast architecture space, however, introduces challenges for direct energy estimation. Our remedy is a surrogate-model approach: a multi-layer perceptron (MLP) trained on a representative subset of architectures. This surrogate model adeptly predicts energy consumption patterns, bridging computational demand and energy efficiency. Its efficacy is highlighted by the strong correlation between its predictions and actual energy consumption values, as illustrated in Figure 2.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7###"
+ },
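A plausible featurisation for such a surrogate, shown as a hedged sketch: flatten the strict upper triangle of the adjacency matrix and one-hot encode the interior operation labels into a fixed-length vector. The paper does not specify its exact input encoding, so treat this as an assumption.

```python
import numpy as np

OPS = ["conv3x3-bn-relu", "conv1x1-bn-relu", "maxpool3x3"]

def encode(adjacency: np.ndarray, labels: list[str]) -> np.ndarray:
    """Flatten a (adjacency, labels) cell into a surrogate input vector."""
    n = adjacency.shape[0]
    triu = adjacency[np.triu_indices(n, k=1)]   # 21 bits for n = 7
    onehots = []
    for op in labels[1:-1]:                     # skip the input/output nodes
        onehot = np.zeros(len(OPS))
        onehot[OPS.index(op)] = 1.0
        onehots.append(onehot)
    # 21 edge bits + 5 interior nodes * 3 ops = 36 dimensions for n = 7
    return np.concatenate([triu.astype(float), *onehots])
```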
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "Dataset Analysis and Hardware Consistency",
+ "text": "Understanding architectural characteristics and the trade-offs they introduce is crucial. This involves studying operations, their impacts on efficiency and performance, as well as the overarching influence of hardware on energy costs. Training time and energy consumption naturally increase with model size. However, gains in performance tend to plateau for models characterized by larger DAGs. Interestingly, while parameter variation across model sizes remains minimal, training time and energy consumption show more significant variability for larger models. These findings highlight the multifaceted factors affecting performance and efficiency.\nDifferent operations can also have a profound impact on performance. For instance, specific operation replacements significantly boost validation accuracy while increasing energy consumption without increasing training time. This complex relationship between training time, energy consumption and performance underscores the importance of a comprehensive approach in NAS. The impact of swapping one operation for another on various metrics, including energy consumption, training time, validation accuracy, and parameter count, is captured in Figure 3.\nIn EC-NAS, we further probed the energy consumption patterns of the benchmarked models across various GPUs. This exploration, depicted in Figure 4, confirms the flexibility of the benchmark across different hardware environments. This adaptability paves the way for advanced NAS strategies, notably for multi-objective optimization (MOO). It signifies a paradigm shift towards a balanced pursuit of performance and energy efficiency, echoing the call for sustainable computing.\n###figure_8### ###figure_9### ###figure_10### ###figure_11###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Leveraging EC-NAS in NAS Strategies",
+ "text": "Tabular benchmarks like EC-NAS offer insights into energy consumption alongside traditional performance measures, facilitating the exploration of energy-efficient architectures using multi-objective optimization (MOO) whilst emphasizing the rising need for sustainable computing.\n\nRole of Multi-objective Optimization in NAS:\nIn the context of NAS, MOO has emerged as an instrumental approach for handling potentially conflicting objectives. We utilize the EC-NAS benchmark to apply diverse MOO algorithms, encompassing our own simple evolutionary MOO algorithm (SEMOA) based on [17] and other prominent algorithms such as SH-EMOA and MS-EHVI from [18]. These methodologies are assessed against the conventional random search (RS) technique.\nOur exploration within EC-NAS spans both single-objective optimization (SOO) and MOO. We execute the algorithms across various training epoch budgets for a fixed number of evolutions and population size; for SOO, the number of evolutions was chosen to equate the discovery potential. Results, averaged over repeated trials, followed the methodology of [18].\nFor MOO, validation accuracy and the training energy cost (in kWh) were chosen as the dual objectives; for SOO, simply the performance metric. Given its indifference to parallel computing, energy consumption was chosen over training time. Inverse objectives were used for maximization tasks (e.g., minimizing one minus the validation accuracy).\n\nTrade-offs in Energy Efficiency and Performance:\nBalancing energy efficiency with performance presents a layered challenge in NAS. Figure 5 elucidates the architectural intricacies and the prowess of various MOO algorithms in identifying energy-conservative neural architectures.\nFigure 5 (left) evaluates the architecture discovery efficacy of MOO algorithms, presenting the median solutions achieved over multiple runs. SEMOA, in particular, showcases an even distribution of models attributed to its ability to exploit model locality. In contrast, SH-EMOA and MS-EHVI display a more substantial variation, highlighting the robust search space exploration of SEMOA.\nThe Pareto front, as depicted in Figure 5 (center), highlights the extrema and the knee point, which represents an optimal trade-off between objectives. The extrema prioritize energy efficiency or validation accuracy, while the knee point achieves a balanced feature distribution. The capability of MOO algorithms to navigate the NAS space effectively is evident in their identification of architectures that balance competing objectives."
+ },
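The Pareto front and knee point can be extracted from tabular records with a few lines of NumPy. This is a generic sketch: both objectives are minimized (energy in kWh and one minus validation accuracy), and the knee heuristic used here (maximum distance from the line through the two extrema) is a common choice rather than the paper's prescribed rule.

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """points[i] = (energy_kwh, 1 - val_accuracy); returns indices on the front."""
    idx = np.argsort(points[:, 0])      # sweep by increasing energy
    front, best = [], np.inf
    for i in idx:
        if points[i, 1] < best:         # strictly improves the second objective
            front.append(i)
            best = points[i, 1]
    return np.array(front)

def knee_point(front_pts: np.ndarray) -> int:
    """Index of the front point farthest from the line through the extrema."""
    a, b = front_pts[0], front_pts[-1]
    d = (b - a) / np.linalg.norm(b - a)
    rel = front_pts - a
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])  # distance to the line
    return int(np.argmax(dist))
```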
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Discussions",
+ "text": "Single versus Multi-objective Optimisation:\nFigure 5 and Table 1 capture the performance trends of solutions, elucidating that knee point solutions offer architectures with substantially less energy consumption at only a slight degradation in performance. Depending on the specific application, this might be an acceptable trade-off. If performance degradation is unacceptable, the Pareto front also provides alternative candidate solutions. For instance, an extremum solution achieves nearly the same performance as the SOO solution while consuming considerably less energy. This trend is consistent across various solutions.\n\nTraining Time vs. Energy Consumption:\nWhile the original NAS-Bench-101 dataset reports training time, it cannot replace energy consumption as a metric. Even though training time generally correlates with energy consumption in single-hardware regimes, the scenario changes with large-scale parallelism on multiple GPUs. Aggregate energy consumption encompasses parallel hardware and its associated overheads. Even in single-GPU scenarios, energy consumption provides insights into energy-efficient models. For instance, a small architecture might consume more energy on a large GPU due to under-utilization.\n\nEnergy-Efficient Tabular NAS Benchmarks:\nDespite the immense one-time cost of generating tabular benchmarks, these benchmarks have proven highly useful for efficient evaluation of NAS strategies. For instance, our EC-NAS dataset, predicting metrics after training models for only 4 epochs, results in a considerable reduction in resource costs compared to creating the dataset from scratch. Other techniques such as predictive modelling based on learning curves [19], gradient approximations [20], and surrogate models fitted to architecture subsets [9] also prove very useful in creating new architecture spaces to consider. However, incorporating energy consumption metrics is frequently overlooked, and challenges arise in integrating with existing NAS strategies. This is because NAS benchmarks and strategies are closely intertwined, which often restricts benchmarks to tailored strategies.\n\nCarbon-footprint Aware NAS:\nThe EC-NAS dataset provides various metrics for each architecture. By using MOO, NAS can directly optimize the carbon footprint of models. Although instantaneous energy consumption and carbon footprint are linearly correlated, fluctuations in instantaneous regional carbon intensities can introduce discrepancies during extended training periods [6]. By reporting the carbon footprint of model training in EC-NAS, we facilitate carbon-footprint-aware NAS [21]. In this work, our focus remains on energy consumption awareness, sidestepping the temporal and spatial variations of carbon intensity.\n\nEnergy Consumption Aware Few-shot NAS:\nWhile tabular benchmarks like NAS-Bench-101 [10] facilitate efficient exploration of various NAS strategies, they are constrained to specific architectures and datasets. Addressing this challenge involves one- or few-shot learning methods [22, 11]. A bridge between few-shot and surrogate tabular benchmarks emerges by combining surrogate models for predicting learning dynamics [9] with energy measurements. We have illustrated integrating surrogate models with existing tabular benchmarks, seamlessly extending these to surrogate benchmarks.\n\nLimitations: Our proposed approach has certain limitations. To manage the search space, which expands exponentially with the number of vertices in the network specification DAGs, we have limited the vertex count to 7, in line with NAS-Bench-101 [10]. Moreover, in EC-NAS, we utilized surrogate time and energy measurements, sidestepping the variability of training time. Despite these limitations, which primarily aim to conserve energy in experiments, the insights from these experiments can be extrapolated to broader architectural spaces."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We have enriched an established NAS benchmark by incorporating energy consumption and carbon footprint measures. EC-NAS, spanning the 4V, 5V and 7V architecture spaces, was crafted using an accurate surrogate model that predicts energy consumption. By showcasing Pareto-optimal solutions through MOO methods, we illuminate the potential for achieving significant energy reductions with minimal performance compromises. With its diverse metrics, EC-NAS invites further research into developing energy-efficient and environmentally sustainable models."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Additional Benchmarks and Metrics",
+ "text": "For all benchmarks in EC-NAS, we report operations, parameter count and performance metrics, similar to NAS-Bench-101, with the addition of energy consumption. However, we introduce separate benchmarks for models characterized by DAGs with $V = 4$ and $V = 5$ vertices, denoted the 4V and 5V spaces, respectively, where we also detail the carbon footprint. For the 4V and 5V spaces, the energy efficiency metrics are derived from direct measurements, independent of surrogate modeling. These datasets were compiled by performing exhaustive model training limited to 4 epochs, with the resource costs for the remaining epochs extrapolated through linear scaling.\nThe primary focus for efficiency metrics is quantifying resource costs specific to model training; however, we also report total resource costs, including computational overheads (e.g., data movement). Lastly, we include the average energy consumption of the computing hardware. A complete overview of the metrics relevant to this work is presented in Table 2."
+ },
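The linear scaling used for the 4V and 5V spaces amounts to simple proportional extrapolation from the 4-epoch measurements; the numbers below are illustrative.

```python
def extrapolate(measured_4_epochs: float, target_epochs: int) -> float:
    """Linearly scale a cost measured over 4 epochs to a larger budget."""
    return measured_4_epochs * (target_epochs / 4.0)

# e.g. 0.012 kWh measured over 4 epochs -> estimated cost at 108 epochs
estimated_kwh = extrapolate(0.012, 108)   # 0.324 kWh (illustrative numbers)
```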
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Measurements from Carbontracker",
+ "text": "Our measurements account for the energy consumption of Graphics Processing Units (GPUs), Central Processing Units (CPUs), and Dynamic Random Access Memory (DRAM), with the CPU energy usage inclusive of DRAM power consumption. Energy usage data is collected and logged at 10-second intervals, and this information is averaged over the duration of model training. The total energy consumed is then calculated and reported in kilowatt-hours (kWh), where 1 kWh = 3.6 MJ. In addition, we assess the emission of greenhouse gases (GHG) in terms of carbon dioxide equivalents (CO2eq), calculated by applying the carbon intensity metric, which denotes the CO2eq emitted per kWh of electricity generated. This carbon intensity data is updated every 15 minutes during model training from a designated provider.\nHowever, considering only the direct energy consumption of these components does not fully capture the carbon footprint of model training, as it overlooks the energy consumption of auxiliary infrastructure, such as data centers. To address this, we refine our estimations of energy usage and carbon footprint by incorporating the 2020 global average Power Usage Effectiveness (PUE) of data centers, which stands at 1.59, as reported in [23].\n###figure_12### ###figure_13### ###figure_14### ###figure_15###"
+ },
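The conversions described above compose as follows; all values except the PUE of 1.59 are illustrative.

```python
# Worked sketch of the Appendix B conversions.
JOULES_PER_KWH = 3.6e6           # 1 kWh = 3.6 MJ

energy_joules = 5.4e6            # measured GPU+CPU+DRAM energy (illustrative)
energy_kwh = energy_joules / JOULES_PER_KWH            # 1.5 kWh

pue = 1.59                       # 2020 global average datacenter PUE
adjusted_kwh = energy_kwh * pue                        # 2.385 kWh

carbon_intensity = 300.0         # gCO2eq per kWh (illustrative, region-dependent)
co2eq_kg = adjusted_kwh * carbon_intensity / 1000.0    # ~0.716 kg CO2eq
```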
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "Appendix C Surrogate Model Implementation",
+ "text": "We use a simple four-layered MLP with GELU activation functions on all layers except the final one, which maps to the scalar energy prediction.\nThe surrogate energy model is trained using actual energy measurements from randomly sampled architectures from the 7V space. The model was implemented in PyTorch [24] and trained on a single NVIDIA RTX 3060 GPU. We split the data into training, validation and test sets. The MLP is trained for 200 epochs to minimise a norm loss between the predicted and actual energy measurements using the Adam optimiser [25]."
+ },
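A sketch of a surrogate consistent with this description: four linear layers with GELU activations on all but the last, trained with Adam for 200 epochs. The hidden widths, learning rate, loss norm, and the synthetic data are placeholders, since the exact layer sequence and split ratios are not reproduced here.

```python
import torch
import torch.nn as nn

class EnergySurrogate(nn.Module):
    """Four-layer MLP with GELU on all but the final layer."""
    def __init__(self, in_dim: int = 36, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),   # scalar energy prediction (kWh)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Synthetic stand-ins for encoded architectures and measured energies.
x_train = torch.randn(512, 36)
y_train = torch.rand(512, 1)

model = EnergySurrogate()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)  # placeholder lr
loss_fn = nn.L1Loss()  # one norm-based choice; the exact norm is not given

for epoch in range(200):  # 200 epochs, as in Appendix C
    optimiser.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    optimiser.step()
```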
+ {
+ "section_id": "Appendix 4",
+ "parent_section_id": null,
+ "section_name": "Appendix D Additional Discussion",
+ "text": "Resource-constrained NAS. Resource-constrained NAS for obtaining efficient architectures has been explored mainly by optimising the run-time or the number of floating point operations (FPOs). For instance, the now widely popular EfficientNet architecture was discovered using a constraint on FPOs [4]. Optimising for FPOs, however, is not entirely indicative of the efficiency of models [26]. It has been reported that models with fewer FPOs could have bottleneck operations that consume the bulk of the training time [27], and some models with higher FPOs might have lower inference time [28]. Energy-consumption-optimised hyperparameter selection outside of NAS settings for large language models has recently been investigated in [29].\nSurrogate model adaptability. Our surrogate energy model shows promise in predicting energy consumption within our current search space. We have also adapted the surrogate model to the OFA search space, achieving comparable results in terms of energy consumption prediction. This suggests the potential for the surrogate model to be generalized and applied to other search spaces, broadening its applicability and usefulness in future research. Estimates for the reduction in compute costs for the EC-NAS benchmark datasets are presented in Table 3.\nWhile a comprehensive investigation of the surrogate model\u2019s performance in different search spaces is beyond the scope of this work, it is worth noting that the model could potentially serve as a valuable tool for researchers seeking to optimize energy consumption and other efficiency metrics across various architectural search spaces. Further studies focusing on the adaptability and performance of surrogate models in diverse search spaces will undoubtedly contribute to developing more efficient and environmentally sustainable AI models.\nHardware accelerators.\nHardware accelerators have become increasingly efficient and widely adopted for edge computing and similar applications. These specialized devices offer significant performance improvements and energy efficiency, allowing faster processing and lower power consumption than traditional computing platforms. However, deriving general development principles and design directions from these accelerators can be challenging due to their highly specialized nature. Moreover, measuring energy efficiency on such devices tends to be hardware-specific, with results that may not be easily transferable or applicable to other platforms. Despite these challenges, we acknowledge the importance and necessity of using hardware accelerators for specific applications and recognize the value of further development to improve energy efficiency and performance on these specialized devices."
+ },
+ {
+ "section_id": "Appendix 5",
+ "parent_section_id": null,
+ "section_name": "Appendix E Multi-objective optimisation",
+ "text": "Formally, let the MOO problem be described by $\\min_{x \\in \\mathcal{X}} f(x) = (f_1(x), \\ldots, f_m(x))$.\nHere $\\mathcal{X}$ denotes the search space of the optimisation problem and $m$ refers to the number of objectives. We assume w.l.o.g. that all objectives are to be minimized. For two points $x, x' \\in \\mathcal{X}$ we say that $x$ dominates $x'$ and write $x \\prec x'$ if $f_i(x) \\le f_i(x')$ for all $i$ and $f_j(x) < f_j(x')$ for at least one $j$. For two sets $A, B \\subseteq \\mathcal{X}$ we say that $A$ dominates $B$ and write $A \\prec B$ if every point in $B$ is dominated by at least one point in $A$. The subset of non-dominated solutions in a set $A$ is given by $\\mathrm{ndom}(A) = \\{x \\in A \\mid \\nexists x' \\in A : x' \\prec x\\}$. The Pareto front of a set $A$ is defined as $f(\\mathrm{ndom}(A))$ and, thus, the goal of MOO can be formalised as approximating the Pareto front of $\\mathcal{X}$.\nIn iterative MOO, the strategy is to step-wise improve a set of candidate solutions towards a sufficiently good approximation of the Pareto front. For the design of a MOO algorithm, it is important to have a way to rank two sets $A$ and $B$ w.r.t. the overall MOO goal even if neither $A \\prec B$ nor $B \\prec A$. This ranking can be done by the hypervolume measure. The hypervolume measure or $\\mathcal{S}$-metric (see [30]) of a set $A$ is the volume of the union of regions in objective space that are dominated by $A$ and bounded by some appropriately chosen reference point $r$:\n$\\mathcal{S}_r(A) = \\Lambda\\left( \\bigcup_{x \\in A} \\prod_{i=1}^{m} [f_i(x), r_i] \\right)$,\nwhere $\\Lambda$ is the Lebesgue measure.\nThe hypervolume is, up to weighting objectives, the only strictly Pareto compliant measure [31] in the sense that given two sets $A$ and $B$ we have $\\mathcal{S}_r(A) > \\mathcal{S}_r(B)$ if $A$ dominates $B$. As stated by [32], the worst-case approximation factor of a Pareto front obtained from any hypervolume-optimal set of size $\\mu$ is asymptotically equal to the best worst-case approximation factor achievable by any set of size $\\mu$, for both additive and relative approximation [33]. Now we define the contributing hypervolume of an individual $x \\in A$ as\n$\\Delta_r(x, A) = \\mathcal{S}_r(A) - \\mathcal{S}_r(A \\setminus \\{x\\})$.\nThe value $\\Delta_r(x, A)$ quantifies how much a candidate solution contributes to the total hypervolume of $A$ and can be regarded as a measure of the relevance of the point. Therefore, the contributing hypervolume is a popular criterion in MOO algorithms [34, 35, 36, 17]. If we iteratively optimize some solution set $P$, then points with low $\\Delta_r$ are candidates in an already crowded region of the current Pareto front, while points with high $\\Delta_r$ mark areas that are promising to explore further.\nIn this study, we used a simple MOO algorithm based on hypervolume maximisation, outlined in Algorithm 1 and inspired by [17]. The algorithm iteratively updates a set $P$ of candidate solutions, starting from a set of random network architectures. Dominated solutions are removed from $P$. Then new architectures are generated by first selecting architectures from $P$ and then modifying these architectures according to the perturbation described in Procedure 2. The new architectures are added to $P$ and the next iteration starts. In Procedure 2, the probability for changing (i.e., either adding or removing) an edge is chosen such that, in expectation, two edges are changed, and the probability for changing a node is set such that, in expectation, every second perturbation changes the label of a node.\nThe selection of the architectures from the current solution set $P$ is described in Procedure 3. We always select the extreme points in $P$ that minimize a single objective (thus, the precise choice of the reference point is of lesser importance). The other points are randomly chosen, preferring points with higher contributing hypervolume. The points in $P$ are ranked according to their hypervolume contribution, and the probability of being selected depends linearly on the rank. We use linear ranking selection [37, 38]. Always selecting the extreme points and focusing on points with large contributing hypervolume leads to a wide spread of non-dominated solutions.\nHyperparameters for the MOO Baseline Methods\nAll baseline methods employ EC-NAS for exploring and optimizing architectures. We select hyperparameters for each method to prevent unfair advantages due to increased computation time, such as the number of iterations or function evaluations. Despite allocating similar resources to the baseline methods, assessing fairness in their comparison is challenging due to the disparity in their algorithmic approaches. To mitigate uncertainties in the results, we average the outcomes over 10 experiments using different initial seeds, providing a measure of variability.\nWe adopt the bag-of-baselines implementation presented in [18] for compatibility with the tabular benchmarks of EC-NAS. Additionally, we implement the previously presented MOO algorithm SEMOA within the same framework as the baseline methods to ensure consistency. Here, we provide further details on the modifications and characteristics of the baseline methods. A summary of metrics for each method over all runs can be seen in Figure 6.\nRandom Search Unlike the other methods, Random Search does not utilize evolutionary search heuristics to optimize architectures in the search space. It does not inherently consider multiple objectives but relies on processing each randomly queried model. Specifically, all queried architectures are stored, and a Pareto front is computed over all models to obtain the MOO interpretation of this method. We allow 1,000 queries for this search scheme.\nSpeeding up Evolutionary Multi-Objective Algorithm (SH-EMOA) We initialize SH-EMOA with a population size of 10 and limit the search to 100 function evaluations for budgets between 4 and 108. The algorithm is constrained to use budgets of 4, 12, 36, and 108 epochs, available in our search space. The remaining hyperparameters are set to default values, including a uniform mutation type for architecture perturbation and tournament-style parent selection for offspring generation.\nMixed Surrogate Expected Hypervolume Improvement (MS-EHVI) This evolutionary algorithm is also initialized with a population size of 10 and limited to 100 evolutions. We provide an auxiliary function to discretize parameters to accommodate the experimental setup using tabular benchmarks. MS-EHVI integrates surrogate models to estimate objective values and employs the expected hypervolume improvement criterion to guide the search. This combination allows for efficient exploration and exploitation of the search space, especially when dealing with high-dimensional and multi-objective problems."
+ }
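For two objectives, the hypervolume and contributing hypervolume defined above reduce to a short sweep. A minimal sketch, assuming both objectives are minimized and the reference point r is worse than all candidates in both coordinates:

```python
import numpy as np

def hypervolume(points: np.ndarray, r: np.ndarray) -> float:
    """2D hypervolume of `points` w.r.t. reference point `r` (minimization)."""
    pts = points[np.argsort(points[:, 0])]   # sweep by increasing f1
    hv, prev_f2 = 0.0, r[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                     # non-dominated step of the front
            hv += (r[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def contributions(points: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Contributing hypervolume: S_r(A) - S_r(A \\ {x}) for each x."""
    total = hypervolume(points, r)
    return np.array([
        total - hypervolume(np.delete(points, i, axis=0), r)
        for i in range(len(points))
    ])

pts = np.array([[0.01, 0.48], [0.07, 0.08], [0.62, 0.06]])  # (kWh, 1 - acc)
r = np.array([1.0, 1.0])
hv = hypervolume(pts, r)
delta = contributions(pts, r)   # selection prefers points with large delta
```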
+ ],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.66\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.6.6.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.6.6.7.1\" style=\"font-size:50%;\">Method</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.6.6.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.6.6.8.1\" style=\"font-size:50%;\">Arch.</span></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.4.4.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.4.4.4.1\" style=\"font-size:50%;\">(kWh)</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.6.6.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.6.6.6.1\" style=\"font-size:50%;\">(M)</span>\n</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.11.11\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.11.11.6\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T1.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S3.T1.8.8.2\">\n<span class=\"ltx_text\" id=\"S3.T1.8.8.2.1\" style=\"font-size:50%;\">15.22 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.9.9.3\">\n<span class=\"ltx_text\" id=\"S3.T1.9.9.3.1\" style=\"font-size:50%;\">0.52 </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T1.10.10.4\">\n<span class=\"ltx_text\" id=\"S3.T1.10.10.4.1\" style=\"font-size:50%;\">0.01 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.11.11.5\">\n<span class=\"ltx_text\" id=\"S3.T1.11.11.5.1\" style=\"font-size:50%;\">5.98 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.16.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.16.16.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.16.16.6.1\" style=\"font-size:50%;\">SH-EMOA</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.13.13.2\">\n<span class=\"ltx_text\" id=\"S3.T1.13.13.2.1\" style=\"font-size:50%;\">1034.35 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.14.14.3\">\n<span class=\"ltx_text\" id=\"S3.T1.14.14.3.1\" style=\"font-size:50%;\">0.91 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.15.15.4\">\n<span class=\"ltx_text\" id=\"S3.T1.15.15.4.1\" style=\"font-size:50%;\">0.27 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.16.16.5\">\n<span class=\"ltx_text\" id=\"S3.T1.16.16.5.1\" style=\"font-size:50%;\">6.55 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.21.21\">\n<td class=\"ltx_td\" id=\"S3.T1.21.21.6\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.18.18.2\">\n<span class=\"ltx_text\" id=\"S3.T1.18.18.2.1\" style=\"font-size:50%;\">226.28 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.19.19.3\">\n<span class=\"ltx_text\" id=\"S3.T1.19.19.3.1\" style=\"font-size:50%;\">0.85 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.20.20.4\">\n<span 
class=\"ltx_text\" id=\"S3.T1.20.20.4.1\" style=\"font-size:50%;\">0.04 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.21.21.5\">\n<span class=\"ltx_text\" id=\"S3.T1.21.21.5.1\" style=\"font-size:50%;\">6.27 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.26.26\">\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T1.26.26.6\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.23.23.2\">\n<span class=\"ltx_text\" id=\"S3.T1.23.23.2.1\" style=\"font-size:50%;\">14.23 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.24.24.3\">\n<span class=\"ltx_text\" id=\"S3.T1.24.24.3.1\" style=\"font-size:50%;\">0.52 </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.25.25.4\">\n<span class=\"ltx_text\" id=\"S3.T1.25.25.4.1\" style=\"font-size:50%;\">0.01 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.26.26.5\">\n<span class=\"ltx_text\" id=\"S3.T1.26.26.5.1\" style=\"font-size:50%;\">5.95 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.31.31\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.31.31.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.31.31.6.1\" style=\"font-size:50%;\">RS</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.27.27.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.28.28.2\">\n<span class=\"ltx_text\" id=\"S3.T1.28.28.2.1\" style=\"font-size:50%;\">1649.11 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.29.29.3\">\n<span class=\"ltx_text\" id=\"S3.T1.29.29.3.1\" style=\"font-size:50%;\">0.94 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.30.30.4\">\n<span class=\"ltx_text\" id=\"S3.T1.30.30.4.1\" style=\"font-size:50%;\">0.41 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.31.31.5\">\n<span class=\"ltx_text\" id=\"S3.T1.31.31.5.1\" style=\"font-size:50%;\">7.05 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.36.36\">\n<td class=\"ltx_td\" id=\"S3.T1.36.36.6\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.32.32.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.33.33.2\">\n<span class=\"ltx_text\" id=\"S3.T1.33.33.2.1\" style=\"font-size:50%;\">310.93 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.34.34.3\">\n<span class=\"ltx_text\" id=\"S3.T1.34.34.3.1\" style=\"font-size:50%;\">0.89 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.35.35.4\">\n<span class=\"ltx_text\" id=\"S3.T1.35.35.4.1\" style=\"font-size:50%;\">0.07 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.36.36.5\">\n<span class=\"ltx_text\" id=\"S3.T1.36.36.5.1\" style=\"font-size:50%;\">6.51 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.41.41\">\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T1.41.41.6\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.37.37.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.38.38.2\">\n<span class=\"ltx_text\" id=\"S3.T1.38.38.2.1\" style=\"font-size:50%;\">14.23 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.39.39.3\">\n<span class=\"ltx_text\" id=\"S3.T1.39.39.3.1\" style=\"font-size:50%;\">0.52 </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.40.40.4\">\n<span class=\"ltx_text\" id=\"S3.T1.40.40.4.1\" style=\"font-size:50%;\">0.01 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.41.41.5\">\n<span 
class=\"ltx_text\" id=\"S3.T1.41.41.5.1\" style=\"font-size:50%;\">5.95 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.46.46\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.46.46.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.46.46.6.1\" style=\"font-size:50%;\">MSE-HVI</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.42.42.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.43.43.2\">\n<span class=\"ltx_text\" id=\"S3.T1.43.43.2.1\" style=\"font-size:50%;\">1112.13 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.44.44.3\">\n<span class=\"ltx_text\" id=\"S3.T1.44.44.3.1\" style=\"font-size:50%;\">0.92 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.45.45.4\">\n<span class=\"ltx_text\" id=\"S3.T1.45.45.4.1\" style=\"font-size:50%;\">0.25 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.46.46.5\">\n<span class=\"ltx_text\" id=\"S3.T1.46.46.5.1\" style=\"font-size:50%;\">6.80 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.51.51\">\n<td class=\"ltx_td\" id=\"S3.T1.51.51.6\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.47.47.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.48.48.2\">\n<span class=\"ltx_text\" id=\"S3.T1.48.48.2.1\" style=\"font-size:50%;\">191.09 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.49.49.3\">\n<span class=\"ltx_text\" id=\"S3.T1.49.49.3.1\" style=\"font-size:50%;\">0.83 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.50.50.4\">\n<span class=\"ltx_text\" id=\"S3.T1.50.50.4.1\" style=\"font-size:50%;\">0.02 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.51.51.5\">\n<span class=\"ltx_text\" id=\"S3.T1.51.51.5.1\" style=\"font-size:50%;\">6.01 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.56.56\">\n<td class=\"ltx_td ltx_border_t\" id=\"S3.T1.56.56.6\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.52.52.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S3.T1.53.53.2\">\n<span class=\"ltx_text\" id=\"S3.T1.53.53.2.1\" style=\"font-size:50%;\">14.23 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.54.54.3\">\n<span class=\"ltx_text\" id=\"S3.T1.54.54.3.1\" style=\"font-size:50%;\">0.52 </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.55.55.4\">\n<span class=\"ltx_text\" id=\"S3.T1.55.55.4.1\" style=\"font-size:50%;\">0.01 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.56.56.5\">\n<span class=\"ltx_text\" id=\"S3.T1.56.56.5.1\" style=\"font-size:50%;\">5.95 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.61.61\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.61.61.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.61.61.6.1\" style=\"font-size:50%;\">SEMOA</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.57.57.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S3.T1.58.58.2\">\n<span class=\"ltx_text\" id=\"S3.T1.58.58.2.1\" style=\"font-size:50%;\">2555.95 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.59.59.3\">\n<span class=\"ltx_text\" id=\"S3.T1.59.59.3.1\" style=\"font-size:50%;\">0.94 </span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.60.60.4\">\n<span class=\"ltx_text\" id=\"S3.T1.60.60.4.1\" style=\"font-size:50%;\">0.62 </span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.61.61.5\">\n<span class=\"ltx_text\" id=\"S3.T1.61.61.5.1\" style=\"font-size:50%;\">7.26 </span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S3.T1.66.66\">\n<td class=\"ltx_td ltx_border_bb\" id=\"S3.T1.66.66.6\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.62.62.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S3.T1.63.63.2\">\n<span class=\"ltx_text\" id=\"S3.T1.63.63.2.1\" style=\"font-size:50%;\">306.9 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.64.64.3\">\n<span class=\"ltx_text\" id=\"S3.T1.64.64.3.1\" style=\"font-size:50%;\">0.92 </span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.65.65.4\">\n<span class=\"ltx_text\" id=\"S3.T1.65.65.4.1\" style=\"font-size:50%;\">0.07 </span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.66.66.5\">\n<span class=\"ltx_text\" id=\"S3.T1.66.66.5.1\" style=\"font-size:50%;\">6.43 </span>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:50%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.79.1.1\">Table 1</span>: </span>Average performance and resource consumption for models. Architectures , , and correspond to the two extrema and the knee point, respectively.</figcaption>\n</figure>",
+ "capture": "Table 1: Average performance and resource consumption for models found by each method. The listed architectures correspond to the two extrema and the knee point of the Pareto front, respectively."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A1.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T2.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T2.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.7.8.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.7.8.1.1.1\" style=\"font-size:90%;\">Metrics</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.7.8.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.7.8.1.2.1\" style=\"font-size:90%;\">Unit of measurement</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A1.T2.7.8.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.7.8.1.3.1\" style=\"font-size:90%;\">Notation</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.1.1.2\"><span class=\"ltx_text\" id=\"A1.T2.1.1.2.1\" style=\"font-size:90%;\">Model parameters</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.1.1.3\"><span class=\"ltx_text\" id=\"A1.T2.1.1.3.1\" style=\"font-size:90%;\">Million (M)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A1.T2.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.2.2.2\"><span class=\"ltx_text\" id=\"A1.T2.2.2.2.1\" style=\"font-size:90%;\">Test/Train/Eval. time</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.2.2.3\"><span class=\"ltx_text\" id=\"A1.T2.2.2.3.1\" style=\"font-size:90%;\">Seconds (s)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.2.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.4.4.3\"><span class=\"ltx_text\" id=\"A1.T2.4.4.3.1\" style=\"font-size:90%;\">Test/Train/Val. 
Acc.</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.2\"><span class=\"ltx_text\" id=\"A1.T2.5.5.2.1\" style=\"font-size:90%;\">Energy consumption</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.3\"><span class=\"ltx_text\" id=\"A1.T2.5.5.3.1\" style=\"font-size:90%;\">Kilowatt-hour (kWh)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.5.5.1\">\n<span class=\"ltx_text\" id=\"A1.T2.5.5.1.1\" style=\"font-size:90%;\">(kWh)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.7.3\"><span class=\"ltx_text\" id=\"A1.T2.7.7.3.1\" style=\"font-size:90%;\">Power consumption</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.7.4\"><span class=\"ltx_text\" id=\"A1.T2.7.7.4.1\" style=\"font-size:90%;\">Joule (J), Watt (W)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.7.2\">\n<span class=\"ltx_text\" id=\"A1.T2.7.7.2.1\" style=\"font-size:90%;\">(J), </span><span class=\"ltx_text\" id=\"A1.T2.7.7.2.2\" style=\"font-size:90%;\">(W)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.7.9.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.9.1.1\"><span class=\"ltx_text\" id=\"A1.T2.7.9.1.1.1\" style=\"font-size:90%;\">Carbon footprint</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.9.1.2\">\n<span class=\"ltx_text\" id=\"A1.T2.7.9.1.2.1\" style=\"font-size:90%;\">kg</span><span class=\"ltx_ERROR undefined\" id=\"A1.T2.7.9.1.2.2\">\\ch</span><span class=\"ltx_text\" id=\"A1.T2.7.9.1.2.3\" style=\"font-size:90%;\">CO2eq</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A1.T2.7.9.1.3\"><span class=\"ltx_text\" id=\"A1.T2.7.9.1.3.1\" style=\"font-size:90%;\">\u2013</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T2.7.10.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.7.10.2.1\"><span class=\"ltx_text\" id=\"A1.T2.7.10.2.1.1\" style=\"font-size:90%;\">Carbon intensity</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.7.10.2.2\"><span class=\"ltx_text\" id=\"A1.T2.7.10.2.2.1\" style=\"font-size:90%;\">g/kWh</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A1.T2.7.10.2.3\"><span class=\"ltx_text\" id=\"A1.T2.7.10.2.3.1\" style=\"font-size:90%;\">\u2013</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T2.13.1.1\">Table 2</span>: </span>Metrics reported in <span class=\"ltx_text ltx_font_typewriter\" id=\"A1.T2.14.2\">EC-NAS-Bench</span>.</figcaption>\n</figure>",
+ "capture": "Table 2: Metrics reported in EC-NAS-Bench."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T3.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T3.8.9.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T3.8.9.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.8.9.1.1.1\" style=\"font-size:80%;\">Space</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T3.8.9.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.8.9.1.2.1\" style=\"font-size:80%;\">Red. GPU days</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T3.8.9.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.8.9.1.3.1\" style=\"font-size:80%;\">Red. kWh</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T3.8.9.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.8.9.1.4.1\" style=\"font-size:80%;\">Red. kg<span class=\"ltx_ERROR undefined\" id=\"A2.T3.8.9.1.4.1.1\">\\ch</span>CO2eq</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T3.3.3.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.3.3.4.1\" style=\"font-size:80%;\">4V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T3.1.1.1\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.1.1.1.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.1.1.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"33.52\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 27.98 18.68 C 31.04 18.68 33.52 16.2 33.52 13.15 L 33.52 5.53 C 33.52 2.48 31.04 0 27.98 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 27.98 17.3 C 30.27 17.3 32.13 15.44 32.13 13.15 L 32.13 5.53 C 32.13 3.24 30.27 1.38 27.98 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"25.21\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.1.1.1.pic1.1.1.1.1.1\" style=\"font-size:80%;\">3.758</span></foreignobject></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T3.2.2.2\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.2.2.2.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.2.2.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"39.05\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 33.52 18.68 C 36.57 18.68 39.05 16.2 39.05 13.15 L 39.05 5.53 C 39.05 2.48 36.57 0 33.52 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" 
style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 33.52 17.3 C 35.81 17.3 37.67 15.44 37.67 13.15 L 37.67 5.53 C 37.67 3.24 35.81 1.38 33.52 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"30.75\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.2.2.2.pic1.1.1.1.1.1\" style=\"font-size:80%;\">48.931</span></foreignobject></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T3.3.3.3\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.3.3.3.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.3.3.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"33.52\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 27.98 18.68 C 31.04 18.68 33.52 16.2 33.52 13.15 L 33.52 5.53 C 33.52 2.48 31.04 0 27.98 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 27.98 17.3 C 30.27 17.3 32.13 15.44 32.13 13.15 L 32.13 5.53 C 32.13 3.24 30.27 1.38 27.98 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"25.21\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.3.3.3.pic1.1.1.1.1.1\" style=\"font-size:80%;\">6.327</span></foreignobject></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T3.6.6.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.6.6.4.1\" style=\"font-size:80%;\">5V</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T3.4.4.1\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.4.4.1.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.4.4.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"44.59\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 39.05 18.68 C 42.11 18.68 44.59 16.2 44.59 13.15 L 44.59 5.53 C 44.59 2.48 42.11 0 39.05 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 39.05 17.3 C 41.34 17.3 43.2 15.44 43.2 13.15 L 43.2 5.53 C 43.2 3.24 41.34 1.38 39.05 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 
2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"36.28\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.4.4.1.pic1.1.1.1.1.1\" style=\"font-size:80%;\">121.109</span></foreignobject></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T3.5.5.2\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.5.5.2.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.5.5.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"50.12\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 44.59 18.68 C 47.64 18.68 50.12 16.2 50.12 13.15 L 50.12 5.53 C 50.12 2.48 47.64 0 44.59 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 44.59 17.3 C 46.88 17.3 48.74 15.44 48.74 13.15 L 48.74 5.53 C 48.74 3.24 46.88 1.38 44.59 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"41.82\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.5.5.2.pic1.1.1.1.1.1\" style=\"font-size:80%;\">1970.495</span></foreignobject></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T3.6.6.3\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.6.6.3.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.6.6.3.pic1\" overflow=\"visible\" version=\"1.1\" width=\"44.59\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 39.05 18.68 C 42.11 18.68 44.59 16.2 44.59 13.15 L 44.59 5.53 C 44.59 2.48 42.11 0 39.05 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 39.05 17.3 C 41.34 17.3 43.2 15.44 43.2 13.15 L 43.2 5.53 C 43.2 3.24 41.34 1.38 39.05 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"36.28\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.6.6.3.pic1.1.1.1.1.1\" style=\"font-size:80%;\">252.571</span></foreignobject></g></g></svg>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T3.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T3.8.8.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.8.8.3.1\" 
style=\"font-size:80%;\">7V</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T3.7.7.1\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.7.7.1.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.7.7.1.pic1\" overflow=\"visible\" version=\"1.1\" width=\"55.66\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 50.12 18.68 C 53.18 18.68 55.66 16.2 55.66 13.15 L 55.66 5.53 C 55.66 2.48 53.18 0 50.12 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 50.12 17.3 C 52.41 17.3 54.27 15.44 54.27 13.15 L 54.27 5.53 C 54.27 3.24 52.41 1.38 50.12 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"47.35\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.7.7.1.pic1.1.1.1.1.1\" style=\"font-size:80%;\">14037.058</span></foreignobject></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T3.8.8.2\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.8.8.2.1\" style=\"font-size:80%;\"></span><svg class=\"ltx_picture\" height=\"18.68\" id=\"A2.T3.8.8.2.pic1\" overflow=\"visible\" version=\"1.1\" width=\"61.19\"><g color=\"#000000\" fill=\"#000000\" stroke=\"#000000\" stroke-width=\"0.4pt\" transform=\"translate(0,18.68) matrix(1 0 0 -1 0 0) translate(0,-2.08)\"><g fill=\"#008000\" fill-opacity=\"1.000000\"><path d=\"M 0 5.53 L 0 13.15 C 0 16.2 2.48 18.68 5.53 18.68 L 55.66 18.68 C 58.71 18.68 61.19 16.2 61.19 13.15 L 61.19 5.53 C 61.19 2.48 58.71 0 55.66 0 L 5.53 0 C 2.48 0 0 2.48 0 5.53 Z\" style=\"stroke:none\"></path></g><g fill=\"#E6FFE6\" fill-opacity=\"1.000000\"><path d=\"M 1.38 5.53 L 1.38 13.15 C 1.38 15.44 3.24 17.3 5.53 17.3 L 55.66 17.3 C 57.95 17.3 59.81 15.44 59.81 13.15 L 59.81 5.53 C 59.81 3.24 57.95 1.38 55.66 1.38 L 5.53 1.38 C 3.24 1.38 1.38 3.24 1.38 5.53 Z\" style=\"stroke:none\"></path></g><g fill-opacity=\"1.000000\" transform=\"matrix(1.0 0.0 0.0 1.0 4.15 2.08)\"><foreignobject height=\"13.84\" overflow=\"visible\" transform=\"matrix(1 0 0 -1 0 16.6)\" width=\"52.89\"><span class=\"ltx_rule\" style=\"width:0.0pt;height:10.0pt;position:relative; bottom:-3.0pt;background:black;display:inline-block;\"></span><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.8.8.2.pic1.1.1.1.1.1\" style=\"font-size:80%;\">259840.907</span></foreignobject></g></g></svg>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T3.8.8.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A2.T3.8.8.4.1\" style=\"font-size:80%;\">\u2013</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"A2.T3.11.1.1\">Table 3</span>: </span>Estimated reduction in actual resource costs when creating <span class=\"ltx_text ltx_font_typewriter\" id=\"A2.T3.12.2\">EC-NAS</span>\u00a0dataset for the 4V and 5V using 
linear scaling and 7V space using the surrogate model.</figcaption>\n</figure>",
103
+ "capture": "Table 3: Estimated reduction in actual resource costs when creating EC-NAS\u00a0dataset for the 4V and 5V using linear scaling and 7V space using the surrogate model."
104
+ }
105
+ },
106
+ "image_paths": {
107
+ "1": {
108
+ "figure_path": "2210.06015v4_figure_1.png",
109
+ "caption": "Fig. 1: Scatter plot of about 423k CNN architectures showing training energy (E\ud835\udc38Eitalic_E) vs. validation performance (Pvsubscript\ud835\udc43\ud835\udc63P_{v}italic_P start_POSTSUBSCRIPT italic_v end_POSTSUBSCRIPT) across four training budgets. Solutions in the top-right (red ellipse) prioritize performance at high energy costs. Joint optimization shifts preferred solutions to the left (green ellipse), indicating reduced energy with minimal performance loss.",
110
+ "url": "http://arxiv.org/html/2210.06015v4/extracted/5488679/images/scatter_7v.png"
111
+ },
112
+ "2(a)": {
113
+ "figure_path": "2210.06015v4_figure_2(a).png",
114
+ "caption": "Fig. 2: Scatter plot depicting the Kendall-Tau correlation coefficient between predicted and actual energy consumption (left) and the influence of training data size on test accuracy (right). Error bars are based on 10 random initializations.",
115
+ "url": "http://arxiv.org/html/2210.06015v4/x1.png"
116
+ },
117
+ "2(b)": {
118
+ "figure_path": "2210.06015v4_figure_2(b).png",
119
+ "caption": "Fig. 2: Scatter plot depicting the Kendall-Tau correlation coefficient between predicted and actual energy consumption (left) and the influence of training data size on test accuracy (right). Error bars are based on 10 random initializations.",
120
+ "url": "http://arxiv.org/html/2210.06015v4/x2.png"
121
+ },
122
+ "3(a)": {
123
+ "figure_path": "2210.06015v4_figure_3(a).png",
124
+ "caption": "Fig. 3: Aggregated impact of swapping one operator for another on energy consumption, training time, validation accuracy, and parameter count. The figure illustrates how changing a single operator can affect the different aspects of model performance, emphasizing the importance of selecting the appropriate operators to balance energy efficiency and performance.",
125
+ "url": "http://arxiv.org/html/2210.06015v4/x3.png"
126
+ },
127
+ "3(b)": {
128
+ "figure_path": "2210.06015v4_figure_3(b).png",
129
+ "caption": "Fig. 3: Aggregated impact of swapping one operator for another on energy consumption, training time, validation accuracy, and parameter count. The figure illustrates how changing a single operator can affect the different aspects of model performance, emphasizing the importance of selecting the appropriate operators to balance energy efficiency and performance.",
130
+ "url": "http://arxiv.org/html/2210.06015v4/x4.png"
131
+ },
132
+ "3(c)": {
133
+ "figure_path": "2210.06015v4_figure_3(c).png",
134
+ "caption": "Fig. 3: Aggregated impact of swapping one operator for another on energy consumption, training time, validation accuracy, and parameter count. The figure illustrates how changing a single operator can affect the different aspects of model performance, emphasizing the importance of selecting the appropriate operators to balance energy efficiency and performance.",
135
+ "url": "http://arxiv.org/html/2210.06015v4/x5.png"
136
+ },
137
+ "3(d)": {
138
+ "figure_path": "2210.06015v4_figure_3(d).png",
139
+ "caption": "Fig. 3: Aggregated impact of swapping one operator for another on energy consumption, training time, validation accuracy, and parameter count. The figure illustrates how changing a single operator can affect the different aspects of model performance, emphasizing the importance of selecting the appropriate operators to balance energy efficiency and performance.",
140
+ "url": "http://arxiv.org/html/2210.06015v4/x6.png"
141
+ },
142
+ "4": {
143
+ "figure_path": "2210.06015v4_figure_4.png",
144
+ "caption": "Fig. 4: Energy consumption of models with DAGs where |V|\u22644\ud835\udc494|V|\\leq 4| italic_V | \u2264 4 on different GPUs. Models are organized by their average energy consumption for clarity.",
145
+ "url": "http://arxiv.org/html/2210.06015v4/x7.png"
146
+ },
147
+ "5(a)": {
148
+ "figure_path": "2210.06015v4_figure_5(a).png",
149
+ "caption": "Fig. 5: (Left) The attainment curve showing median solutions for 10 random initializations on the surrogate 7V space from EC-NAS dataset.\n(Center) A representation of the Pareto front for one run of SEMOA.\n(Right) Summary of metrics for the extrema and knee point architectures for one SEMOA run.",
150
+ "url": "http://arxiv.org/html/2210.06015v4/x8.png"
151
+ },
152
+ "5(b)": {
153
+ "figure_path": "2210.06015v4_figure_5(b).png",
154
+ "caption": "Fig. 5: (Left) The attainment curve showing median solutions for 10 random initializations on the surrogate 7V space from EC-NAS dataset.\n(Center) A representation of the Pareto front for one run of SEMOA.\n(Right) Summary of metrics for the extrema and knee point architectures for one SEMOA run.",
155
+ "url": "http://arxiv.org/html/2210.06015v4/x9.png"
156
+ },
157
+ "5(c)": {
158
+ "figure_path": "2210.06015v4_figure_5(c).png",
159
+ "caption": "Fig. 5: (Left) The attainment curve showing median solutions for 10 random initializations on the surrogate 7V space from EC-NAS dataset.\n(Center) A representation of the Pareto front for one run of SEMOA.\n(Right) Summary of metrics for the extrema and knee point architectures for one SEMOA run.",
160
+ "url": "http://arxiv.org/html/2210.06015v4/x10.png"
161
+ },
162
+ "6(a)": {
163
+ "figure_path": "2210.06015v4_figure_6(a).png",
164
+ "caption": "Fig. 6: Average performance and resource consumption across models for all baseline methods, including SEMOA. Architectures \ud835\udc9c\ud835\udc2b0subscript\ud835\udc9csubscript\ud835\udc2b0\\mathcal{A}_{\\mathbf{r}_{0}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, \ud835\udc9c\ud835\udc2b1subscript\ud835\udc9csubscript\ud835\udc2b1\\mathcal{A}_{\\mathbf{r}_{1}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, and \ud835\udc9c\ud835\udc2bksubscript\ud835\udc9csubscript\ud835\udc2b\ud835\udc58\\mathcal{A}_{\\mathbf{r}_{k}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT end_POSTSUBSCRIPT denote the two extremes and the knee point, respectively. For precise numerical data, refer to Table 1.",
165
+ "url": "http://arxiv.org/html/2210.06015v4/x11.png"
166
+ },
167
+ "6(b)": {
168
+ "figure_path": "2210.06015v4_figure_6(b).png",
169
+ "caption": "Fig. 6: Average performance and resource consumption across models for all baseline methods, including SEMOA. Architectures \ud835\udc9c\ud835\udc2b0subscript\ud835\udc9csubscript\ud835\udc2b0\\mathcal{A}_{\\mathbf{r}_{0}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, \ud835\udc9c\ud835\udc2b1subscript\ud835\udc9csubscript\ud835\udc2b1\\mathcal{A}_{\\mathbf{r}_{1}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, and \ud835\udc9c\ud835\udc2bksubscript\ud835\udc9csubscript\ud835\udc2b\ud835\udc58\\mathcal{A}_{\\mathbf{r}_{k}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT end_POSTSUBSCRIPT denote the two extremes and the knee point, respectively. For precise numerical data, refer to Table 1.",
170
+ "url": "http://arxiv.org/html/2210.06015v4/x12.png"
171
+ },
172
+ "6(c)": {
173
+ "figure_path": "2210.06015v4_figure_6(c).png",
174
+ "caption": "Fig. 6: Average performance and resource consumption across models for all baseline methods, including SEMOA. Architectures \ud835\udc9c\ud835\udc2b0subscript\ud835\udc9csubscript\ud835\udc2b0\\mathcal{A}_{\\mathbf{r}_{0}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, \ud835\udc9c\ud835\udc2b1subscript\ud835\udc9csubscript\ud835\udc2b1\\mathcal{A}_{\\mathbf{r}_{1}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, and \ud835\udc9c\ud835\udc2bksubscript\ud835\udc9csubscript\ud835\udc2b\ud835\udc58\\mathcal{A}_{\\mathbf{r}_{k}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT end_POSTSUBSCRIPT denote the two extremes and the knee point, respectively. For precise numerical data, refer to Table 1.",
175
+ "url": "http://arxiv.org/html/2210.06015v4/x13.png"
176
+ },
177
+ "6(d)": {
178
+ "figure_path": "2210.06015v4_figure_6(d).png",
179
+ "caption": "Fig. 6: Average performance and resource consumption across models for all baseline methods, including SEMOA. Architectures \ud835\udc9c\ud835\udc2b0subscript\ud835\udc9csubscript\ud835\udc2b0\\mathcal{A}_{\\mathbf{r}_{0}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, \ud835\udc9c\ud835\udc2b1subscript\ud835\udc9csubscript\ud835\udc2b1\\mathcal{A}_{\\mathbf{r}_{1}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_POSTSUBSCRIPT, and \ud835\udc9c\ud835\udc2bksubscript\ud835\udc9csubscript\ud835\udc2b\ud835\udc58\\mathcal{A}_{\\mathbf{r}_{k}}caligraphic_A start_POSTSUBSCRIPT bold_r start_POSTSUBSCRIPT italic_k end_POSTSUBSCRIPT end_POSTSUBSCRIPT denote the two extremes and the knee point, respectively. For precise numerical data, refer to Table 1.",
180
+ "url": "http://arxiv.org/html/2210.06015v4/x14.png"
181
+ }
182
+ },
183
+ "validation": true,
184
+ "references": [
185
+ {
186
+ "1": {
187
+ "title": "\u201cA comprehensive survey of neural architecture search: Challenges\nand solutions,\u201d",
188
+ "author": "Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen,\nand Xin Wang,",
189
+ "venue": "ACM Computing Surveys, vol. 54, no. 4, pp. 1\u201334, 2021.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "2": {
195
+ "title": "\u201cZen-NAS: A Zero-Shot NAS for High-Performance Deep Image\nRecognition,\u201d",
196
+ "author": "Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li,\nand Rong Jin,",
197
+ "venue": "in International Conference on Computer Vision (ICCV), 2021.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "3": {
203
+ "title": "\u201cAccelerating neural architecture search using performance\nprediction,\u201d",
204
+ "author": "Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik,",
205
+ "venue": "in International Conference on Learning Representations (ICLR) -\nWorkshop Track, 2017.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "4": {
211
+ "title": "\u201cEfficientnet: Rethinking model scaling for convolutional neural\nnetworks,\u201d",
212
+ "author": "Mingxing Tan and Quoc Le,",
213
+ "venue": "in International Conference on Machine Learning (ICML), 2019.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "5": {
219
+ "title": "\u201cGreen AI,\u201d",
220
+ "author": "Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni,",
221
+ "venue": "Communications of the ACM, vol. 63, no. 12, pp. 54\u201363, 2020.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "6": {
227
+ "title": "\u201cCarbontracker: Tracking and Predicting the Carbon Footprint of\nTraining Deep Learning Models,\u201d ICML Workshop on Challenges in Deploying\nand monitoring Machine Learning Systems, 2020.",
228
+ "author": "Lasse F. Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan,",
229
+ "venue": null,
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "7": {
235
+ "title": "\u201cCompute trends across three eras of machine learning,\u201d",
236
+ "author": "Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and\nPablo Villalobos,",
237
+ "venue": "in International Joint Conference on Neural Networks (IJCNN),\n2022.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "8": {
243
+ "title": "\u201cTabular benchmarks for joint architecture and hyperparameter\noptimization,\u201d Arxiv, 2019.",
244
+ "author": "Aaron Klein and Frank Hutter,",
245
+ "venue": null,
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "9": {
251
+ "title": "\u201cSurrogate NAS benchmarks: Going beyond the limited search spaces\nof tabular NAS benchmarks,\u201d",
252
+ "author": "Arber Zela, Julien Niklas Siems, Lucas Zimmer, Jovita Lukasik, Margret Keuper,\nand Frank Hutter,",
253
+ "venue": "in International Conference on Learning Representations (ICLR),\n2022.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "10": {
259
+ "title": "\u201cNAS-Bench-101: Towards reproducible neural architecture search,\u201d",
260
+ "author": "Chris Ying, Aaron Klein, Eric Christiansen, Esteban Real, Kevin Murphy, and\nFrank Hutter,",
261
+ "venue": "in International Conference on Machine Learning (ICML), 2019.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "11": {
267
+ "title": "\u201cNAS-Bench-1Shot1: Benchmarking and dissecting one-shot neural\narchitecture search,\u201d",
268
+ "author": "Arber Zela, Julien Siems, and Frank Hutter,",
269
+ "venue": "in International Conference on Learning Representations (ICLR),\n2020.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "12": {
275
+ "title": "\u201cEA-HAS-bench: Energy-aware hyperparameter and architecture\nsearch benchmark,\u201d",
276
+ "author": "Shuguang Dou, Xinyang Jiang, Cai Rong Zhao, and Dongsheng Li,",
277
+ "venue": "in International Conference on Learning Representations (ICLR),\n2023.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "13": {
283
+ "title": "\u201cNeural architecture search for speech emotion recognition,\u201d Arxiv,\n2022.",
284
+ "author": "Xixin Wu, Shoukang Hu, Zhiyong Wu, Xunying Liu, and Helen Meng,",
285
+ "venue": null,
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "14": {
291
+ "title": "\u201cSearch for efficient deep visual-inertial odometry through neural\narchitecture search,\u201d",
292
+ "author": "Yu Chen, Mingyu Yang, and Hun-Seok Kim,",
293
+ "venue": "in IEEE International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP), 2023.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "15": {
299
+ "title": "\u201cLearning multiple layers of features from tiny images,\u201d",
300
+ "author": "Alex Krizhevsky,",
301
+ "venue": "Tech. Rep., Univeristy of Toronto, 2009.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "16": {
307
+ "title": "\u201cMeasuring the carbon intensity of ai in cloud instances,\u201d",
308
+ "author": "Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy\nSchwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole\nDeCario, and Will Buchanan,",
309
+ "venue": "in Conference on Fairness, Accountability, and Transparency\n(FAccT), 2022.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "17": {
315
+ "title": "\u201cMulti-objective optimization with unbounded solution sets,\u201d",
316
+ "author": "Oswin Krause, Tobias Glasmachers, and Christian Igel,",
317
+ "venue": "in NeurIPS Workshop on Bayesian Optimization (BayesOpt 2016),\n2016.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "18": {
323
+ "title": "\u201cBag of baselines for multi-objective joint neural architecture\nsearch and hyperparameter optimization,\u201d",
324
+ "author": "Sergio Izquierdo, Julia Guerrero-Viu, Sven Hauns, Guilherme Miotto, Simon\nSchrodi, Andr\u00e9 Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer,\nand Frank Hutter,",
325
+ "venue": "in ICML Workshop on Automated Machine Learning (AutoML), 2021.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "19": {
331
+ "title": "\u201cNas-bench-x11 and the power of learning curves,\u201d",
332
+ "author": "Shen Yan, Colin White, Yash Savani, and Frank Hutter,",
333
+ "venue": "Advances in Neural Information Processing Systems (NeurIPS),\n2021.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "20": {
339
+ "title": "\u201cKNAS: green neural architecture search,\u201d",
340
+ "author": "Jingjing Xu, Liang Zhao, Junyang Lin, Rundong Gao, Xu Sun, and Hongxia Yang,",
341
+ "venue": "in International Conference on Machine Learning (ICML), 2021.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "21": {
347
+ "title": "\u201cCarbon footprint of selecting and training deep learning models for\nmedical image analysis,\u201d",
348
+ "author": "Raghavendra Selvan, Nikhil Bhagwat, Lasse F. Wolff Anthony, Benjamin Kanding,\nand Erik B. Dam,",
349
+ "venue": "in International Conference on Medical Image Computing and\nComputer-Assisted Intervention (MICCAI), 2022.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "22": {
355
+ "title": "\u201cFew-shot neural architecture search,\u201d",
356
+ "author": "Yiyang Zhao, Linnan Wang, Yuandong Tian, Rodrigo Fonseca, and Tian Guo,",
357
+ "venue": "in International Conference on Machine Learning (ICML), 2021.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "23": {
363
+ "title": "\u201cUptime Institute global data center survey 2020,\u201d",
364
+ "author": "Rhonda Ascierto and Andy Lawrence,",
365
+ "venue": "Tech. Rep., Uptime Institute, 07 2020.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "24": {
371
+ "title": "\u201cPytorch: An imperative style, high-performance deep learning\nlibrary,\u201d",
372
+ "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory\nChanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban\nDesmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan\nTejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith\nChintala,",
373
+ "venue": "in Advances in Neural Information Processing Systems\n(NeurIPS)). 2019.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "25": {
379
+ "title": "\u201cAdam: A method for stochastic optimization,\u201d",
380
+ "author": "Diederik P Kingma and Jimmy Ba,",
381
+ "venue": "in International Conference on Learning Representations, 2015.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "26": {
387
+ "title": "\u201cTowards the systematic reporting of the energy and carbon\nfootprints of machine learning,\u201d",
388
+ "author": "Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and\nJoelle Pineau,",
389
+ "venue": "Journal of Machine Learning Research, vol. 21, no. 248, pp.\n1\u201343, 2020.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "27": {
395
+ "title": "\u201cMobileNets: Efficient convolutional neural networks for mobile\nvision applications,\u201d",
396
+ "author": "Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang,\nTobias Weyand, Marco Andreetto, and Hartwig Adam,",
397
+ "venue": "arXiv preprint arXiv:1704.04861, 2017.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "28": {
403
+ "title": "\u201cConstructing fast network through deconstruction of convolution,\u201d",
404
+ "author": "Yunho Jeon and Junmo Kim,",
405
+ "venue": "Advances in Neural Information Processing Systems, 2018.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "29": {
411
+ "title": "\u201cHyperparameter power impact in transformer language model\ntraining,\u201d",
412
+ "author": "Lucas H\u00f8yberg Puvis de Chavannes, Mads Guldborg Kjeldgaard Kongsbak, Timmie\nRantzau, and Leon Derczynski,",
413
+ "venue": "in Proceedings of the Second Workshop on Simple and Efficient\nNatural Language Processing, Virtual, Nov. 2021, pp. 96\u2013118, Association\nfor Computational Linguistics.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "30": {
419
+ "title": "\u201cMultiobjective evolutionary algorithms: A comparative case study\nand the strength Pareto approach,\u201d",
420
+ "author": "Eckart Zitzler and Lothar Thiele,",
421
+ "venue": "IEEE Transactions on Evolutionary Computation, vol. 3, no. 4,\npp. 257\u2013271, 1999.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "31": {
427
+ "title": "\u201cPerformance assessment of multiobjective optimizers: An analysis\nand review,\u201d",
428
+ "author": "Eckart Zitzler, Lothar Thiele, Marco Laumanns, Carlos M. Fonseca, and Viviane\nGrunert da Fonseca,",
429
+ "venue": "IEEE Transactions on Evolutionary Computation, vol. 7, no. 2,\npp. 117\u2013132, 2003.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "32": {
435
+ "title": "\u201cSpeeding up many-objective optimization by Monte Carlo\napproximations,\u201d",
436
+ "author": "Karl Bringmann, Tobias Friedrich, Christian Igel, and Thomas Vo\u00df,",
437
+ "venue": "Artificial Intelligence, vol. 204, pp. 22\u201329, 2013.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "33": {
443
+ "title": "\u201cApproximation quality of the hypervolume indicator,\u201d",
444
+ "author": "Karl Bringmann and Tobias Friedrich,",
445
+ "venue": "Artificial Intelligence, vol. 195, pp. 265\u2013290, 2013.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "34": {
451
+ "title": "\u201cSMS-EMOA: Multiobjective selection based on dominated\nhypervolume,\u201d",
452
+ "author": "Nicola Beume, Boris Naujoks, and Michael Emmerich,",
453
+ "venue": "European Journal of Operational Research, vol. 181, no. 3, pp.\n1653\u20131669, 2007.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "35": {
459
+ "title": "\u201cCovariance matrix adaptation for multi-objective optimization,\u201d",
460
+ "author": "Christian Igel, Nikolaus Hansen, and Stefan Roth,",
461
+ "venue": "Evolutionary Computation, vol. 15, no. 1, pp. 1\u201328, 2007.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "36": {
467
+ "title": "\u201cHypE: An algorithm for fast hypervolume-based many-objective\noptimization,\u201d",
468
+ "author": "Johannes Bader and Eckart Zitzler,",
469
+ "venue": "Evolutionary computation, vol. 19, no. 1, pp. 45\u201376, 2011.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "37": {
475
+ "title": "\u201cAdaptive selection methods for genetic algorithms,\u201d",
476
+ "author": "James Edward Baker,",
477
+ "venue": "in International Conference on Genetic Algorithms and their\nApplications, 1985, vol. 1.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "38": {
483
+ "title": "\u201cHow genetic algorithms work: A critical look at implicit\nparallelism,\u201d",
484
+ "author": "John J. Greffenstette and James E. Baker,",
485
+ "venue": "in International Conference on Genetic Algorithms, 1989, pp.\n20\u201327.",
486
+ "url": null
487
+ }
488
+ }
489
+ ],
490
+ "url": "http://arxiv.org/html/2210.06015v4"
491
+ }
20240322/2211.06003v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2212.06370v4.json ADDED
@@ -0,0 +1,151 @@
1
+ {
2
+ "title": "Dual Accuracy-Quality-Driven Neural Network for Prediction Interval Generation",
3
+ "abstract": "Accurate uncertainty quantification is necessary to enhance the reliability of deep learning models in real-world applications.\nIn the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of deep learning models.\nSuch PIs are useful or \u201chigh-quality\u201d as long as they are sufficiently narrow and capture most of the probability density.\nIn this paper, we present a method to learn prediction intervals for regression-based neural networks automatically in addition to the conventional target predictions.\nIn particular, we train two companion neural networks: one that uses one output, the target estimate, and another that uses two outputs, the upper and lower bounds of the corresponding PI.\nOur main contribution is the design of a novel loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean prediction interval width and ensuring the PI integrity using constraints that maximize the prediction interval probability coverage implicitly.\nFurthermore, we introduce a self-adaptive coefficient that balances both objectives within the loss function, which alleviates the task of fine-tuning.\nExperiments using a synthetic dataset, eight benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain a nominal probability coverage and produce significantly narrower PIs without detriment to its target estimation accuracy when compared to those PIs generated by three state-of-the-art neural-network-based methods.\nIn other words, our method was shown to produce higher-quality PIs.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Deep learning has gained a great deal of attention due to its ability to outperform alternative machine learning methods in solving complex problems in a variety of domains.\nIn conjunction with the availability of large-scale datasets and modern parallel hardware architectures (e.g., GPUs), convolutional neural networks (CNNs), as one popular deep learning technique, have attained unprecedented achievements in fields such as computer vision, speech recognition, natural language processing, medical diagnosis, and others [1 ###reference_b1###].\nWhile the undeniable success of deep learning (DL) has impacted applications that are used on a daily basis, many theoretical aspects remain unclear, which is why these models are usually referred to as \u201cblack boxes\u201d in the literature [2 ###reference_b2###].\nIn addition, numerous reports suggest that current DL techniques typically lead to unstable predictions that can occur randomly and not only in worst-case scenarios [3 ###reference_b3###].\nAs a consequence, they are considered unreliable for applications that deal with uncertainty in the data or in the underlying system, such as weather forecasting [4 ###reference_b4###], electronic manufacturing [5 ###reference_b5###], or precision agriculture [6 ###reference_b6###].\nNote that, in this context, reliability is defined as the ability for a model to work consistently across real-world settings [7 ###reference_b7###].\nOne of the limitations of conventional neural networks is that they only provide deterministic point estimates without any additional indication of their approximate accuracy [8 ###reference_b8###].\nReliability and accuracy of the generated point predictions are affected by factors such as the sparsity of training data or target variables affected by probabilistic events [9 ###reference_b9###].\nOne way to improve the reliability and credibility of such complex models is to quantify the uncertainty in the predictions they generate [10 ###reference_b10###]. This uncertainty () can be quantified using prediction intervals (PIs), which provide an estimate of the upper and the lower bounds within which a prediction will fall according to a certain probability [11 ###reference_b11###].\nHence, the amount of uncertainty for each prediction is provided by the width of its corresponding PI.\nPIs account for two types of uncertainty: model uncertainty () and data noise variance () [11 ###reference_b11###], where . 
Model uncertainty arises due to model selection, training data variance, and parameter uncertainty [12 ###reference_b12###].\nData noise variance measures the variance of the error between observable target values and the outputs produced by the learned models.\nRecently, some NN-based methods have been proposed to solve the PI generation problem [11 ###reference_b11###, 13 ###reference_b13###, 14 ###reference_b14###, 12 ###reference_b12###, 15 ###reference_b15###, 16 ###reference_b16###].\nThese methods aim to train NNs using loss functions that aim to balance at least two of the following three objectives: minimizing mean PI width, maximizing PI coverage probability, and minimizing the mean error of the target predictions.\nAlthough the aforementioned works have achieved promising results, there exist some limitations that need to be addressed.\nFor instance, they rely on the use of deep ensembles; however, training several models may become impractical when applied to complex models and large datasets [17 ###reference_b17###].\nFurthermore, their performance is sensitive to the selection of multiple tunable hyperparameters whose values may differ substantially depending on the application.\nTherefore, fine-tuning an ensemble of deep NNs becomes a computationally expensive task.\nFinally, methods that generate PI bounds and target estimations simultaneously have to deal with a trade-off between the quality of generated PIs and the accuracy of the target estimations.\nPearce et al. [12 ###reference_b12###] coined the term High-quality (HQ) principle, which refers to the requirement that PIs be as narrow as possible while capturing some specified proportion of the predicted data points.\nFollowing this principle, we pose the PI generation problem for regression as a multi-objective optimization problem.\nIn particular, our proposal involves training two neural networks (NNs): one that generates accurate target estimations and one that generates narrow PIs (see Fig. 1 ###reference_###).\n###figure_1### The first NN is trained to minimize the mean squared error of the target estimations.\nOur main contribution is the design of a loss function for the second NN that, besides the generated PI bounds and the target, considers the output of the first NN as an additional input.\nIt minimizes the mean prediction interval width and uses constraints to ensure the integrity of the generated PIs while implicitly maximizing the probability coverage (Sec. III-A ###reference_###).\nOur second contribution is a method that updates the coefficient that balances the two optimization objectives of our loss function automatically throughout training (Sec. III-C ###reference_###).\nOur method avoids generating unnecessarily wide PIs by using a technique that sorts the mini-batches at the beginning of each training epoch according to the width of the generated PIs (Sec. III-B ###reference_###).\nThen we apply a Monte Carlo-based approach to account for the uncertainty of the generated upper and lower bounds\n(Sec. III-E ###reference_###).\nFinally, when compared to three state-of-the-art NN-based methods, we show that our method is able to produce PIs that maintain the target probability coverage while yielding better mean width without detriment to its target estimation accuracy (Sec. IV ###reference_###).\nOur specific contributions are summarized as follows:\nOur main contribution is a novel loss function called Dual Accuracy-Quality-Driven (DualAQD) used to train a PI-generation NN. 
It is designed to solve a multi-objective optimization problem: minimizing the mean PI width while ensuring PI integrity using constraints that maximize the probability coverage implicitly.\nWe present a new PI-generation framework that consists of two companion NNs: one that is trained to produce accurate target estimations, and another that generates high-quality PIs; thus, avoiding the common trade-off between target estimation accuracy, and quality of PIs.\nWe introduce a self-adaptive coefficient that balances the two objectives of our DualAQD loss function. This differs from previous approaches that consider this balancing coefficient as a tunable hyperparameter with a fixed value throughout the training process.\nWe present a method called batch-sorting that sorts the mini-batches according to their corresponding PI width and, as such, avoids generating unnecessarily wide PIs.\nOur method is shown to generate higher quality PIs and better reflects varying levels of uncertainty within the data than the compared methods."
10
+ },
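To make the two-network framework of Fig. 1 concrete, the following is a minimal PyTorch sketch of the companion networks: `f` with a single output (the target estimate) and `g` with two outputs (the PI bounds). The 100-unit hidden layers match the experiments in Sec. IV-A and the 0.1 dropout rate matches the value the authors report adopting; everything else is an illustrative assumption, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class TargetNet(nn.Module):
    """Network f: a single output, the target estimate y_hat."""
    def __init__(self, in_dim, hidden=100, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.head(self.body(x)).squeeze(-1)

class PINet(nn.Module):
    """Network g: two outputs, the upper and lower PI bounds."""
    def __init__(self, in_dim, hidden=100, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):
        out = self.head(self.body(x))
        return out[:, 0], out[:, 1]  # (y_u, y_l)
```

Per Sec. III-A, the hidden layers of `g` can be initialized from a trained `f` (e.g., `pi_net.body.load_state_dict(target_net.body.state_dict())`) before training with the DualAQD loss.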
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Related Work",
15
+ "text": "One of the more common approaches to uncertainty quantification for regression tasks is via Bayesian approaches, such as those represented by Bayesian neural networks (BNNs),\nwhich model the NN parameters as distributions.\nAs such, they have the advantage that they allow for a natural quantification of uncertainty.\nIn particular, uncertainty is quantified by learning a posterior weight distribution [18 ###reference_b18###, 19 ###reference_b19###].\nThe inference process involves marginalization over the weights, which in general is intractable, and sampling processes such as Markov chain Monte Carlo (MCMC) can be computationally prohibitive.\nThus, approximate solutions have been formulated using variational inference (VI) [20 ###reference_b20###].\nHowever, Wu et al. [21 ###reference_b21###] argued that VI approaches are fragile since they require careful initialization and tuning.\nTo overcome these issues, they proposed approximating moments in NNs to eliminate gradient variance.\nThey also presented an empirical Bayes procedure for selecting prior variances automatically.\nMoreover, Izmailov et al. [22 ###reference_b22###] discussed scaling BNNs to deep neural networks by constructing low-dimensional subspaces of the parameter space.\nBy doing so, they were able to apply elliptical slice sampling and VI, which struggle in the full parameter space.\nIn addition, Lut et al. [23 ###reference_b23###]\npresented a Bayesian-learning-based sparse stochastic configuration network that replaces the Gaussian distribution with a Laplace one as the prior distribution for output weights.\nDespite the aforementioned improvements in Bayesian approaches, they still suffer from various limitations.\nNamely, the high dimensionality of the parameter space of deep NNs, including complex models such as CNNs, makes the cost of characterizing uncertainty over the parameters prohibitive [24 ###reference_b24###].\nAttempts to scale BNNs to deep NNs are considerably more expensive computationally than VI-based methods and have been scaled up to low-complexity problems only, such as MNIST [25 ###reference_b25###].\nConversely, non-Bayesian methods do not require the use of initial prior distributions and biases to train the models [11 ###reference_b11###].\nRecent works have demonstrated that non-Bayesian approaches provide better or competitive uncertainty estimates than their Bayesian counterparts [26 ###reference_b26###, 11 ###reference_b11###, 12 ###reference_b12###].\nIn addition, they are scalable to complex problems and can handle millions of parameters.\nMC-Dropout was proposed by Gal and Ghahramani [8 ###reference_b8###] to quantify model uncertainty in NNs.\nThey cast dropout training in deep NNs as approximate Bayesian inference in deep Gaussian processes.\nThe method uses dropout repeatedly to select subsamples of active nodes in the network, turning a single\nnetwork into an ensemble.\nHence, model uncertainty is estimated by the sample variance of the ensemble predictions.\nMC-Dropout is not able to estimate PIs themselves, as it does not account for data noise variance.\nTherefore, Zhu and Laptev [27 ###reference_b27###] proposed estimating PIs by quantifying the model uncertainty through MC-Dropout, coupled with estimating the data noise variance as the mean squared error (MSE) calculated over an independent held-out validation set.\nRecently, several non-Bayesian approaches have been proposed for approximate uncertainty quantification.\nSuch approaches use models whose outputs 
provide estimations of the predictive uncertainty directly.\nFor instance, Schupbach et al. [28 ###reference_b28###] proposed a method that estimates confidence intervals in NN ensembles based on the use of U-statistics.\nOther techniques estimate PIs by using ensembles of feedforward networks [29 ###reference_b29###]\nor stochastic configuration networks [30 ###reference_b30###]\nand bootstrapping.\nLakshminarayanan et al. [26 ###reference_b26###] presented an ensemble approach based on the Mean-Variance Estimation (MVE) method introduced by Nix and Weigend [31 ###reference_b31###].\nHere, each NN has two outputs: one that represents the mean (or target estimation) and the other that represents the variance of a normal distribution, which is used to quantify the data noise variance.\nOther approaches use models that generate PI bounds explicitly.\nKhosravi et al. [11 ###reference_b11###] proposed a Lower Upper Bound Estimation (LUBE) method that uses a NN and a loss function to minimize the PI width while maximizing the probability coverage using simulated annealing.\nSimilar approaches have attempted to optimize the LUBE loss function using methods such as genetic algorithms [13 ###reference_b13###] and particle swarm optimization [14 ###reference_b14###].\nPearce et al. [12 ###reference_b12###] proposed a method called QD-Ens that consists of a quality-driven loss function similar to LUBE but that is compatible with gradient descent.\nThen Salem et al. [16 ###reference_b16###] proposed QD+, which is based on QD-Ens and uses exactly the same two penalty functions to reduce the PI width and maximize the probability coverage.\nThey used three-output NNs and included a third penalty term that aims to decrease the mean squared error of the target predictions and a fourth penalty term to enforce the point predictions to lie inside the generated PIs.\nIn our work, we use only three penalty terms; the differences are explained in Sec. III-F ###reference_###.\nFinally, both QD-Ens and QD+ used an ensemble approach to estimate the model uncertainty while we use a Monte Carlo approach on a single network."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Proposed Methodology",
21
+ "text": ""
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Dual Accuracy-Quality-Driven Loss Function",
27
+ "text": "Let be a training batch with samples where each sample consists of covariates.\nFurthermore, let be a set of corresponding target observations where .\nWe construct a NN regression model that captures the association between and .\nMore specifically, denotes the function computed by the NN, and denotes its weights.\nHence, given an input , computes the target estimate .\nThis network is trained to generate accurate estimates with respect to .\nWe quantify this accuracy by calculating the mean squared error of the estimation \nThus, is conventionally optimized as follows:\nOnce network is trained, we use a separate NN whose goal is to generate prediction intervals for given data .\nLet denote the function computed by this PI-generation NN, and denotes its weights.\nGiven an input , generates its corresponding upper and lower bounds, and , such that .\nNote that there is no assumption of and being symmetric with respect to the target estimate produced by network .\nWe describe its optimization procedure below.\nWe say that a training sample is covered (i.e., we set ) if both the predicted value and the target observation fall within the estimated PI:\nThen, using , we define the prediction interval coverage probability () for as the percent of covered samples with respect to the batch size : .\nThe HQ principle suggests that the width of the prediction intervals should be minimized as long as they capture the target observation value.\nThus, Pearce et al. [12 ###reference_b12###] considered the mean prediction interval width of captured points () as part of their loss function:\nwhere is a small number used to avoid dividing by zero.\nHowever, we argue that minimizing does not imply that the width of the PIs generated for the non-captured samples will not decrease along with the width of the PIs generated for the captured samples111\nWe provide a toy example demonstrating this behavior in the following link\nhttps://github.com/NISL-MSU/PredictionIntervals/tree/master/src/PredictionIntervals/models/QD_toy_example.ipynb ###reference_rvals/tree/master/src/PredictionIntervals/models/QD_toy_example.ipynb###.\nFurthermore, consider the case where none of the samples are captured by the PIs, as likely happens at the beginning of the training. Then, the penalty is minimum (i.e., ).\nHence, the calculated gradients of the loss function will force the weights of the NN to remain in the state where , which contradicts the goal of maximizing .\nInstead of minimizing directly, we let\nwhich we minimize instead. 
This function quantifies the width of the PI as the sum of the distance between the upper bound and the target and the distance between the lower bound and the target.\nWe argue that is more suitable than given that it forces , , and to be closer together.\nFor example, suppose that the following case is observed during the first training epoch: , , , and .\nThen given that the target is not covered by the PI, while .\nAs a result, will penalize this state while will not.\nThus, we define our first optimization objective as:\nHowever, minimizing is not enough to ensure the integrity of the PIs.\nTheir integrity is given by the conditions that the upper bound must be greater than the target and the target estimate ( and ) and that the target and the target estimate, in turn, must be greater than the lower bound ( and ).\nNote that if the differences and are greater than the maximum estimation error within the training batch (i.e., and , ), it is implied that all samples are covered ().\nMotivated by this, we include an additional penalty function to ensure PI integrity and maximize the number of covered samples within the batch simultaneously.\nLet us denote the mean differences between the PI bounds and the target estimates as and .\nLet \ndenote the maximum distance between a target estimate and its corresponding target value within the batch ().\nFrom this, our penalty function is defined as:\nHere, if the PI integrity is not met (i.e., or ) then their exponent magnitude becomes larger than , producing a large penalty value.\nMoreover, these terms encourage both and not only to be positive but also to be greater than .\nThis implies that the distance between the target and any of its bounds will be larger than the maximum error within the batch, , thus the target will lie within the PI.\nFrom this, we define our second optimization objective as:\nThen our proposed dual accuracy-quality-driven loss function is given by\nwhere is a self-adaptive coefficient that controls the relative importance of and .\nHence, our multi-objective optimization problem can be expressed as:\nFor simplicity, we assume that and have layers and the same network architecture except for the output layer.\nNetwork is trained first.\nThen, weights are initialized using weights except for those of the last layer: .\nNote, that, in general, DualAQD can use different network architectures for and ."
28
+ },
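A minimal PyTorch sketch of this loss as reconstructed from the description above (mean $PIW$ term plus the exponential constraint penalty $\mathcal{L}_C$, balanced by $\lambda$); the exact reductions and constants in the authors' repository may differ.

```python
import torch

def dual_aqd_loss(y_u, y_l, y, y_hat, lam):
    """Sketch of Eq. 5: L_DualAQD = L_PIW + lambda * L_C.

    y_u, y_l : PI bounds predicted by network g for the batch
    y        : observed targets
    y_hat    : point estimates from the (already trained) network f
    lam      : self-adaptive balancing coefficient (Sec. III-C)
    """
    # First objective: PIW_i = |y_u - y| + |y - y_l|, averaged over the batch.
    # Unlike MPIW_capt, this also penalizes PIs that do not cover the target.
    l_piw = ((y_u - y).abs() + (y - y_l).abs()).mean()

    # Constraint penalty: the mean bound-to-estimate distances d_u, d_l
    # should exceed the maximum absolute estimation error in the batch.
    d_u = (y_u - y_hat).mean()
    d_l = (y_hat - y_l).mean()
    eps_max = (y - y_hat).abs().max()
    l_c = torch.exp(eps_max - d_u) + torch.exp(eps_max - d_l)

    return l_piw + lam * l_c
```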
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Batch Sorting",
33
+ "text": "The objective function minimizes the term (Eq. 4 ###reference_###), forcing the distance between the target estimate of a sample and its PI bounds to be larger than the maximum absolute error within its corresponding batch.\nThis term assumes there exists a similarity among the samples within a batch.\nHowever, consider the case depicted in Fig. 2 ###reference_### where we show four samples that have been split randomly into two batches.\nIn Fig. 2 ###reference_###a, the PIs of the second and third samples already cover their observed targets.\nNevertheless, according to , these samples will yield high penalties because the distances between their target estimates and their PI bounds are less than and , respectively, forcing their widths to increase unnecessarily.\nFor this reason, we propose a method called \u201cbatch sorting\u201d, which consists of sorting the training samples with respect to their corresponding generated PI widths after each epoch.\nBy doing so, the batches will process samples with similar widths, avoiding unnecessary widening.\nFor example, in Fig. 2 ###reference_###b, the penalty terms are low given that and .\nNote that, during testing, the PI generated for a given sample is independent of other samples and, as such, batch sorting becomes unnecessary during inference.\n###figure_2###"
34
+ },
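A minimal sketch of batch sorting, assuming the `PINet` interface from the earlier sketch; at the start of each epoch the training indices are re-grouped by current PI width.

```python
import numpy as np
import torch

def sorted_batches(pi_net, X, batch_size=16):
    """Group training samples whose current PI widths are similar, so the
    max-error constraint does not force narrow PIs to widen unnecessarily."""
    pi_net.eval()
    with torch.no_grad():
        y_u, y_l = pi_net(X)
        widths = (y_u - y_l).cpu().numpy()
    order = np.argsort(widths)  # ascending PI width
    return [torch.as_tensor(order[i:i + batch_size])
            for i in range(0, len(order), batch_size)]

# Usage per epoch: for idx in sorted_batches(pi_net, X_train):
#     train on X_train[idx], y_train[idx] with the DualAQD loss
```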
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-C Self-adaptive Coefficient",
39
+ "text": "The coefficient of Eq. 5 ###reference_### balances the two optimization objectives and .\nIn this section, we propose that, instead of being a tunable hyperparameter with a fixed value throughout training, it should be adapted throughout the learning process automatically.\nTypically, the value improves as long as the value increases;\nhowever, extremely wide PIs are not useful.\nWe usually aim to obtain PIs with a nominal probability coverage no greater than .\nA common value for the significance level is , in which case we say that we are 95% confident that the target value will fall within the PI.\nLet denote the value calculated on the training set after the -th training epoch.\nIf is below the confidence target , more relative importance should be given to the objective that enforces PI integrity (i.e., should increase).\nLikewise, if is higher than , more relative importance should be given to the objective that minimizes (i.e., should decrease).\nWe formalize this intuition by defining the cost that quantifies the distance from to the confidence target :\n\nThen, we propose to increase or decrease proportionally to the cost function after each training epoch as follows (see Algorithm 1 ###reference_###):\nwhere is the value of the coefficient at the -th iteration (we consider that ), and is a tunable scale factor.\nNote that Algorithm 1 ###reference_### takes as inputs the data and corresponding targets as well as the trained prediction network , the untrained network , the significance level , and the scale factor .\nFunction batchSorting returns a list of batches sorted according to the PI widths generated during the previous training epoch (see Sec.III-B ###reference_###).\nFunction DualAQD represents the DualAQD loss function (Eq.5 ###reference_###) while update() encompasses the conventional backpropagation and gradient descent processes used to update the weights of network .\nFurthermore, function metrics passes through to generate the corresponding PIs and their widths, and to calculate compares the output to to calculate the value using ."
40
+ },
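A minimal sketch of the per-epoch coefficient update, assuming the additive reading of "increase or decrease proportionally to the cost" (the exact form of Eq. 6 is not recoverable from this text, and `eta=0.01` is an illustrative value, not the tuned one):

```python
def update_lambda(lam, picp_epoch, alpha=0.05, eta=0.01):
    """Assumed additive update: lambda <- lambda + eta * cost.
    cost > 0 (coverage below target) raises lambda, emphasizing PI
    integrity; cost < 0 lowers it, emphasizing narrower PIs."""
    cost = (1.0 - alpha) - picp_epoch
    return lam + eta * cost
```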
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-D Parameter and Hyperparameter Selection",
45
+ "text": "We train a neural network on the training set during epochs using as the loss function.\nAfter the -th training epoch, we calculate the performance metrics on the validation set .\nThus, we consider that the set of optimal weights of the network, , will be those that maximize performance on the validation set.\nThe remaining question is what are the criteria to compare two solutions and .\nTaking this criterion into account, we consider that a solution dominates another solution () if:\nand .\nand\nand\nIn other words, if , we seek a solution whose value is at least 95%.\nAfter exceeding this value, a solution is said to dominate another solution only if it produces narrower PIs.\nWe use a grid search to tune the hyperparameter for training (Eq. 6 ###reference_###).\nFor each value, we train a NN using 10-fold cross-validation and calculate the average performance metrics on the validation sets.\nThen, the hyperparameters are selected using the dominance criteria explained above."
46
+ },
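A minimal sketch of the dominance criteria described above, with each candidate solution summarized by its validation PICP and MPIW.

```python
def dominates(sol_a, sol_b, alpha=0.05):
    """Dominance check between two candidate solutions, each given as
    a dict of validation metrics: {'picp': float, 'mpiw': float}."""
    target = 1.0 - alpha
    a_ok = sol_a["picp"] >= target
    b_ok = sol_b["picp"] >= target
    if a_ok and not b_ok:
        return True                           # only a reaches nominal coverage
    if a_ok and b_ok:
        return sol_a["mpiw"] < sol_b["mpiw"]  # both covered: narrower PIs win
    if not a_ok and not b_ok:
        return sol_a["picp"] > sol_b["picp"]  # neither covered: higher PICP wins
    return False
```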
47
+ {
48
+ "section_id": "3.5",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-E PI Aggregation Using MC-Dropout",
51
+ "text": "In Sec. I ###reference_###, we explained that both the model uncertainty () and the data noise variance () have to be taken into account when generating PIs.\nA model trained using generates PI estimates based on the training data; that is, it accounts for .\nHowever, we still need to quantify the uncertainty of those estimates due to .\nUnlike previous work that used explicit NN ensembles to quantify [26 ###reference_b26###, 12 ###reference_b12###], we propose to use a Monte Carlo-based approach.\nSpecifically, we use MC-Dropout [32 ###reference_b32###], which consists of using dropout layers that ignore each neuron of the network according to some probability or dropout rate.\nThen, during each forward pass with active dropout layers, a slightly different network architecture is used and, as a result, a slightly different prediction is obtained.\nAccording to Gal and Ghahramani [8 ###reference_b8###], this process can be interpreted as a Bayesian approximation of the Gaussian process.\nOur approach consists of using forward passes through the network with active dropout layers.\nGiven an input , the estimates , , and are obtained at the -th iteration.\nHence, the expected target estimate , the expected upper bound , and the expected lower bound are calculated as:"
52
+ },
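A minimal sketch of this aggregation step; `.train()` is called here only to keep the dropout layers active during the K stochastic forward passes (the sketch assumes dropout-only networks, so no other layer behavior changes).

```python
import torch

def mc_dropout_pi(target_net, pi_net, x, K=100):
    """Average K stochastic forward passes to aggregate the PI estimates."""
    target_net.train()  # keep dropout active at inference time
    pi_net.train()
    ys, us, ls = [], [], []
    with torch.no_grad():
        for _ in range(K):
            ys.append(target_net(x))
            y_u, y_l = pi_net(x)
            us.append(y_u)
            ls.append(y_l)
    y_tilde = torch.stack(ys).mean(dim=0)   # expected target estimate
    yu_tilde = torch.stack(us).mean(dim=0)  # expected upper bound
    yl_tilde = torch.stack(ls).mean(dim=0)  # expected lower bound
    return y_tilde, yu_tilde, yl_tilde
```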
53
+ {
54
+ "section_id": "3.6",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-F Comparison to QD-Ens and QD+",
57
+ "text": "Here we consider the differences between our method (DualAQD) and the two methods QD-Ens [12 ###reference_b12###] and QD+ [16 ###reference_b16###].\nFor reference, we include the loss functions used by QD-Ens and QD+:\nwhere , , , and are hyperparameters used by QD-Ens and QD+ to balance the learning objectives.\nThe differences compared to our method are listed in order of importance from highest to lowest as follows:\nQD-Ens and QD+ use objective functions that maximize directly aiming to a goal of at the batch level.\nWe maximize indirectly through , which encourages the model to produce PIs that cover as many training points as possible.\nThis is achieved by producing PIs whose widths are larger than the maximum absolute error within each training batch.\nThen the optimal weights of the network are selected as those that produce a coverage probability on the validation set of at least .\nNote that is not directly differentiable as it involves counting the number of samples that lay within the predicted PIs.\nHowever, QD-Ens and QD+ force its differentiation by including a sigmoid operation and a softening factor (i.e., an additional hyperparameter).\nOn the other hand, the loss functions of DualAQD are already differentiable.\nOur objective minimizes , which is a more suitable penalty function than (cf. Sec III-A ###reference_###).\nOur objective maximizes and ensures PI integrity simultaneously.\nQD+ uses a truncated linear constraint and a separate function to maximize .\nNN-based PI generation methods aim to balance three objectives: (1) accurate target prediction, (2) generation of narrow PIs, and (3) high coverage probability.\nQD-Ens uses a single coefficient within its loss function that balances objectives (2) and (3) and does not optimize objective (1) explicitly, while QD+ uses three coefficients , , and to balance the three objectives.\nAll of the coefficients are tunable hyperparameters.\nOur loss function, , uses a balancing coefficient whose value is not fixed but is adapted throughout the training process using a single hyperparameter (i.e., the scale factor ).\nOur approach uses two companion NNs and that optimize objective (1) and objectives (2) and (3), respectively, to avoid the trade-off between them.\nConversely, the other approaches optimize a single NN architecture.\nWe use MC-Dropout to estimate the model uncertainty. By doing so, we need to train only a single model instead of using an explicit ensemble of models, as in QD-Ens and QD+. Also,\nQD+ requires fitting a split normal density function [33 ###reference_b33###] for each data point to aggregate the PIs produced by the ensemble, thus increasing the complexity of the learning process."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Experiments with Synthetic Data",
+ "text": "Previous approaches have been tested on datasets with similar uncertainty levels across all their samples, or on synthetic datasets with a single region of low uncertainty surrounded by a gradual increase of noise.\nThis is a limitation as it does not allow testing the ability of the PI\u2019s to adapt to rapid changes of uncertainty within the data.\nTherefore, we test all of the methods on a more challenging synthetic dataset with more fluctuations and extreme levels of uncertainty.\nThe code is available at https://github.com/NISL-MSU/PredictionIntervals ###reference_rvals###.\nWe created a synthetic dataset with varying PI widths that consists of a sinusoid with Gaussian noise.\nSpecifically, the dataset contains 1000 points generated using the equation , where and is Gaussian noise whose magnitude depends on : where .\nFor these experiments, we trained a feed-forward neural network with two hidden layers, each with 100 nodes with ReLU activation.\nA -fold cross-validation design was used to train and evaluate all networks.\nKnowing the probability distribution of the noise at each position allows us to calculate the ideal 95% PIs (), , as follows:\nwhere is the approximate value of the 95% confidence interval of the normal distribution.\nTherefore, we define a new metric we called that sums the absolute differences between the estimated bounds and the ideal 95% bounds for all the samples within a set X:\nWe compared the performance of DualAQD using batch sorting and without using batch sorting (denoted as \u201cDualAQD_noBS\u201d in Table I ###reference_###).\nAll networks were trained using a fixed mini-batch size of 16 and the Adadelta optimizer.\nTable I ###reference_### gives the average performance for the metrics calculated on the validation sets, , , , and , and corresponding standard deviations.\nWe also compared our DualAQD PI generation methodology to three other NN-based methods: QD+ [16 ###reference_b16###], QD-Ens [12 ###reference_b12###], and a PI generation method based on MC-Dropout alone [27 ###reference_b27###] (denoted MC-Dropout-PI).\nFor the sake of consistency and fairness, we used the same configuration (i.e., network architecture, optimizer, and batch size) for all the networks trained in our experiments.\nIn our preliminary experiments, for the case of QD+, QD-Ens, and MC-Dropout-PI, we found that batch sorting either helped to improve their performance or there was no significant change.\nThus, for the sake of fairness and consistency, we decided to use batch sorting for all compared methods.\nIn addition, we tested Dropout rates between and .\nThe obtained results did not indicate a statistically significant difference; thus, we used a Dropout rate of 0.1 for all networks and datasets.\nNote that the only difference between the network architecture used by the four methods is that QD+ requires three outputs, QD-Ens requires two (i.e., the lower and upper bounds), and MC-Dropout-PI requires one.\nFor DualAQD and MC-Dropout-PI, we used forward passes with active dropout layers.\nFor QD+ and QD-Ens, we used an ensemble of five networks and a grid search to choose the hyperparameter values.\nFig. 3 ###reference_### shows the PIs generated by the four methods from the first validation set together with the ideal 95% PIs.\n###figure_3###"
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Benchmarking Experiments",
+ "text": "We experimented with eight open-access datasets from the UC Irvine Machine Learning Repository [34 ###reference_b34###].\nNote that even though our experiments use scalar and 2-D regression tasks (Sec. IV-C ###reference_###), our proposed method can be extended to other tasks such as classification.\nFor each dataset, we used a feed-forward neural network whose architecture was the same as that described in Sec. IV-A ###reference_###.\nWe used 10-fold cross-validation to train and evaluate all networks.\nTable II ###reference_### gives the average performance for the metrics calculated on the validation sets, , , and , and corresponding standard deviations.\nWe applied -score normalization (mean equal to 0 and standard deviation equal to 1) to each feature in the training set while the exact same scaling was applied to the features in the validation and test sets.\nLikewise, min-max normalization was applied to the response variable; however, Table II ###reference_### shows the results after re-scaling to the original scale.\nSimilar to Sec. IV-A ###reference_###, all networks were trained using a fixed mini-batch size of 16, except for the Protein and Year datasets that used a mini-batch size of 512 due to their large size.\n###figure_4### The bold entries in Table II ###reference_### indicate the method that achieved the lowest average value and that its difference with respect to the values obtained by the other methods is statistically significant according to a paired -test performed at the 0.05 significance level.\nThe results obtained by DualAQD were significantly narrower than the compared methods while having similar and of at least 95%.\nFurthermore, Fig. 4 ###reference_### depicts the distribution of the scores achieved by all the compared methods on all the datasets, where the line through the center of each box indicates the median F1 score, the edges of the boxes are the 25th and 75th percentiles, whiskers extend to the maximum and minimum points (not counting outliers), and\noutlier points are those past the end of the whiskers (i.e., those points greater than plus the third quartile or less than minus the first quartile, where is the inter-quartile range).\nNote that even though QD-Ens uses only one hyperparameter (see Sec. III-F ###reference_###), it is more sensitive to small changes.\nFor example, a hyperparameter value of yielded poor PIs with while a value of yielded too wide PIs with .\nFor this reason, the hyperparameter of the QD-Ens approach was chosen manually while the scale factor of DualAQD was chosen using a grid search with values .\nFig. 5 ###reference_### shows the difference between the learning curves obtained during one iteration of the cross-validation for the Power dataset using two different values (i.e., and ).\nThe dashed lines indicate the training epoch at which the optimal weights were selected according to the dominance criteria explained in Sec. III-D ###reference_###.\nOn the other hand, the hyperparameters and of QD+ were chosen using a random search since it requires significantly higher training and execution time.\n###figure_5###"
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C Prediction Intervals for Crop Yield Prediction",
+ "text": "We assert our approach is general in applicability.\nTo test this assertion, we decided to experiment with a difficult, real-world application of 2D regression using spatially correlated data to convey the usefulness of our method.\nSpecifically, we focused on the crop yield prediction problem, which has an important impact on society and is one of the main tasks of precision agriculture.\nAccurate and reliable crop yield prediction, along with careful uncertainty management strategies, enables farmers to make informed management decisions, such as determining the nitrogen fertilizer rates needed in specific regions of their fields to maximize profit while minimizing environmental impact [35 ###reference_b35###].\nWe use an early-yield prediction dataset of winter wheat we curated and presented in a previous work [36 ###reference_b36###].\nThe early-yield prediction is posed as a regression problem where the explanatory variables are represented by a set of eight features obtained during the growing season (March).\nThese features consist of nitrogen rate applied, precipitation, slope, elevation, topographic position index (TPI), aspect, and two backscattering coefficients obtained from synthetic aperture radar (SAR) images from Sentinel-I.\nThe response variable corresponds to the yield value in bushels per acre (bu/ac), measured during the harvest season (August).\nIn other words, the data acquired in March is used to predict crop yield values in August of the same year.\nThe yield prediction problem requires two-dimensional (2D) inputs and 2D outputs.\nAs such, it can be viewed as a 2D regression task.\nTo tackle this problem, we trained a CNN using the Hyper3DNetReg\n3D-2D network, architecture we presented in [36 ###reference_b36###], which was specifically designed to predict the yield values of small spatial neighborhoods of a field simultaneously.\nWe then modified this architecture to produce three output patches of pixels (i.e., the estimated yield patch and two patches containing the upper and lower bounds of each pixel, respectively) instead of one.\nFor our experiments, we used data collected from three winter wheat fields, which we refer to as \u201cA,\u201d \u201cB,\u201d and \u201cC\u201d, respectively. Three crop years of data were collected for each field.\nThe information from the first two years was used to create the training and validation sets (90% of the data is used for\ntraining and 10% for validation).\nThe four methods, AQD, QD+, QD-Ens, and MC-Dropout-PI, were compared using the results from the test set of each field, which consists of data from the last observed year and whose ground-truth yield map is denoted as .\nThe test set was used to generate a predicted yield map of the entire field, , and its corresponding lower and upper bounds, and , respectively.\nFig. 
6 ###reference_### shows the ground-truth yield map for field \u201cA\u201d (darker colors represent lower yield values) along with the uncertainty maps obtained by the four compared methods and their corresponding and values.\nField \u201cA\u201d is used as a representative field for presenting our results, since we obtained similar results on the other fields.\nHere, we define the uncertainty map as a map that contains the PI width of each point of the field (darker colors represent lower PI width and thus lower uncertainty).\nThat is, the wider the PI of a given point, the more uncertain its yield prediction.\n###figure_6### We used four metrics to assess the behavior of the four methods (Table III ###reference_###).\nFirst, we calculated the root mean square error () between the ground-truth yield map and the estimated yield map .\nThen, we considered the mean prediction interval width () and prediction interval probability coverage ().\nNote that -fold or cross-validation cannot be used in this experimental setting.\nThus, to help us explain the advantages of our method over the others in the context of the HQ principle, we introduce a new metric that summarizes the and metrics shown in Table III ###reference_###.\nLet represent the mean PI width after min-max normalization using as upper bound the maximum value among the four methods in each field.\nLet denote the weighted geometric mean between and ( (i.e., the complement of the PI coverage probability) with being the relative importance between both terms. Then\nAccording to the HQ principle that aims to obtain narrow PIs and high probability coverage, low values are preferable when comparing the performance of different PI-generation methods.\nFig. 7 ###reference_### shows the comparison of the metric obtained for each method on the three tested fields for different values.\nIn order to summarize the behavior shown in Fig. 7 ###reference_### into a single metric, we calculated the integral .\nSince we seek to obtain low values for various , low values are preferable.\nBold entries in Table III ###reference_### indicate the method with the lowest .\n###figure_7###"
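A small Python sketch of this summary metric, assuming the weighted geometric mean takes the standard form a^beta * b^(1-beta) and that the min-max normalization uses 0 as its lower bound; the example numbers are DualAQD's field \u201cA\u201d entries from Table III:

```python
import numpy as np

def q_beta(mpiw, picp, mpiw_max, beta):
    """Weighted geometric mean of normalized PI width and coverage error."""
    mpiw_norm = mpiw / mpiw_max          # min-max with assumed lower bound 0
    return (mpiw_norm ** beta) * ((1.0 - picp) ** (1.0 - beta))

# Q_beta curve for DualAQD on field "A" (MPIW 53.75, PICP 92.8%); the
# normalizing upper bound 54.27 is the largest MPIW among the four methods.
betas = np.linspace(0.0, 1.0, 101)
q_curve = q_beta(53.75, 0.928, 54.27, betas)

# Summary: trapezoidal integral of the curve over beta (lower is better).
q_integral = float(np.sum((q_curve[:-1] + q_curve[1:]) / 2.0 * np.diff(betas)))
```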
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Discussion",
+ "text": "Our loss function was designed to minimize the estimation error and produce narrow PIs simultaneously while using constraints that maximize the coverage probability inherently.\nFrom Tables I ###reference_### and II ###reference_###, we note that DualAQD consistently produced significantly narrower PIs than the compared methods, according to the paired -test performed at the 0.05 significance level, except for the Protein dataset, where QD+ obtained comparable PI widths.\nSimultaneously, we yielded values of at least 95% and better or comparable values.\nIn addition, the values reported in Table I ###reference_### demonstrate that DualAQD is the method that best adapted to the highly varying uncertainty levels of our synthetic dataset.\nThus, the PI bounds generated by DualAQD were the closest to the ideal 95% PIs.\nNotice that DualAQD obtains lower values than QD+ consistently despite the fact that QD+ also includes an objective function that minimizes the error of the target predictions.\nThe reason is that our method uses a NN (i.e., ) that is specialized in generating accurate target predictions, and its optimization objective does not compete with others.\nConversely, QD+ uses a loss function that balances four objective functions: minimizing the PI widths, maximizing PI coverage probability, minimizing the target prediction errors, and ensuring PI integrity.\nThe NN used by QD-Ens, on the other hand, only generates the upper and lower bounds of the PIs.\nThe target estimate is then calculated as the central point between the PI bounds.\nAs a consequence of not using a NN specialized in minimizing the target prediction error, QD-Ens achieved the worst values of the compared methods, except for the Year dataset.\nIt is worth mentioning that one of the advantages of using DualAQD over QD+ and QD-Ens is that we achieved better PIs while requiring less computational complexity.\nThat is, our method requires training only two NNs and uses MC-Dropout to account for the model uncertainty while QD+ and QD-Ens require training ensembles of five NNs.\nIn addition, QD+ requires extra complexity given that it uses a split normal aggregation method that involves an additional fitting process for each data point during testing.\nNote that using deep ensembles of models is expected to perform better or similar to MC-Dropout when using forward passes [37 ###reference_b37###].\nIn other words, using an ensemble of five NNs, as QD and QD+ do, is expected to perform better than using five forward passes through the NN using MC-Dropout.\nNevertheless, during inference, we are able to perform not only five but 100 passes through the NN without significantly adding computationally cost.\nOur method becomes more practical in the sense that, even when it uses the rough estimates of model uncertainty provided by MC-Dropout, it is still able to generate significantly higher-quality PIs.\nIn Fig. 5 ###reference_###, we see the effect of using different scale factors to update the balancing coefficient of .\nNotice that DualAQD produced wide PIs at the beginning of the training process in order to ensure PI integrity; as a consequence, the and values improved drastically.\nOnce the generated PIs were wide enough to cover most of the samples in the training set (i.e., ), DualAQD focused on reducing the PI widths until reached the nominal probability coverage .\nThe rate at which and were reduced was determined by the scale factor .\nFurthermore, Fig. 5 ###reference_###a () and Fig. 
5 ###reference_###b () show that both models converged to a similar value () despite having improved at different rates.\nIt is worth noting that we did not find a statistical difference between the results produced by the different values that were tested on all the datasets (i.e., ), except for the case of Kin8nm.\nWhen various values were considered equally as good for a given dataset, we selected the value that yielded the lowest average , which was for Boston, Concrete, and Yacht, for Kin8nm, and for the rest of the datasets.\nThis is significant because it shows that the sensitivity of our method to the scale factor is low, unlike the hyperparameters required by QD-Ens, as explained in detail in Sec. IV-B ###reference_###.\nWhat is more, our method requires a single hyperparameter, , while QD-Ens requires two: and a softening factor used to enforce differentiability of its loss function; and QD+ requires four: , , and , and the same softening factor used by QD-Ens.\nNote that our method does not need an additional softening factor given that the functions of DualAQD are already differentiable.\nWe see in Table III ###reference_### that DualAQD yielded better values than the other methods, except for field \u201cB\u201d where QD-Ens had the highest value, albeit at the expense of generating excessively wide PIs.\nWhat is more, Fig. 7 ###reference_### shows that, in general, DualAQD obtained lower values; as a consequence, it achieved the lowest value in each of the three fields (Table III ###reference_###), which implies that it offers a better width-coverage trade-off in comparison to the other methods.\nNotice that Table III ###reference_###\nshows values lower than 95% for field A.\nDuring training and validation, the coverage probability did reach the nominal value of 95%.\nNote that, since the distribution of the test set (2020) differs from the one seen during training (2016 and 2018), the values may not be equal to those obtained during training.\nThis illustrates the ability to show increased uncertainty when insufficient data is available for making reliable predictions.\nFig. 6 ###reference_### shows that DualAQD was able to produce better distributed PIs for field \u201cA\u201d (i.e., with a wider range of values) while achieving slightly better and values than QD-Ens.\nThis means that DualAQD is more dynamic in the sense that it outputs narrower PIs when it considers there is more certainty and wider PIs when there is more uncertainty (recall the behavior in Fig. 3 ###reference_###).\nAs a consequence, 54.4%, 44.3%, and 40.3% of the points processed by DualAQD on field \u201cA\u201d have smaller PI width than QD+, QD, and MC-Dropout, respectively, while still being able to cover the observed target values.\nSimilarly, 88.7%, 65.3%, and 49.9% of the points processed by DualAQD on field \u201cB\u201d have smaller PI width than QD+, QD, and MC-Dropout while still covering the observed target values;\nand 62.5%, 6.0%, and 8.8% of the points processed by DualAQD on field \u201cC\u201d have smaller PI width than QD+, QD, and MC-Dropout while still covering the observed target values.\nFinally, Fig. 
6 ###reference_### shows that DualAQD indicates higher uncertainty in the lower (southern) region of the field, which received a nitrogen rate value that was not used in previous years (i.e., it was not available for training).\nSimilarly, regions of high yield values are related to high nitrogen rate values; however, there exist considerably fewer training samples of this type, which logically would lead to greater uncertainty.\nThus, there is more uncertainty when predicting regions that received high nitrogen rate values, and this is represented effectively by the uncertainty map generated by DualAQD but not the compared methods.\nIt is worth mentioning that even though DualAQD showed some degree of robustness empirically when given previously unseen samples, neural network-based PI generation methods do not offer any guarantee for the behavior of the model for out-of-distribution samples."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusion",
+ "text": "Accurate uncertainty quantification is important to increase the reliability of deep learning models in real-world applications that require uncertainty to be addressed.\nIn this work, we focus on methods that generate prediction intervals using conventional deep neural networks for regression tasks.\nAs such, we presented a method that uses two companion NNs: one that specializes in generating accurate target estimations and another that has two outputs and is trained using a novel loss function designed to generate accurate and narrow PIs.\nWe tested our method, DualAQD, with a challenging synthetic dataset and seven benchmark datasets using feedforward neural networks.\nWe also experimented with a real-world application of 2D regression using spatially correlated data to convey the usefulness and applicability of our PI generation method.\nTherefore, we conclude that by using our loss function , we were able to produce higher-quality PIs in comparison to QD+, QD-Ens, and MC-Dropout-PI; that is, our method generated significantly narrower PIs while maintaining a nominal probability coverage without detriment to its target estimation accuracy.\nDualAQD was also shown to be more dynamic in the sense that it better reflects varying levels of uncertainty within the data.\nIt is important to point out that we achieved better performance metrics than the competing algorithms using less computational complexity and fewer tunable hyperparameters.\nIn the future, we plan to adapt our loss function for its use in Bayesian neural networks."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>PI metrics , , , and evaluated on the synthetic dataset using cross-validation.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.34\" style=\"width:433.6pt;height:162.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(72.6pt,-27.2pt) scale(1.50352833716953,1.50352833716953) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.34.24\">\n<tr class=\"ltx_tr\" id=\"S4.T1.14.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.14.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.14.4.4.5.1\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.11.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.12.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.13.3.3.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.14.4.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.18.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.18.8.8.5\">DualAQD</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.15.5.5.1\">5.27 0.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.16.6.6.2\">7.30 0.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.17.7.7.3\">95.5 0.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.18.8.8.4\">1.52 0.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.22.12.12.5\">DualAQD_noBS</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.19.9.9.1\">5.27 0.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.20.10.10.2\">9.16 0.35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.21.11.11.3\">96.3 0.77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.22.12.12.4\">3.08 0.19</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.26.16.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.26.16.16.5\">QD+</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.23.13.13.1\">5.28 0.29</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.24.14.14.2\">8.56 0.14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.25.15.15.3\">95.5 0.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.26.16.16.4\">3.12 0.24</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.30.20.20.5\">QD-Ens</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.27.17.17.1\">5.31 0.26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.28.18.18.2\">10.17 0.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.29.19.19.3\">94.0 1.57</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.30.20.20.4\">4.88 0.17</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.34.24.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.34.24.24.5\">MC-Dropout-PI</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.31.21.21.1\">5.22 0.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.32.22.22.2\">9.31 0.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.33.23.23.3\">93.3 0.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.34.24.24.4\">5.04 0.08</td>\n</tr>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE I: PI metrics , , , and evaluated on the synthetic dataset using cross-validation."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>PI metrics , , and evaluated on the benchmark datasets using -fold cross-validation.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.128\" style=\"width:433.6pt;height:709.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(66.7pt,-109.1pt) scale(1.44413505809349,1.44413505809349) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.128.120\">\n<tr class=\"ltx_tr\" id=\"S4.T2.128.120.121\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.121.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.1.1\">Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.121.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.2.1\">Metric</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.121.3\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.3.1\">DualAQD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.121.4\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.4.1\">QD+</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.121.5\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.5.1\">QD-Ens</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.121.6\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.128.120.121.6.1\">\n<tr class=\"ltx_tr\" id=\"S4.T2.128.120.121.6.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.128.120.121.6.1.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.6.1.1.1.1\">MC-</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.128.120.121.6.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.128.120.121.6.1.2.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.128.120.121.6.1.2.1.1\">Dropout-PI</span></td>\n</tr>\n</table>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.13.5.5\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.13.5.5.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.13.5.5.6.1\">Boston</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.9.1.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.2.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.10.2.2.2.1\">9.992.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.11.3.3.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">12.142.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.12.4.4.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">16.130.67</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.13.5.5.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">12.522.28</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.18.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.14.6.6.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.15.7.7.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">8.913.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.16.8.8.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">11.915.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.17.9.9.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.295.07</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.18.10.10.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">8.943.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.23.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.19.11.11.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.20.12.12.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.01.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.21.13.13.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.61.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.22.14.14.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">97.21.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.23.15.15.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">96.00.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.28.20.20\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.28.20.20.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.28.20.20.6.1\">Concrete</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.24.16.16.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.25.17.17.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.25.17.17.2.1\">15.721.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.26.18.18.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">18.572.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.27.19.19.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">25.421.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.28.20.20.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">20.521.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.33.25.25\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.29.21.21.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.30.22.22.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">22.454.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.31.23.23.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">26.658.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.32.24.24.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">29.305.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.33.25.25.5\" 
style=\"padding-top:1pt;padding-bottom:1pt;\">22.714.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.38.30.30\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.34.26.26.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.35.27.27.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.20.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.36.28.28.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.21.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.37.29.29.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">97.91.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.38.30.30.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.71.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.43.35.35\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.43.35.35.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.43.35.35.6.1\">Energy</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.39.31.31.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.40.32.32.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.40.32.32.2.1\">1.410.12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.41.33.33.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">2.940.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.42.34.34.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">10.991.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.43.35.35.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">3.810.21</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.48.40.40\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.44.36.36.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.45.37.37.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.250.05</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.46.38.38.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.310.08</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.47.39.39.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.350.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.48.40.40.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.260.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.53.45.45\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.49.41.41.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.50.42.42.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">96.50.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.51.43.43.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">99.01.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.52.44.44.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">100.00.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.53.45.45.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">99.50.6</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"S4.T2.58.50.50\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.58.50.50.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.58.50.50.6.1\">Kin8nm</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.54.46.46.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.55.47.47.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.55.47.47.2.1\">0.2800.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.56.48.48.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.3110.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.57.49.49.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.5020.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.58.50.50.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.3360.01</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.63.55.55\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.59.51.51.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.60.52.52.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.0050.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.61.53.53.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.0070.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.62.54.54.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.0090.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.63.55.55.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.0050.00</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.68.60.60\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.64.56.56.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.65.57.57.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.10.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.66.58.58.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">96.60.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.67.59.59.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">98.50.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.68.60.60.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">97.50.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.73.65.65\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.73.65.65.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.73.65.65.6.1\">Power</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.69.61.61.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.70.62.62.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.70.62.62.2.1\">14.600.35</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.71.63.63.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.310.44</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S4.T2.72.64.64.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">27.571.54</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.73.65.65.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">16.080.63</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.78.70.70\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.74.66.66.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.75.67.67.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.231.34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.76.68.68.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">16.431.34</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.77.69.69.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">17.141.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.78.70.70.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.261.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.83.75.75\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.79.71.71.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.80.72.72.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.20.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.81.73.73.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.70.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.82.74.74.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">99.60.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.83.75.75.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">96.40.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.88.80.80\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.88.80.80.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.88.80.80.6.1\">Protein</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.84.76.76.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.85.77.77.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.85.77.77.2.1\">13.020.26</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.86.78.78.3\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.86.78.78.3.1\">13.050.14</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.87.79.79.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.790.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.88.80.80.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.950.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.93.85.85\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.89.81.81.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.90.82.82.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">14.790.40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.91.83.83.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">17.510.59</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_t\" id=\"S4.T2.92.84.84.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">18.350.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.93.85.85.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">15.050.42</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.98.90.90\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.94.86.86.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.95.87.87.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.00.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.96.88.88.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.40.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.97.89.89.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.10.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.98.90.90.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">94.80.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.103.95.95\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.103.95.95.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.103.95.95.6.1\">Yacht</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.99.91.91.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.100.92.92.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.100.92.92.2.1\">1.560.42</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.101.93.93.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">4.100.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.102.94.94.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">10.991.47</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.103.95.95.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">4.741.20</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.108.100.100\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.104.96.96.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.105.97.97.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.510.53</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.106.98.98.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.720.70</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.107.99.99.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.350.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.108.100.100.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">0.530.54</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.113.105.105\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.109.101.101.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.110.102.102.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">97.10.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.111.103.103.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">98.42.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S4.T2.112.104.104.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">100.00.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.113.105.105.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">100.00.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.118.110.110\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.118.110.110.6\" rowspan=\"3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span>\n<span class=\"ltx_text\" id=\"S4.T2.118.110.110.6.1\">Year</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.114.106.106.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.115.107.107.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.115.107.107.2.1\">29.680.29</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.116.108.108.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">32.680.25</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.117.109.109.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">37.030.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.118.110.110.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">34.250.16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.123.115.115\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.119.111.111.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.120.112.112.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">73.260.76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.121.113.113.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">104.88.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.122.114.114.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">78.120.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.123.115.115.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">73.130.69</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.128.120.120\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.124.116.116.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.125.117.117.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.10.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.126.118.118.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">95.40.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.127.119.119.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">37.030.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.128.120.120.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">93.820.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.128.120.122\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T2.128.120.122.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span></td>\n<td class=\"ltx_td\" id=\"S4.T2.128.120.122.2\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.128.120.122.3\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.128.120.122.4\" 
style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.128.120.122.5\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.128.120.122.6\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n</tr>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE II: PI metrics , , and evaluated on the benchmark datasets using -fold cross-validation."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>PI metrics , , , and evaluated on the yield prediction datasets.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.12\" style=\"width:433.6pt;height:578.7pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(120.4pt,-160.7pt) scale(2.24842660806258,2.24842660806258) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.12.4\">\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.4.5.1\" style=\"font-size:80%;\">Field</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.4.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.4.6.1\" style=\"font-size:80%;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.9.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.10.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.11.3.3.3\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.11.3.3.3.1\">\n<tr class=\"ltx_tr\" id=\"S4.T3.11.3.3.3.1.1\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.11.3.3.3.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.11.3.3.3.1.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.11.3.3.3.1.2.1\"><span class=\"ltx_text\" id=\"S4.T3.11.3.3.3.1.2.1.1\" style=\"font-size:80%;\">(%)</span></td>\n</tr>\n</table>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.4.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.5.1\" rowspan=\"4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.5.1.1\" style=\"font-size:80%;\">A</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.5.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.5.2.1\" style=\"font-size:80%;\">DualAQD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.5.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.5.3.1\" style=\"font-size:80%;\">15.44</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.5.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.5.4.1\" style=\"font-size:80%;\">53.75</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.5.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.5.5.1\" style=\"font-size:80%;\">92.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.5.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.5.6.1\" style=\"font-size:80%;\">.350</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.6.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.6.1.1\" style=\"font-size:80%;\">QD+</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.6.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.6.2.1\" style=\"font-size:80%;\">17.73</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.6.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.6.3.1\" 
style=\"font-size:80%;\">54.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.6.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.6.4.1\" style=\"font-size:80%;\">89.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.6.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.6.5.1\" style=\"font-size:80%;\">.397</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.7.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.7.1.1\" style=\"font-size:80%;\">QD-Ens</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.7.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.7.2.1\" style=\"font-size:80%;\">15.55</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.7.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.7.3.1\" style=\"font-size:80%;\">53.99</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.7.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.7.4.1\" style=\"font-size:80%;\">92.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.7.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.7.5.1\" style=\"font-size:80%;\">.359</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.8.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.8.1.1\" style=\"font-size:80%;\">MC-Dropout-PI</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.8.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.8.2.1\" style=\"font-size:80%;\">15.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.8.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.8.3.1\" style=\"font-size:80%;\">51.68</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.8.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.8.4.1\" style=\"font-size:80%;\">91.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.8.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.8.5.1\" style=\"font-size:80%;\">.355</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.9\">\n<td class=\"ltx_td ltx_align_right ltx_border_l\" id=\"S4.T3.12.4.9.1\" rowspan=\"4\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span><span class=\"ltx_text\" id=\"S4.T3.12.4.9.1.1\" style=\"font-size:80%;\"> </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.9.1.2\" style=\"font-size:80%;\">B</span><span class=\"ltx_text\" id=\"S4.T3.12.4.9.1.3\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.9.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.9.2.1\" style=\"font-size:80%;\">DualAQD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.9.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.9.3.1\" style=\"font-size:80%;\">11.16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.9.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.9.4.1\" style=\"font-size:80%;\">43.45</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.9.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.9.5.1\" style=\"font-size:80%;\">94.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T3.12.4.9.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.9.6.1\" style=\"font-size:80%;\">.221</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.10.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.10.1.1\" style=\"font-size:80%;\">QD+</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.10.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.10.2.1\" style=\"font-size:80%;\">11.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.10.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.10.3.1\" style=\"font-size:80%;\">50.17</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.10.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.10.4.1\" style=\"font-size:80%;\">93.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.10.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.10.5.1\" style=\"font-size:80%;\">.261</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.11.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.11.1.1\" style=\"font-size:80%;\">QD-Ens</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.11.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.11.2.1\" style=\"font-size:80%;\">12.95</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.11.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.11.3.1\" style=\"font-size:80%;\">73.09</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.11.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.11.4.1\" style=\"font-size:80%;\">95.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.11.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.11.5.1\" style=\"font-size:80%;\">.306</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.12.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.12.1.1\" style=\"font-size:80%;\">MC-Dropout-PI</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.12.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.12.2.1\" style=\"font-size:80%;\">10.83</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.12.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.12.3.1\" style=\"font-size:80%;\">47.18</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.12.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.12.4.1\" style=\"font-size:80%;\">94.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.12.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.12.5.1\" style=\"font-size:80%;\">.241</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.13\">\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_l\" id=\"S4.T3.12.4.13.1\" rowspan=\"4\">\n<span class=\"ltx_rule\" style=\"width:100%;height:1.2pt;background:black;display:inline-block;\">\u00a0</span><span class=\"ltx_text\" id=\"S4.T3.12.4.13.1.1\" style=\"font-size:80%;\">\n</span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.13.1.2\" style=\"font-size:80%;\">C</span><span class=\"ltx_text\" id=\"S4.T3.12.4.13.1.3\" style=\"font-size:80%;\"></span>\n</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.13.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.13.2.1\" style=\"font-size:80%;\">DualAQD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.13.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.13.3.1\" style=\"font-size:80%;\">18.48</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.13.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.13.4.1\" style=\"font-size:80%;\">59.96</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.13.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.13.5.1\" style=\"font-size:80%;\">96.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.12.4.13.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.12.4.13.6.1\" style=\"font-size:80%;\">.279</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.14.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.14.1.1\" style=\"font-size:80%;\">QD+</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.14.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.14.2.1\" style=\"font-size:80%;\">22.27</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.14.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.14.3.1\" style=\"font-size:80%;\">62.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.14.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.14.4.1\" style=\"font-size:80%;\">93.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.14.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.14.5.1\" style=\"font-size:80%;\">.336</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.15.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.15.1.1\" style=\"font-size:80%;\">QD-Ens</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.15.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.15.2.1\" style=\"font-size:80%;\">17.75</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.15.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.15.3.1\" style=\"font-size:80%;\">39.93</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.15.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.15.4.1\" style=\"font-size:80%;\">63.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.15.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.15.5.1\" style=\"font-size:80%;\">.490</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.12.4.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.16.1\"><span class=\"ltx_text\" id=\"S4.T3.12.4.16.1.1\" style=\"font-size:80%;\">MC-Dropout-PI</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.16.2\"><span class=\"ltx_text\" id=\"S4.T3.12.4.16.2.1\" style=\"font-size:80%;\">17.15</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.16.3\"><span class=\"ltx_text\" id=\"S4.T3.12.4.16.3.1\" style=\"font-size:80%;\">50.61</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.16.4\"><span class=\"ltx_text\" id=\"S4.T3.12.4.16.4.1\" 
style=\"font-size:80%;\">89.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T3.12.4.16.5\"><span class=\"ltx_text\" id=\"S4.T3.12.4.16.5.1\" style=\"font-size:80%;\">.349</span></td>\n</tr>\n</table>\n</span></div>\n</figure>",
108
+ "capture": "TABLE III: PI metrics , , , and evaluated on the yield prediction datasets."
109
+ }
110
+ },
111
+ "image_paths": {
112
+ "1": {
113
+ "figure_path": "2212.06370v4_figure_1.png",
114
+ "caption": "Figure 1: An example of our PI-generation method on a synthetic dataset.",
115
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/introduction.jpg"
116
+ },
117
+ "2": {
118
+ "figure_path": "2212.06370v4_figure_2.png",
119
+ "caption": "Figure 2: \u21123subscript\u21123\\mathcal{L}_{3}caligraphic_L start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT penalty calculation, (a) without batch sorting; (b) with batch sorting.",
120
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/bs.jpg"
121
+ },
122
+ "3": {
123
+ "figure_path": "2212.06370v4_figure_3.png",
124
+ "caption": "Figure 3: Performance of PI generation methods on the synthetic dataset.",
125
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/comparison.jpg"
126
+ },
127
+ "4": {
128
+ "figure_path": "2212.06370v4_figure_4.png",
129
+ "caption": "Figure 4: Box plots of the M\u2062P\u2062I\u2062Wv\u2062a\u2062l\ud835\udc40\ud835\udc43\ud835\udc3csubscript\ud835\udc4a\ud835\udc63\ud835\udc4e\ud835\udc59MPIW_{val}italic_M italic_P italic_I italic_W start_POSTSUBSCRIPT italic_v italic_a italic_l end_POSTSUBSCRIPT and M\u2062S\u2062Ev\u2062a\u2062l\ud835\udc40\ud835\udc46subscript\ud835\udc38\ud835\udc63\ud835\udc4e\ud835\udc59MSE_{val}italic_M italic_S italic_E start_POSTSUBSCRIPT italic_v italic_a italic_l end_POSTSUBSCRIPT scores of DualAQD, QD+, QD-Ens, and MC-Dropout-PI PI generation methods on the synthetic and benchmarking datasets: (a) Synthetic. (b) Boston. (c) Concrete. (d) Energy. (e) Kin8nm. (f) Power. (g) Protein. (h) Yacht. (i) Year.",
130
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/box_plots2.jpg"
131
+ },
132
+ "5": {
133
+ "figure_path": "2212.06370v4_figure_5.png",
134
+ "caption": "Figure 5: M\u2062P\u2062I\u2062W\ud835\udc40\ud835\udc43\ud835\udc3c\ud835\udc4aMPIWitalic_M italic_P italic_I italic_W and P\u2062I\u2062C\u2062P\ud835\udc43\ud835\udc3c\ud835\udc36\ud835\udc43PICPitalic_P italic_I italic_C italic_P learning curves obtained for the Power dataset using DualAQD. (a) \u03b7=0.01\ud835\udf020.01\\eta=0.01italic_\u03b7 = 0.01. (b) \u03b7=0.1\ud835\udf020.1\\eta=0.1italic_\u03b7 = 0.1.",
135
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/Power_curves2.jpg"
136
+ },
137
+ "6": {
138
+ "figure_path": "2212.06370v4_figure_6.png",
139
+ "caption": "Figure 6: Uncertainty maps comparison for field A.",
140
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/unc_maps.jpg"
141
+ },
142
+ "7": {
143
+ "figure_path": "2212.06370v4_figure_7.png",
144
+ "caption": "Figure 7: \u03bc\u03c9subscript\ud835\udf07\ud835\udf14\\mu_{\\omega}italic_\u03bc start_POSTSUBSCRIPT italic_\u03c9 end_POSTSUBSCRIPT vs. \u03c9\ud835\udf14\\omegaitalic_\u03c9 comparison on yield prediction datasets.",
145
+ "url": "http://arxiv.org/html/2212.06370v4/extracted/5487902/images/mucurves.jpg"
146
+ }
147
+ },
148
+ "validation": true,
149
+ "references": [],
150
+ "url": "http://arxiv.org/html/2212.06370v4"
151
+ }
20240322/2212.10744v2.json ADDED
@@ -0,0 +1,55 @@
1
+ {
2
+ "title": "An Audio-Visual Speech Separation Model Inspired by Cortico-Thalamo-Cortical Circuits",
3
+ "abstract": "Audio-visual approaches involving visual inputs have laid the foundation for recent progress in speech separation. However, the optimization of the concurrent usage of auditory and visual inputs is still an active research area. Inspired by the cortico-thalamo-cortical circuit, in which the sensory processing mechanisms of different modalities modulate one another via the non-lemniscal sensory thalamus, we propose a novel cortico-thalamo-cortical neural network (CTCNet) for audio-visual speech separation. First, the CTCNet learns hierarchical auditory and visual representations in a bottom-up manner in separate auditory and visual subnetworks, mimicking the functions of the auditory and visual cortical areas. Then, inspired by the large number of connections between cortical regions and the thalamus, the model fuses the auditory and visual information in a thalamic subnetwork through top-down connections. Finally, the model transmits this fused information back to the auditory and visual subnetworks, and the above process is repeated several times. The results of experiments on three speech separation benchmark datasets show that CTCNet remarkably outperforms existing methods. Our results suggest that mimicking the anatomical connectome of the mammalian brain has great potential for advancing the development of deep neural networks.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "This demo file is intended to serve as a \u201cstarter file\u201d\nfor IEEE Computer Society journal papers produced under LATEX using\nIEEEtran.cls version 1.8b and later.\nI wish you the best of success.\nmds\nAugust 26, 2015"
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Subsection Heading Here",
15
+ "text": "Subsection text here."
16
+ },
17
+ {
18
+ "section_id": "1.1.1",
19
+ "parent_section_id": "1.1",
20
+ "section_name": "1.1.1 Subsubsection Heading Here",
21
+ "text": "Subsubsection text here."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Conclusion",
27
+ "text": "The conclusion goes here."
28
+ }
29
+ ],
30
+ "appendix": [
31
+ {
32
+ "section_id": "Appendix 1",
33
+ "parent_section_id": null,
34
+ "section_name": "Appendix A Proof of the First Zonklar Equation",
35
+ "text": "Appendix one text goes here."
36
+ },
37
+ {
38
+ "section_id": "Appendix 2",
39
+ "parent_section_id": null,
40
+ "section_name": "Appendix B",
41
+ "text": "Appendix two text goes here."
42
+ },
43
+ {
44
+ "section_id": "Appendix x1",
45
+ "parent_section_id": null,
46
+ "section_name": "Acknowledgments",
47
+ "text": "The authors would like to thank\u2026"
48
+ }
49
+ ],
50
+ "tables": {},
51
+ "image_paths": {},
52
+ "validation": true,
53
+ "references": [],
54
+ "url": "http://arxiv.org/html/2212.10744v2"
55
+ }
20240322/2302.05440v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2302.05951v2.json ADDED
@@ -0,0 +1,559 @@
1
+ {
2
+ "title": "Fully Dynamic Exact Edge Connectivity in Sublinear Time",
3
+ "abstract": "Given a simple -vertex, -edge graph undergoing\nedge insertions and deletions, we give two new fully dynamic algorithms for exactly\nmaintaining the edge connectivity of in worst-case update time and\n amortized update time, respectively. Prior to our work,\nall dynamic edge connectivity algorithms assumed bounded edge connectivity, guaranteed approximate solutions, or were restricted to edge insertions only. Our results answer\nin the affirmative an open question posed by Thorup [Combinatorica\u201907].",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The edge connectivity of an undirected, unweighted graph is the minimum number of edges whose removal disconnects the graph . Finding the edge connectivity of a graph is one of the cornerstone problems in combinatorial optimization and dates back to the work of Gomory and Hu [23 ###reference_b23###] in 1961. Since then, a large body of research work has dealt with the question of obtaining faster algorithms for this problem in the classical sequential setting [12 ###reference_b12###, 45 ###reference_b45###, 26 ###reference_b26###, 15 ###reference_b15###, 55 ###reference_b55###, 37 ###reference_b37###, 36 ###reference_b36###, 38 ###reference_b38###, 27 ###reference_b27###, 42 ###reference_b42###, 6 ###reference_b6###, 16 ###reference_b16###, 44 ###reference_b44###, 17 ###reference_b17###, 41 ###reference_b41###]. This line of work culminated in a breakthrough result by Kawarabayashi and Thorup [38 ###reference_b38###] in 2015 who obtained a deterministic algorithm that runs in 111We use to hide poly-logarithmic factors. time on a -vertex, -edge graph, which was later improved by Henzinger, Rao, and Wang [27 ###reference_b27###] to . Edge connectivity has also been extensively studied in various models of computation including the parallel model [34 ###reference_b34###, 18 ###reference_b18###, 43 ###reference_b43###],\nthe distributed models [51 ###reference_b51###, 19 ###reference_b19###, 49 ###reference_b49###, 20 ###reference_b20###, 10 ###reference_b10###, 50 ###reference_b50###, 21 ###reference_b21###, 22 ###reference_b22###, 11 ###reference_b11###], the semi-streaming model [1 ###reference_b1###, 44 ###reference_b44###, 4 ###reference_b4###],\nand several query models [52 ###reference_b52###, 44 ###reference_b44###, 40 ###reference_b40###].\nAll these models admit non-trivial, if not near-optimal, algorithms\nfor exactly computing edge connectivity.\nWe study edge connectivity in the fully dynamic setting, where the underlying graph undergoes edge insertions and deletions, known as edge updates, and the goal is to maintain the edge connectivity of after each update with as small update time as possible. In contrast to the long line of research work in other computational models, the only known algorithm for the fully dynamic edge connectivity problem is the trivial solution of recomputing the edge connectivity from scratch after each update, which costs time per update. Thorup [56 ###reference_b56###] introduced this problem and gave a fully dynamic edge connectivity algorithm that supports fast updates as long as the edge connecitvity is upper bounded by some parameter , where is a small polynomial in . Concretely, his algorithm achieves worst-case time per edge update, and thus is slower than the trivial algorithm whenever . In spite of dynamic graph algorithms being a flourishing research field, prior to our work, there has been no progress on the fully dynamic edge connectivity problem in the last 15 years.\nIn this paper we give the first solutions with update time, answering in the affirmative an open question posed by Thorup [56 ###reference_b56###] of whether this is possible. 
More concretely, we show the following two results.\nGiven an undirected, unweighted -vertex, -edge graph , there is a fully dynamic randomized algorithm that processes an online sequence of edge insertions or deletions and maintains the edge connectivity of in\n worst-case update time with high probability.\nThe above randomized algorithm works against an adaptive adversary and achieves sub-linear update time as long as where where is some positive constant. We complement both points of this result by designing a second algorithm that is (i) deterministic and (ii) achieves sub-linear update times regardless of graph density.\nGiven an undirected, unweighted -vertex, -edge graph , there is a fully dynamic deterministic algorithm that processes an online sequence of edge insertions or deletions and maintains the edge connectivity of in amortized update time.\nBoth algorithms can also report the edges on a cut\nthat attains the edge connectivity of in time nearly proportional to the edge connectivity, with the caveat that the algorithm\nfrom Theorem 1.1 ###reference_theorem1### then only works against an oblivious adversary."
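To ground the problem statement, the sketch below implements the trivial fully dynamic baseline mentioned above: recompute edge connectivity from scratch after every update. It is a minimal illustration, not the paper's algorithm; it assumes NetworkX is available and uses its static edge_connectivity routine as the black box.

```python
# The trivial fully dynamic algorithm: full static recomputation per update.
import networkx as nx

class TrivialDynamicEdgeConnectivity:
    def __init__(self, n):
        self.G = nx.Graph()
        self.G.add_nodes_from(range(n))

    def insert(self, u, v):
        self.G.add_edge(u, v)
        return nx.edge_connectivity(self.G)  # recompute from scratch

    def delete(self, u, v):
        self.G.remove_edge(u, v)
        return nx.edge_connectivity(self.G)

D = TrivialDynamicEdgeConnectivity(4)
for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    lam = D.insert(*e)
print(lam)             # 2: a 4-cycle plus a chord is 2-edge-connected
print(D.delete(0, 2))  # back to the plain 4-cycle, still 2
```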
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Our Techniques",
15
+ "text": "In this section, we discuss the main obstacles that explain the lack of progress on fully dynamic exact edge connectivity algorithms\nand point out the key ideas that enable us to achieve our results.\nBefore 2015, all near-linear time algorithms for exact\nedge connectivity reduced to computing a minimum 2-respecting cut of\na spanning tree [36 ###reference_b36###]. This problem involves setting up a sophisticated dynamic programming solution, and the question of whether this solution admits fast dynamic algorithms remains a notoriously\nhard open problem. For the somewhat easier problem of maintaining a minimum 1-respecting cut of a spanning tree [13 ###reference_b13###, 3 ###reference_b3###, 56 ###reference_b56###], state-of-the-art dynamic algorithms allow us to solve the problem efficiently.\nIn fact, this is used as a key subroutine in Thorup\u2019s algorithm [56 ###reference_b56###]\nthat -approximates edge connectivity in worst-case update time.\nIn a breakthrough work, Kawarabayashi and Thorup [38 ###reference_b38###] in 2015\nshowed a largely different approach to tackling the edge connectivity problem. Their key insight\nwas to introduce a notion of sparsification for edge connectivity in a subtle way: given an undirected, unweighted -vertex , one can contract\nedges of and obtain a graph such that (i) has only edges and (ii) \npreserves all non-singleton minimum cuts of .222A non-singleton minimum cut is a minimum cut where both sides of the cut contain at least vertices. Throughout, we call the graph a non-singleton minimum cut sparsifier (abbrv. NMC sparsifier). Since maintaining singleton\nminimum cuts boils down to maintaining the minimum degree of , and the latter can be easily achieved as undergoes edge updates, we can focus our attention to designing fully dynamic algorithms for maintaing the NMC sparsifier .\nIn the insertions-only setting, Goranci, Henzinger, and Thorup [24 ###reference_b24###] observed that\nan NMC sparsifier interacts well with edge insertions, as it satisfies a certain composability property. Specifically, they showed that given an NMC sparsifier of a graph and an edge insertion to , the graph remains an NMC sparsifier of . This was in turn combined with Henzinger\u2019s insertions-only algorithm [29 ###reference_b29###] for maintaining small edge connectivity and -connectivity certificates. Periodically invoking a static algorithm for computing NMC sparsifier in a black-box manner then led to a dynamic algorithm with poly-logarithmic amortized update time per edge insertion.\nWe may be tempted to employ the same approach for handling edge deletions. However, a short afterthought reveals that the crucial composability property we used for edge insertions completely fails for edge deletions. This suggests that restricting to edge deletions does not seem to help in the dynamic NMC sparsifier problem, so in this work we refocus our attention to the fully dynamic setting.\nWe devise two new fully dynamic NMC sparsifier algorithms which lead us to Theorems 1.1 ###reference_theorem1### and 1.2 ###reference_theorem2###,\nrespectively. The first one is randomized and is based on a dynamic variant of the random -out contraction technique leveraged by Ghaffari, Nowicki, and Thorup [22 ###reference_b22###] (Section 3 ###reference_###), whereas the second one is deterministic and builds upon the expander decomposition-based approach for computing edge connectivity by Saranurak [53 ###reference_b53###] (Section 4 ###reference_###). 
We note that the original construction of NMC sparsifiers [38 ###reference_b38###] is already quite involved in the static setting and seems difficult to adapt in the dynamic setting. In the paragraphs below, we give a succinct summary of the technical ideas behind both of our algorithms.\nKey to our randomized algorithm for dynamically maintaining an NMC sparsifier is the following construction: given a graph , for each vertex , sample two incident edges to independently, with replacement, and contract them to obtain the graph , which we call a random -out contraction of . Despite the fact that is known to have only vertices [22 ###reference_b22###], where is the minimum degree of , the number of edges in could potentially still be large, say , and thus inefficient for our purposes. The main technical component of our dynamic algorithm is to efficiently maintain a sparse -connectivity certificate of . Towards achieving this goal, we have to deploy a variety of algorithmic tools from sequential, parallel, and streaming algorithms, namely (i) sequential and parallel constructions of -connectivity certificates [46 ###reference_b46###, 10 ###reference_b10###], and (ii) constructing spanning forests in sub-linear time using linear -sampling sketches [9 ###reference_b9###, 33 ###reference_b33###]. A more detailed description of the algorithm can be found in Section 3 ###reference_###.\nOur deterministic algorithm follows the now-widespread and powerful algorithmic approach of employing expander decompositions for solving graph-based optimization problems. At a high level, an expander decomposition is a partitioning of a graph into well-connected clusters, whose expansion is controlled by a parameter , such that there are few inter-cluster edges left, say roughly . If , then Saranurak [53 ###reference_b53###] recently showed that contracting a carefully chosen vertex subset of each expander in the decomposition leads to a NMC sparsifier . Our main technical contribution is a simple, deletions-only algorithm for maintaining an expander decomposition (based on expander prunning [54 ###reference_b54###]), which in turn leads to a deletions-only algorithm for maintaining the NMC sparsifier . While expander pruning has been already used for dynamically maintaining other graph-based properties [25 ###reference_b25###, 5 ###reference_b5###], we believe that our construction is one of the simplest and may prove useful in future applications. We extend our deletions-only NMC algorithm to a fully dynamic one by keeping edge insertions \u201con the side\u201d and rebuilding periodically. Finally, for achieving our claimed sub-linear update time, our NMC sparsifier algorithm is run in \u201cparallel\u201d with the exact fully dynamic edge connectivity algorithm of [56 ###reference_b56###] which returns correct answers only for small edge connectivity. For further details we point the reader to Section 4 ###reference_###."
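The insertions-only composability property of [24] described above is easy to render in code. The sketch below is a minimal illustration under the assumption that the NMC sparsifier is represented as a contraction, i.e., a vertex-to-supervertex map plus a multigraph of surviving edges; the class name ContractedSparsifier is made up for this example.

```python
# Composability under insertions: inserting (u, v) into G only requires
# inserting the mapped edge into the contracted sparsifier H.
from collections import Counter

class ContractedSparsifier:
    def __init__(self, supervertex_of, edges):
        self.super = dict(supervertex_of)        # vertex -> supervertex id
        self.edges = Counter(                    # multigraph edge counts
            (min(self.super[u], self.super[v]), max(self.super[u], self.super[v]))
            for u, v in edges if self.super[u] != self.super[v]
        )

    def insert(self, u, v):
        a, b = self.super[u], self.super[v]
        if a != b:                               # edges inside a contracted set
            self.edges[(min(a, b), max(a, b))] += 1   # vanish in H anyway

# Vertices 0..3; {2, 3} contracted into supervertex 2.
H = ContractedSparsifier({0: 0, 1: 1, 2: 2, 3: 2}, [(0, 1), (1, 2), (3, 0)])
H.insert(0, 2)   # H plus the mapped edge sparsifies G plus (0, 2)
print(dict(H.edges))  # {(0, 1): 1, (1, 2): 1, (0, 2): 2}
```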
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Related Work",
21
+ "text": "The study of approximation algorithms for the fully dynamic edge connectivity problem was initiated by Karger [35 ###reference_b35###] who gave a randomized algorithm that maintains a to edge connectivity in expected amortized time per edge operation. Karger and Thorup [57 ###reference_b57###]\nshowed a fully dynamic algorithm that -approximates\nedge connectivity in amortized updated time. Thorup [56 ###reference_b56###] improved the approximation factor to at the cost of increasing the update time to . However, his running time guarantees are worst-case instead of amortized.\nPrior to our work, all known fully dynamic algorithms for exactly computing the edge connectivity of take sub-linear time only when is small. In the same work [56 ###reference_b56###], Thorup also showed an exact fully dynamic algorithm with \nworst-case update time, which is sub-linear whenever .333Nevertheless, Thorup\u2019s result does not assume that \nis an undirected, unweighted graph. For being a small constant, edge connectivity can be maintained in amortized update time. Specifically, there were a series of refinements in the literature for designing fully dynamic algorithms for graph connectivity (i.e., checking whether ) [13 ###reference_b13###, 28 ###reference_b28###, 31 ###reference_b31###, 33 ###reference_b33###, 48 ###reference_b48###, 7 ###reference_b7###] and -edge connectivity (i.e., checking whether ) [14 ###reference_b14###, 30 ###reference_b30###, 31 ###reference_b31###, 32 ###reference_b32###]. When the underlying dynamic graph is guaranteed to remain planar throughout the whole sequence of online updates, Lacki and Sankowski [39 ###reference_b39###] gave an algorithm with worst-case update time per operation.\nPartially dynamic algorithms, i.e., algorithms that are restricted to either edge insertions or deletions only, have also been studied in the context of exact maintenance of edge connectivity. Henzinger [29 ###reference_b29###] designed an insertions-only algorithm with amortized update time. Recently, Goranci, Henzinger, and Thorup\n[24 ###reference_b24###] showed how to improve the update time to , thus removing the dependency on edge connectivity from the running time.\nTo summarize, all previous dynamic edge connectivity algorithms either maintain an approximation to , require that is small, handle edges insertions only, or are restricted to special family of graphs such as planar graphs. Hence, our results are the\nfirst fully dynamic exact edge connectivity algorithms that achieve sub-linear update times on general graphs."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Preliminaries",
27
+ "text": "Let be an -vertex, -edge graph. For a set the volume of in is defined as , where denotes the degree of in . Let denote the minimum degree in . A cut is a subset of vertices where . A cut is non-singleton iff . For two disjoint sets , let be the set of edges with one endpoint in and the other in . Let . The edge connectivity in , denoted by , is the cut that minimizes .\nIt is a well-known fact that edge connectivity can be computed in near-linear time on the number of edges.\nLet be a weighted, undirected graph with edges. There is an algorithm that computes the edge connectivity of in time."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Randomized Algorithm with Update Time",
33
+ "text": "In this section we prove Theorem 1.1 ###reference_theorem1###. Our algorithm requires several tools from different works and we review them below."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "Algorithmic Tools",
39
+ "text": "Let be a undirected simple graph. Let be a\nrandom -out subgraph of which is obtained from using the following procedure\nSet , where .\nFor each :\nSample from two incident\nedges , independently, with replacement.\nAdd and to .\nThe graph obtained\nby contracting all edges of is called a random -out\ncontraction. Ghaffari, Nowicki and Thorup [22 ###reference_b22###] showed that a random 2-out contraction reduces the number of nodes to whp, while preserving any fixed non-singleton nearly minimum cut with constant probability, as in the theorem below.\nA\nrandom -out contraction of a graph with vertices and minimum\ndegree has vertices, with high probability,\nand preserves any fixed non singleton minimum cut,\nfor any constant , with some constant probability\n.\nGiven a graph , a -connectivity certificate of is a subgraph of that preserves all cuts of size at most . Concretely, for any vertex set ,\n. Nagamochi and Ibaraki [46 ###reference_b46###] designed a sequential algorithm for computing a sparse -connectivity certificate in linear time. Below we shall review an algorithm that does not run in linear time but it\u2019s simpler and suffices for our purposes.\nInput: A graph with vertices, and\na parameter . \nOutput: A -connectivity certificate of .\nSet .\nFor :\nFind a spanning forest of .\nSet .\nReturn .\nGiven a graph with vertices\nand an integer parameter , Algorithm 1 ###reference_### returns\na -connectivity certificate of containing edges.\nObserve that Algorithm 1 ###reference_### for constructing a -connectivity certificate computes nested\nspanning forests. When is large, this long chain of dependency\nis too inefficient in the dynamic setting. Although Algorithm 1 ###reference_### will prove useful at some point in our final construction, we need to bypass this dependency issue. To this end, we will exploit an alternative\n-connectivity certificate construction by Daga et al. [10 ###reference_b10###] which was\ndeveloped in the context of distributed and parallel algorithms. We describe this construction in Algorithm 2 ###reference_###. The key advantage of this algorithm\nis that it reduces the -connectivity certificate problem to \ninstances of -connectivity certificate where . This suggests that we can afford using algorithms even with polynomial dependency on since is logarithmic on the number of nodes.\nInput: A graph with vertices, and\na parameter . \nOutput: A -connectivity certificate of .\nChoose where is a big enough constant and is an integer. Let .\nRandomly color each edge of using colors from .\nLet be a set of edges with color . Let .\nFor :\nApply Algorithm 1 ###reference_### to compute a -connectivity certificate of .\nReturn .\nGiven\na graph with vertices and an integer parameter ,\nAlgorithm 2 ###reference_### returns a subgraph of such that (a) contains edges, and (b) is a -connectivity\ncertificate of with high probability.\nA well-known tool in the streaming algorithms literature is the -sampling technique. This tool is particularly useful in the context of the following natural problem: given a -dimensional vector , we would like to construct a data structure supporting the following:\nif , it reports null,\nif , then it returns some with high probability,\nwhere consists of all non-zero entries of . In our concrete application, will correspond to a vertex of the graph and we will store the edges incident to in such a data structure, i.e., will be the edges incident to . 
By itself such a data structure is trivial: just keep an adjacency list for every vertex . However,\nfor the concrete implementation of the above data structure, we use a linear function, widely reffered to as a linear sketching transformation (aka linear sketch) , such that given for and for , we get that gives a data structure that (1) returns an edge incident to or (but not to both) and (2) can be computed in time.\nMore formally, in the theorem below we present the main result relating the -sampling technique to the above data structure problem and focusing on running time instead of space guarantees.\nFor any dimension parameter , there is a randomized algorithm for constructing a representation of a linear sketching transformation \nin time, such that for any vector ,\nwe can compute in time, and\ngiven , we can obtain a non-zero entry of in time with high probability.\nThe sketch can be represented using simple hash functions (see [9 ###reference_b9###]) and avoids explicitly representing as a matrix of size .\nThis is why it only takes to initialize .\nWe call an -sampling sketch\nof . Our algorithm will maintain the -sampling sketch\nof the row-vectors of the signed vertex-edge incidence matrix of a graph , which is defined as follows. Given a graph with vertices, the signed vertex-edge incidence\nmatrix of is\nLet denote the -th row of\n. We observe that one can efficiently compute and update the sketches for all .\nGiven a -vertex graph with edges, there is an algorithm to compute a linear transformation for all in\n time. Upon an edge insertion or deletion in , one can update the sketches for all in time.\nWhen computing ,\nwe only spend time by Theorem 3.4 ###reference_theorem4### (1).\nSince the incidence matrix contains only non-zeros, the first claim of the proposition follows. The second claim holds since (i) each edge update affects only two entries of and (ii) is a linear transformation.\nConcretely, let be the updated edge and let and be the elementary unit vectors with the non-zero entry only at and , respectively.\nWe start by computing and in time, and then proceed to evaluating and in time, where the sign depends on whether is inserted or deleted.\n\u220e\nThe sketches for all are particularly useful since for any given any set , they allow us to obtain an edge crossing the cut in time. This is better than the trivial approach which scans through all edges incident to and takes time in the worst case.\nMore formally, let for any vertex set . Now, by Theorem 3.4 ###reference_theorem4### each can be queried in time. Using the linearity of , we compute in time. Observing that non-entries of correspond exactly to the edges crossing the cut , we can obtain one of these edges from in time. This observation has proven useful for building a spanning forest of a graph in time444Note that this sub-linear in the size of the graph since has edges when give access to the sketches , . More precisely, it is implicit in previous works [2 ###reference_b2###, 33 ###reference_b33###] that if independent copies of the sketches are maintained, then a spanning forest can be constructed using Boruvka\u2019s algorithm, as summarized in the theorem below.\nLet be an -vertex graph. Let \nbe linear transformations for the -sampling sketch from\nTheorem 3.4 ###reference_theorem4### generated independently. Let be the sketches for all vertices and all . 
Then there is an algorithm to construct a spanning forest of \nin time with high probability.\nSuppose our goal is to maintain a data structure for a (not necessarily spanning) forest of a graph . We next review a result that allows us to a build a data structure on such that for any connected component of , we can compute in time, which is faster than the previous approach that yielded an time algorithm by explicitly summing . This construction is implicit in [33 ###reference_b33###] and can be alternatively obtained by combining Theorem 4.16 of [47 ###reference_b47###] with Theorem 3.4 ###reference_theorem4### and Proposition 3.5 ###reference_theorem5###.\nLet be a linear transformation for the -sampling sketch from Theorem 3.4 ###reference_theorem4###. There is a data structure that maintains an -vertex graph and a (not necessarily spanning) forest on the same vertex set that supports the following operations\ninsert or delete an edge in in time,\ninsert or delete an edge in (as long as remains a forest) in time, and\ngiven a pointer to a connected component of , return , where is the -th row of the incidence matrix of , in time."
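The nested-spanning-forest template of Algorithm 1 is short enough to spell out. The sketch below is a straightforward sequential rendering with union-find; it is illustrative only and makes no attempt at the linear-time guarantees of Nagamochi and Ibaraki [46].

```python
# Algorithm 1 as code: the union of k edge-disjoint spanning forests is a
# k-connectivity certificate (it preserves all cuts of size at most k).
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def k_connectivity_certificate(n, edges, k):
    remaining, cert = list(edges), []
    for _ in range(k):
        dsu, forest, rest = DSU(n), [], []
        for e in remaining:
            (forest if dsu.union(e[0], e[1]) else rest).append(e)
        cert += forest                      # peel off one spanning forest
        remaining = rest
    return cert                             # at most k*(n-1) edges

K4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(len(k_connectivity_certificate(4, K4, 2)))  # 5 of K4's 6 edges suffice
```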
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "The Algorithm",
45
+ "text": "In this section, we show an algorithm for maintaining the edge connectivity of an -vertex dynamic graph undergoing edge insertions and deletions in worst-case update time with high probability, i.e., we prove Theorem 1.1 ###reference_theorem1###.\nLet be a graph undergoing edge insertions and deletions. We make the following two simplifying assumptions\nthe edge connectivity is attained at a non-singleton minimum cut of , and\nthe minimum degree of is between some fixed range , and we shall construct a data structure depending on .\nWe lift (1) by observing that if this assumption doesn\u2019t hold, then we have that the edge connectivity , and the minimum degree of can be maintained in a straightforward way. Assumption (2) can be lifted by constructing data structures\nfor the ranges for , and query the data structure for range whenever .\nBefore we proceed further, recall that is a random 2-out subgraph and is the random 2-out contraction of . By Theorem 3.1 ###reference_theorem1###, the minimum cut is preserved in with some positive constant probability. We can boost this to a high probability bound by repeating the whole algorithm times. As contains vertices\nwith high probability, throughout we will also assume that this is the case.\nDespite the size guarantee on the vertex set of , maintaining and running a static edge connectivity algorithm on is not enough as the number of edges in could potentially be large. This leads us to the main component of our data structure that instead maintains a -connectivity certificate of while supporting updates as well as querying in update time.\nLet be a dynamic graph and let be the random -out contraction of . Let be an integer parameter such that . There is a data structure that supports edges updates to (and thus to ) and gives query access to a -connectivity certificate of containing edges with high probability. The updates can be implemented in worst-case time,\nthe queries in worst-case time.\nWe claim that Theorem 3.8 ###reference_theorem8### proves Theorem 1.1 ###reference_theorem1###. To see this, recall that preserves all cuts of size at most in by the definition of -connectivity certificate. By our assumption , this implies that the minimum cut is preserved in and suggests that we can simply run a static edge connectivity algorithm on top of to find in time. Therefore, the rest of this section is devoted to proving Theorem 3.8 ###reference_theorem8###.\nThe first component of the data-structure is to maintain (i) a random 2-out subgraph of and (ii) a spanning forest of . By the definition of , (i) can be implemented in worst-case update time per edge update. For (ii), we use the dynamic spanning forest data structure of Kapron, King and Mountjoy [33 ###reference_b33###] that guarantees a worst-case update time. Since for each edge update to , there are only edge updates to , this can be implemented in worst-case time per edge update as well.\nWe use the following two-step approach to prove our result\nreducing the dynamic -connectivity certificate problem to the dynamic -connectivity certificate problem via the template presented in Algorithm 2 ###reference_###, and\nsolving the dynamic -connectivity certificate problem using the linear sketching tools developed in the previous section.\nWe follow the template of Algorithm 2 ###reference_### from Theorem 3.3 ###reference_theorem3###:\nSet and let and \nbe defined as in Algorithm 2 ###reference_###. Then, color each edge of by randomly choosing a color from . 
Let \nbe the subgraph of containing edges of color . We observe that\nall graphs can be maintained explicitly together with with in time per edge update. Similarly, let be the subgraph of the random -out contraction containing edges of color , i.e., .\nOur goal is to not explicitly maintain , but instead build a dynamic data structure with \nworst-case update time per edge update in that gives query access to a -connectivity certificate of in \ntime with high probability.\nWe claim that this suffices to prove Theorem 3.8 ###reference_theorem8###. To see this, note that for each we can query in time. Then we simply union all these certificates to compute in time, which bounds the query time. To bound the update time, note that the worst-case cost for maintaining these data structures is . By Theorem 3.3 ###reference_theorem3### it follows that (i) is indeed a -connectivity certificate of and (ii) the size of is at most , which completes the proof of Theorem 3.8 ###reference_theorem8###.\nLet us fix a color . Recall that our goal is to obtain a -connectivity\ncertificate of in time per query and per edge update.\nRecall that , which is the graph containing all color- edges of , is explicitly maintained. Let be a spanning\nforest of the random 2-out subgraph which we also maintain explicitely as discussed above. For all ,\nwe independently generate the linear transformation for a -sampling\nsketch using Theorem 3.4 ###reference_theorem4###. For each index pair\n, we build a data structure 555Here we deliberately omit the subscript from since the color was fixed in the beginning of the paragraph and this omission simplifies the presentation. and maintain whenever or changes\nusing Theorem 3.7 ###reference_theorem7###. Since for each edge update to , there are only edge updates to and , respectively, we can maintain all \u2019s in worst-case\nupdate time per edge update to . This completes the description of handling edge updates and its running time analysis.\nIt remains to\nshow how to query a -connectivity certificate \nof using in \ntime.\nThe main idea of our construction is the following simple but crucial observation. Each vertex \nin corresponds to a tree of as is contracted into . The latter holds by the definition of . Therefore, if we let and \ndenote the incidence matrices of and , respectively,\nthen we have . Now, using the data structures \nwe can retrieve for all components\n of , which in turn gives us \nfor all . Note that the total time to query for all\n and all is .\nNow, the sketches \nfor all and all allow us to compute -connectivity certificate\n of as shown below. Note that the algorithms modifies the sketches but we may revert the sketches back to their initial state after computing and returning .\nWe finally explain the procedure for querying . To this end, we follow the template of Algorithm 1 ###reference_###\nfrom Theorem 3.2 ###reference_theorem2###. Set to be\nthe temporary graph that we will work with. Consider the -th round of the algorithm where .\nWe compute a spanning forest of using the\nsketches \non vertices of in \ntime using Theorem 3.6 ###reference_theorem6###. Next, we update \nand also update the sketches for all\n so that they maintain information of graph \nand not of . This takes time because\n contains edges. This ends the -th round. After all rounds have been completed, we return\n which is a -connectivity certificate\nby Theorem 3.2 ###reference_theorem2###. Since there are iterations, the total query time is , what we wanted to show. 
Note that algorithm internally stores the edge connectivity value only and not the edges on a cut that attains the edge connectivity. Therefore, the adversary cannot reveal anything useful from querying this information, and thus it follows that our algorithm works against an adaptive adversary."
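The query procedure above rests on one identity: summing the signed incidence rows over a vertex set S cancels every edge with both endpoints in S, leaving exactly the edges of the cut. Over GF(2) that sum is a symmetric difference, which the toy sketch below emulates with Python sets in place of actual l0-sampling sketches.

```python
# Cancellation trick behind sketch-based cut queries: internal edges of S
# appear twice in the XOR and vanish; only edges crossing (S, V \ S) remain.
def incidence_row(v, edges):
    return {e for e in edges if v in e}

def crossing_edges(S, edges):
    acc = set()
    for v in S:
        acc ^= incidence_row(v, edges)   # symmetric difference = GF(2) sum
    return acc                           # an l0-sampler would return one element

E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(crossing_edges({0, 1}, E))  # the three edges leaving {0, 1}
```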
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Deterministic Algorithm with Update Time",
51
+ "text": "In this section we prove Theorem 1.2 ###reference_theorem2###. Our algorithm requires several tools from different works and we review them below."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "Algorithmic Tools",
57
+ "text": "The key to our approach is the notion of expander graphs. The conductance of an unweighted, undirected graph is defined by\nWe say that a graph is -expander if .\nNext, we introduce the notion of expander decomposition.\nLet be an undirected, unweighted graph and let be a parameter. A -expander decomposition of is a vertex-disjoint partitioning of such that\n, and\nfor each , .\nWe now review an efficient algorithm for finding expander decompositions.\nLet be an undirected, uniweighted graph and let be a parameter. There is an algorithm Expander that in time finds a -expander decomposition.\nThe next result allows us to turn static expander decompositions into their dynamic variants, as shown in Section 4.2 ###reference_###\nLet be an undirected, unweighted -expander. Given an online sequence of edge deletions in , there is an algorithm that maintains a pruned set satisfying the following properties; let and be the graph and the set after the -th deletion. For ,\nand ,\nand , and\nis a -expander.\nThe total time for updating is .\nOur algorithm relies on sparsifiers that preserve non-singleton cuts of simple graphs.\nLet be an undirected, unweighted graph. A multi-graph is a non-singleton minimum cut sparsifiers (abbrv. NMC sparsifier) of if preserves all non-singleton minimum cuts of , i.e., for all cuts with and ,\nWe say that is of size if .\nKawarabayashi and Thorup [38 ###reference_b38###] showed that undirected, simple graphs admit NMC sparsifiers of size . They also designed a deterministic time algorithm for computing such sparsifiers. We will construct NMC sparsifiers that are based on expander decompositions, following the work of Saranurak [53 ###reference_b53###], as we can turn them into a dynamic data structure. To do so, we need to formally define the procedures of trimming and shaving vertex subsets of a graph.\nLet by any vertex subset in . Define to be the set obtained by the following procedure: while there exists a vertex with , remove from . Let .\nObserve that the trimming procedure recursively removes a vertex with few connections inside the current set while the shaving procedure removes all vertices with few connections inside the initial set . Saranurak [53 ###reference_b53###] showed that we can construct NMC sparsifiers by applying triming and shaving to each cluster in the expander decomposition. We formally summarize his construction in the lemma below.\nLet be an undirected, simple graph with edges, and let , where is the minimum degree in and some positive constant. Let\nLet be the graph obtained from by contracting every set . Then is an NMC sparsifier of size for . The running time for computing is ."
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "Decremental Expander Decomposition",
63
+ "text": "In this section we show that the expander pruning procedure from Theorem 4.3 ###reference_theorem3### allows us to design a dynamic algorithm for maintaining an expander decomposition under edge deletions. While the theorem below is already implicit in other works leveraging the power of expander decompositions [25 ###reference_b25###, 5 ###reference_b5###] (albeit with slightly different guarantees and variations depending on the specific application), here we give a simple, self-contained version that suffices to solve the edge connectivity problem.\nGiven an unweighted, undirected graph with edges and a parameter , there is a decremental algorithm that supports an online sequence of up to edge deletions and maintains a -expander decomposition in total update time.\nWe initialize our data structure by (i) constructing a -expander decomposition of the initial graph using Theorem 4.2 ###reference_theorem2###, where is a parameter, and (ii) starting a pruning data-structure for each expander using Theorem 4.3 ###reference_theorem3###. We also maintain a counter that denotes the number of edge deletions inside the cluster . Initially, for each . If the total number of edge deletions exceeds , our data-structure terminates.\nWe next show how to handle edge deletions. Consider the deletion of edge from . If is an inter-cluster edge in , then we simply remove it from since its removal does not affect the expansion of any of the clusters in . Otherwise, is an intra-cluster edge, and let be the unique cluster that contains . First, we increase the counter by . Next we compare the number of deletions in the cluster relative to the number of deletions the pruning procedure can handle.\nConcretely, if , we pass the deletion of to the pruning data structure . Let , resp., be the pruned set that maintains after, resp. before the deletion . We define to be the set of singleton clusters, and then replace in with . The last step can be thought of as including every vertex in as a singleton cluster in and removing these vertices from the current expander .\nHowever, if , then we declare every vertex in the current cluster to be a singleton cluster. Specifically, we remove from and for each , add to . Note that the latter implies that all vertices that belonged to the original cluster are included as singletons in the current expander decomposition. This completes the description of the procedure for deleting an edge.\nWe next show that the above algorithm correctly maintains an expander decomposition under edge deletions while paying a small constant factor in the expansion guarantee of each cluster and in the number of inter cluster edge.\nThe decremental algorithm maintains a -expander decomposition.\nLet be expander decomposition that the algorithm maintains for the current graph .\nOur first goal is to show that for each , . Observe that by construction each cluster in can either be (i) a singleton, (ii) a pruned cluster (i.e., a cluster that is formed by removing vertices from the original cluster) or (iii) an original cluster from the initial expander decomposition. If a cluster is a singleton, then the expansion bound trivially holds. If we have a type (ii) cluster, then by expander pruning (Theorem 4.3 ###reference_theorem3###, Property 3 ###reference_i3###), it follows that , where is the current graph. Finally, for a type (iii) cluster, the initial expander decomposition (Theorem 4.2 ###reference_theorem2###) gives that . 
Combining the above cases, leads to the expansion bound we were after.\nWe now bound the number of inter cluster edges in . Recall that initially, the expander decomposition has at most inter-cluster edges (Theorem 4.2 ###reference_theorem2###). During the handling of edge deletions, the algorithm introduces new inter-cluster edges when vertices from the pruned set are included as singletons. Thus, our ultimate goal is to bound the volume of the pruned set with the number of edge deletions in a cluster. To this end, let be a cluster in . We distinguish two cases. If then, then by expander pruning (Theorem 4.3 ###reference_theorem3###, Property 2 ###reference_i2###) we have that the maintained pruned set satisfies . However, if , then by construction, the pruned set is the entire original cluster . By rearranging the inequality in the last condition, we get . Combining the above bounds, we get that at any time during our decremental algorithm, the volume of the maintained pruned set of a cluster satisfies .\nSumming this over all clusters in the expander decomposition , we have the number of the new inter-cluster edges is bounded by\nwhere the penultimate inequality follows from the fact the number of edge deletions to is bounded by by the assumption of the lemma. Thus, the number of inter-cluster edges increases by a constant multiplicative factor, which concludes the proof of the lemma.\n\u220e\nWe next bound the running time of our decremental algorithm.\nThe decremental expander decomposition runs in total update time.\nThe running time of the algorithm is dominated by (1) the time required to compute the initial expander decomposition and (2) the total time to perform expander pruning on each cluster of this decomposition. By Theorem 4.2 ###reference_theorem2###, (1) is bounded by . By Theorem 4.3 ###reference_theorem3###, the pruning process on a cluster can be implemented in . Summing over all the clusters in the expander decomposition and recalling that they form a partition of , we get that the running time of (2) is bounded by\nBringing together (1) and (2) proves the claim of the lemma.\n\u220e"
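The deletion handler just described is mostly bookkeeping, as the following sketch shows. The pruning oracle is a stub standing in for the data structure of Theorem 4.3, and the per-cluster budgets abstract the threshold comparison in the proof; both are assumptions of the sketch.

```python
# Decremental expander decomposition: counters plus a pruning fallback.
class DecrementalExpanderDecomposition:
    def __init__(self, clusters, budget_of, prune_oracle):
        self.clusters = {i: set(C) for i, C in enumerate(clusters)}
        self.cluster_of = {v: i for i, C in self.clusters.items() for v in C}
        self.deleted = {i: 0 for i in self.clusters}
        self.budget = dict(budget_of)        # how many deletions pruning absorbs
        self.prune = prune_oracle            # (cluster_id, edge) -> pruned set

    def delete_edge(self, u, v):
        cu, cv = self.cluster_of[u], self.cluster_of[v]
        if cu != cv:
            return                           # inter-cluster: expansion unaffected
        self.deleted[cu] += 1
        if self.deleted[cu] <= self.budget[cu]:
            pruned = self.prune(cu, (u, v))  # prune a small set out of the cluster
        else:
            pruned = set(self.clusters[cu])  # over budget: dissolve everything
        for w in pruned:                     # pruned vertices become singletons
            self.clusters[cu].discard(w)
            self.cluster_of[w] = ('singleton', w)

D = DecrementalExpanderDecomposition(
    clusters=[{0, 1, 2, 3}], budget_of={0: 2},
    prune_oracle=lambda cid, e: set())       # stub: "the expander absorbs it"
D.delete_edge(0, 1); D.delete_edge(1, 2); D.delete_edge(2, 3)  # third one exceeds budget
print(D.clusters[0], D.cluster_of[3])        # set() ('singleton', 3)
```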
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "Fully Dynamic NMC sparsifier",
69
+ "text": "In this section present a fully dynamic algorithm for maintaining a NMC sparsifier of undirected, simple graphs."
70
+ },
71
+ {
72
+ "section_id": "4.3.1",
73
+ "parent_section_id": "4.3",
74
+ "section_name": "4.3.1 Decremental NMC sparsifier",
75
+ "text": "We start by showing that the decremental expander decomposition (Theorem 4.7 ###reference_theorem7###) almost immediately yields a decremental algorithm for maintaining a NMC sparsifier.\nMore specifically, we show the following theorem in this subsection.\nGiven an unweighted, undirected graph with edges and a parameter satisfying for some positive constant , there is a decremental algorithm that supports an online sequence of up to edge deletions and maintains a NMC sparsifier of size in total update time.\nLet be parameter with for some positive constant . Our data-structure internally maintains an expander decomposition under edge deletions DecExpander (Theorem 4.7 ###reference_theorem7###). Let be the expander decomposition of the initial graph from DecExpander. Let and . We define to be the graph obtained from by contracting every set . As we will shortly see, will correspond to a NMC sparsifier of . This suggests that in order to maintain such a sparsifier under edge deletions we need to efficiently maintain the sets and for every cluster in the current expander decomposition . We achieve this by keeping track of the following counters:\nthe degree of each vertex in , and\nthe degree for all and all , i.e., the degree of vertex restricted to the cluster .\nNote that both degree values can be computed for each vertex in the initial graph by performing a graph traversal.\nNow, consider the deletion of an edge from . We first decrement the value of both counters and by one to account for the deletion of . Then we pass this deletion to the data-structure . This in turn reports a subset of vertices that are pruned out of a cluster due to the deletion of . At this point observe that the decremental expander decomposition algorithm already has updated with respect to . Thus it remains to update the sets and respectively.\nFor each we do the following. First, note that when , we don\u2019t need to do anything since asserts that cannot belong to the contracted set . If , then we remove from and potentially by invoking the subprocedure defined as follows:\nRemove\nSet , and set .\nIf then\nSet , and\nFor every neighbour :\nSet .\nIf then\n\nIf and then\nSet , and .\nProcedure Uncontract simply reverts the operation of contracting into some cluster . It can also be interpreted as adding the vertex together with its incident edges in to the current sparsifier . This completes the description of the algorithm.\nThe next lemma shows that the algorithm maintains a sparsifier that preserves non-singleton minimum cuts exactly.\nThe decremental algorithm correctly maintains a NMC sparsifier of size .\nWe begin by showing that is a NMC sparsifier of some current graph . To this end, let be the graph after the -th deletion and let be the sparsifier after the data-structure has processed the -th deletion. To prove that is a NMC sparsifier of it suffices to show that (i) is -expander decomposition of (ii) , and (iii) . To see why this is true, note that by assumption of Theorem 4.10 ###reference_theorem10###, and apply Lemma 4.6 ###reference_theorem6### with the parameter .\nIf , recall that by construction is a -expander decomposition of the initial graph , and . Since the parameter satisfies , we get the graph obtain by contracting every set is a NMC sparsifier of .\nIf , inductively assume that have been correctly maintained until the -st edge deletion. By Theorem 4.7 ###reference_theorem7###, we already know that is a -expander decomposition of the graph . 
Thus it remains to argue the correctness of and .\nLet be the set of vertices pruned out of a cluster that the data-structure returns upon the -th edge deletion. To prove that the update of and with respect to is correct, by Definition 4.5 ###reference_theorem5###, consider the following invariants:\nfor all .\n.\nFor every vertex with , note that our subprocedure removed from all vertices for which , and thus the invariant (1) holds for the vertices that are left in . Moreover, as already pointed out in [8 ###reference_b8###], is unique, so the order in which vertices are removed does not matter. Similarly, by construction, the subprocedure detects all vertices in that do not satisfy invariant (2). It follows that and are maintained correctly, which in turn implies that and are also correct.\nThe guarantee on the size of the sparsifier follows directly from Lemma 4.6 ###reference_theorem6###.\n\u220e\nWe next study the running time of our algorithm.\nThe decremental algorithm for maintaining a NMC sparsifier runs in total update time.\nThe running time of the algorithm is dominated by (1) the time to maintain a decremental expander decomposition , (2) the total time to maintain and , and (3) the cost of performing vertex uncontractions. By Theorem 4.7 ###reference_theorem7###, (1) is bounded by . To bound (2), we can implement the subprocedure for a vertex in time, excluding the recursive calls to its neighbours. Since the updates from and are decremental (as they consist of either edge deletions or vertex deletions), once a vertex leaves a set or , it can never rejoin. Hence, it follows that (2) is bounded by . Similarly, a vertex can be uncontracted at most once, and this operation can also be implemented in time, giving a total runtime of for (3). Bringing (1), (2), and (3) together proves the claim of the lemma.\n\u220e"
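To make the bookkeeping above concrete, the following Python fragment is a minimal sketch of the two counters and the recursive Uncontract step. It is illustrative only: the threshold `alpha`, the data layout, and the exact contraction invariant stand in for the (stripped) formulas of Definition 4.5, and the `pruned` argument abstracts the vertex set reported by DecExpander.

```python
class NMCSparsifierSketch:
    """deg[v] = degree of v in G, cdeg[v] = degree of v inside its own
    cluster; a vertex stays contracted while cdeg[v] >= alpha * deg[v]
    (alpha is an assumed stand-in for the paper's threshold)."""

    def __init__(self, adj, cluster_of, alpha=0.5):
        self.adj, self.cluster_of, self.alpha = adj, cluster_of, alpha
        self.deg = {v: len(nbrs) for v, nbrs in adj.items()}
        self.cdeg = {v: sum(cluster_of[u] == cluster_of[v] for u in nbrs)
                     for v, nbrs in adj.items()}
        self.contracted = {v for v in adj if self._holds(v)}

    def _holds(self, v):
        return self.cdeg[v] >= self.alpha * self.deg[v]

    def uncontract(self, v):
        # Revert the contraction of v; its intra-cluster edges re-enter
        # the sparsifier, so neighbours whose invariant breaks cascade.
        if v not in self.contracted:
            return
        self.contracted.remove(v)
        for u in self.adj[v]:
            if self.cluster_of[u] == self.cluster_of[v]:
                self.cdeg[u] -= 1
                if u in self.contracted and not self._holds(u):
                    self.uncontract(u)

    def delete_edge(self, u, v, pruned=()):
        # `pruned` abstracts the output of the expander-decomposition layer.
        self.adj[u].remove(v); self.adj[v].remove(u)
        same_cluster = self.cluster_of[u] == self.cluster_of[v]
        for x in (u, v):
            self.deg[x] -= 1
            if same_cluster:
                self.cdeg[x] -= 1
        for x in (u, v):
            if x in self.contracted and not self._holds(x):
                self.uncontract(x)
        for w in pruned:
            self.uncontract(w)
```

Since the counters only decrease and a vertex is uncontracted at most once, the total work matches the amortized accounting in the running-time lemma above.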
76
+ },
77
+ {
78
+ "section_id": "4.3.2",
79
+ "parent_section_id": "4.3",
80
+ "section_name": "4.3.2 Extension to Fully Dynamic NMC sparsifier",
81
+ "text": "We follow a widespread approach in data structures for turning a decremental algorithm into a fully dynamic algorithm and apply it to our problem for maintaining a NMC sparsifier.\nAt a high level, our approach uses a decremental algorithm for maintaining a NMC sparsifier of the graph and handles edge insertions by keeping them \u201con the side\u201d. It crucially relies on the fact that adding an edge to the sparsifier yields a sparsifier for the new graph augmented by that edge. To make sure that the size of the sparsifier remains small after these edge augmentations, we restart our decremental algorithm from scratch after the number of updates exceeds a predefined threshold. This leads to the following result.\nGiven an unweighted, undirected graph with edges and a parameter satisfying for some positive constant , there is a fully dynamic algorithm that maintains a NMC sparsifier of size in amortized time per edge insertion or deletion.\nOur data structure subdivides the sequence of edge updates into phases of length , where , satisfying for some positive constant . Our algorithm maintains\nthe set of edges that represents the edges inserted since the beginning of a phase that have not been subsequently deleted.\nAt the beginning of each phase, we initialize (i) the decremental algorithm DecSparsifier (Theorem 4.10 ###reference_theorem10###) to maintain a NMC sparsifier of the current graph , and (ii) set .\nLet be an edge update to . If is an edge insertion to , we add it to the set . If is deleted from , we consider two cases: If , we simply delete from . If , we pass the deletion of to to update the sparsifier . We maintain as the sparsifier of the current graph. This completes the description of the algorithm.\nWe next show that our fully dynamic algorithm maintains a correct NMC sparsifier at any time.\nThe fully dynamic algorithm correctly maintains a NMC sparsifier of size .\nLet be the current graph, where is the graph at the beginning of a phase, is the set of edges deleted from , and is the of edges inserted since the beginning of a phase that have not been subsequently deleted. Let by the sparsifier our data-structure maintains.\nBy Theorem 4.13 ###reference_theorem13###, we know that is a NMC sparsifier of . We claim that is a NMC sparsifier of . To see this, consider the case when , where . Once proving this simpler case, our general claim follows follows by induction. As is a contraction of (Lemma 4.6 ###reference_theorem6###), there is a vertex mapping assigning nodes that are contracted together to a single node in . We distinguish two cases. If , then increases a non-singleton minimum cut in by at most one. Since the edge is present both in and , it follows preserves all non-singleton minimum cuts of . If , i.e., both endpoints of are contracted into a single vertex in , then we claim that cannot participate in any non-singleton minimum cut in . Suppose for contradiction that the latter holds. Then there exists a non-singleton minimum cut in such that\nwhere the above equality uses the fact that is a cut edge. Since a non-singleton minimum cut can increase by at most when adding a single edge, it follows that is a non-singleton minimum cut in . Since is NMC sparsifier of , we have that must also be non-singleton minimum cut in . Let be the supervertex in containing and . It follows , which contradicts the fact that is NMC sparsifier of .\nTo bound the size of , observe that (1) is of size at any time (Theorem 4.10 ###reference_theorem10###), and (2) . 
Therefore, is of size \n\u220e\nThe fully dynamic algorithm for maintaining a NMC sparsifier runs in amortized time per edge insertion or deletion.\nThe total update to maintain a decremental sparsifier is (Theorem 4.10 ###reference_theorem10###) under the condition that the number of deletions is smaller then . Our data-structure makes sure that the number of updates within a phase never exceeds . Charging the total update time to these updates, we get an amortized update time of .\n\u220e"
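The phase-based reduction admits a compact sketch. The Python fragment below is a hedged illustration rather than the paper's exact procedure: `make_decremental` stands in for DecSparsifier of Theorem 4.10, `phase_len` for the (unspecified here) phase length, and the `edges()`, `delete()`, and `sparsifier_edges()` methods are assumed interfaces.

```python
class FullyDynamicSparsifierSketch:
    """Decremental-to-fully-dynamic reduction: insertions are kept "on the
    side" in a set I and appended to the sparsifier; the decremental
    structure is rebuilt once a phase of `phase_len` updates ends."""

    def __init__(self, edges, phase_len, make_decremental):
        self.phase_len, self.make = phase_len, make_decremental
        self._start_phase(set(edges))

    def _start_phase(self, edges):
        self.dec = self.make(edges)      # restart decremental structure
        self.inserted = set()            # I: inserted, not yet deleted
        self.updates = 0

    def _tick(self):
        self.updates += 1
        if self.updates >= self.phase_len:      # phase over: rebuild
            self._start_phase(self.dec.edges() | self.inserted)

    def insert(self, e):
        self.inserted.add(e)    # adding e to H keeps it a NMC sparsifier
        self._tick()

    def delete(self, e):
        if e in self.inserted:
            self.inserted.remove(e)      # e never entered the sparsifier
        else:
            self.dec.delete(e)           # pass the deletion down
        self._tick()

    def sparsifier(self):
        # H plus at most `phase_len` side edges is still a NMC sparsifier.
        return self.dec.sparsifier_edges() | self.inserted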
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "The Algorithm",
87
+ "text": "In this section we show an algorithm for Theorem 1.2 ###reference_theorem2###. Our main idea is to run in \u201cparallel\u201d a variant of the fully algorithm for maintaining a NMC sparsifier (Theorem 4.13 ###reference_theorem13###) and the exact fully dynamic edge connectivity algorithm due to Thorup [56 ###reference_b56###] which is efficient whenever the edge connectivity is polylogarithmic or a small polynomial in the number of vertices. We also maintain a carefully chosen threshold edge connectivity value which tells us when to switch between the two algorithms.\nWe start by observing that the fully dynamic algorithm for maintaining a NMC sparsifier of a graph (Theorem 4.13 ###reference_theorem13###) gives the following simple algorithm for edge connectivity: (i) maintain the minimum degree of the current graph , (ii) after each edge update compute on the graph (Theorem 2.1 ###reference_theorem1###), and (iii) set the edge connectivity of the current graph to be . The following corollary is an immediate consequence of Theorem 4.13 ###reference_theorem13###.\nGiven an unweighted, undirected graph with edges and a parameter , there is a fully dynamic algorithm for maintaining an edge connectivity estimate in amortized time per edge insertion or deletion. If , for some positive constant , then the edge connectivty estimate is correct, i.e., .\nNext we review the result of Thorup [56 ###reference_b56###] concerning efficient maintenance of small edge connectivity.\nGiven an unweighted, undirected graph with edges, and a parameter , there is a fully dynamic algorithm for maintaining an edge connectivity estimate in worst-case time per edge insertion or deletion. If , then the edge connectivity estimate is correct, i.e., .\nWe now have all the tools to present our sub-linear fully dynamic edge connectivity algorithm, which proceeds as follows. Let be a threshold value on the edge connectivity to be determined shortly, where is a parameter. We run\n(1) the fully dynamic algorithm from Theorem 4.17 ###reference_theorem17### with parameter , and\n(2) the fully dynamic algorithm from Corollary 4.16 ###reference_theorem16### with parameter .\nWe extend both algorithms and to perform a test on how edge connectivity of the current graph compares to the threshold value after the algorithm that is currently being used to answer queries has processed an edge update. These extensions allow us to switch between these two algorithms, so the queries we answer regarding are correct.\nFirst, observe that both algorithms internally explicitly maintain . We proceed as follows\nSuppose is currently being used to answer queries. If after an update operation operation, then we do not switch. Otherwise (i.e., ), we switch to for the next operation.\nSuppose is currently being used to answer queries. If after an update operation, then we do not switch. Otherwise (i.e., ), we switch to for the next operation.\nWe next prove the correctness. It suffices to verify that our parameter requirements from Theorem 4.17 ###reference_theorem17### and Corollary 4.16 ###reference_theorem16### are satisfied whenever we use one of the algorithms to answer queries. Let be the current graph. Note that if was at most before an update operation, we used algorithm , which works correctly, even then reaches after that operation. If was at least before the operation, we use which works correctly even if drops to after the operation as . In either case we have that\n. 
This completes the correctness proof.\nThe running time is bounded as follows. By Theorem 4.17 ###reference_theorem17### and , (1) supports edge updates in worst-case time. By Corollary 4.16 ###reference_theorem16###, (2) guarantees a amortized time per update. Bringing these running times together, we get that the amortized time per edge update is\nBalancing the two terms in the above expression, we get , which in turn implies that the amortized update time of our algorithm is . This completes the proof of Theorem 1.2 ###reference_theorem2###."
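The switching logic itself can be sketched in a few lines. In the hedged fragment below, `small` stands in for Thorup's exact algorithm (fast while the connectivity is small), `sparse` for the sparsifier-based estimate of Corollary 4.16, and `tau` for the balancing threshold; the `update(op, e)` and `connectivity()` methods are assumed interfaces for illustration.

```python
class EdgeConnectivitySwitcherSketch:
    """Run both fully dynamic algorithms in parallel and answer queries
    with whichever one is valid for the current connectivity regime."""

    def __init__(self, small, sparse, tau):
        self.small, self.sparse, self.tau = small, sparse, tau
        self.active = small               # start by answering with `small`

    def update(self, op, e):
        # Both algorithms process every edge update.
        self.small.update(op, e)
        self.sparse.update(op, e)
        lam = self.active.connectivity()
        # Switch for the *next* operation; a single update changes the
        # edge connectivity by at most one, which provides the slack used
        # in the correctness argument above.
        if self.active is self.small and lam > self.tau:
            self.active = self.sparse
        elif self.active is self.sparse and lam <= self.tau:
            self.active = self.small

    def connectivity(self):
        return self.active.connectivity()
```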
88
+ },
89
+ {
90
+ "section_id": "5",
91
+ "parent_section_id": null,
92
+ "section_name": "Concluding remarks and open problems",
93
+ "text": "We showed two sub-linear algorithms for exactly maintaining edge connectivity in fully dynamic graphs. The main idea behind both algorithms was to maintain sparsifiers that preserve non-singleton cuts dynamically, and this was achieved by leveraging the power of random 2-out contractions and expander decompositions in the context of edge connectivity.\nOur work leaves several natural open problems.\nCan our update time for dynamically maintaining exact edge connectivity be improved? We remark that a closer examination of our result based on expander decompositions reveals that an improvement to Thorup\u2019s result [56 ###reference_b56###] for bounded edge connectivity (specifically, improving the polynomial dependency on the edge connectivity) would immediately lead to an improved running time. It would be very interesting to investigate whether this can be achieved.\nIs there a fully dynamic algorithm for -approximating edge connectivity in update time? The best-known algorithm due to Thorup achieves update time, and even going beyond this barrier remains an important open problem in dynamic graph algorithms."
94
+ }
95
+ ],
96
+ "appendix": [],
97
+ "tables": {},
98
+ "image_paths": {},
99
+ "validation": true,
100
+ "references": [
101
+ {
102
+ "1": {
103
+ "title": "Graph sparsification in the semi-streaming model.",
104
+ "author": "Kook Jin Ahn and Sudipto Guha.",
105
+ "venue": "In International Colloquium on Automata, Languages, and\nProgramming (ICALP), pages 328\u2013338, 2009.",
106
+ "url": null
107
+ }
108
+ },
109
+ {
110
+ "2": {
111
+ "title": "Analyzing graph structure via linear measurements.",
112
+ "author": "Kook Jin Ahn, Sudipto Guha, and Andrew McGregor.",
113
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 459\u2013467,\n2012.",
114
+ "url": null
115
+ }
116
+ },
117
+ {
118
+ "3": {
119
+ "title": "Maintaining information in fully dynamic trees with top trees.",
120
+ "author": "Stephen Alstrup, Jacob Holm, Kristian De Lichtenberg, and Mikkel Thorup.",
121
+ "venue": "ACM Transactions on Algorithms (TALG), 1(2):243\u2013264, 2005.",
122
+ "url": null
123
+ }
124
+ },
125
+ {
126
+ "4": {
127
+ "title": "A simple semi-streaming algorithm for global minimum cuts.",
128
+ "author": "Sepehr Assadi and Aditi Dudeja.",
129
+ "venue": "In Symposium on Simplicity in Algorithms (SOSA), pages\n172\u2013180, 2021.",
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "5": {
135
+ "title": "Fully-dynamic graph sparsifiers against an adaptive adversary.",
136
+ "author": "Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon\nNanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun.",
137
+ "venue": "In International Colloquium on Automata, Languages, and\nProgramming (ICALP), volume 229 of LIPIcs, pages 20:1\u201320:20, 2022.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "6": {
143
+ "title": "A simple algorithm for minimum cuts in near-linear time.",
144
+ "author": "Nalin Bhardwaj, Antonio Molina Lovett, and Bryce Sandlund.",
145
+ "venue": "In Scandinavian Symposium and Workshops on Algorithm Theory\n(SWAT), pages 12:1\u201312:18, 2020.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "7": {
151
+ "title": "A deterministic algorithm for balanced cut with applications to\ndynamic connectivity, flows, and beyond.",
152
+ "author": "Julia Chuzhoy, Yu Gao, Jason Li, Danupon Nanongkai, Richard Peng, and\nThatchaphol Saranurak.",
153
+ "venue": "In Symposium on Foundations of Computer Science (FOCS), pages\n1158\u20131167, 2020.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "8": {
159
+ "title": "A new algorithm for decremental single-source shortest paths with\napplications to vertex-capacitated flow and cut problems.",
160
+ "author": "Julia Chuzhoy and Sanjeev Khanna.",
161
+ "venue": "In Symposium on Theory of Computing (STOC), pages 389\u2013400,\n2019.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "9": {
167
+ "title": "A unifying framework for -sampling algorithms.",
168
+ "author": "Graham Cormode and Donatella Firmani.",
169
+ "venue": "Distributed and Parallel Databases, 32(3):315\u2013335, 2014.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "10": {
175
+ "title": "Distributed edge connectivity in sublinear time.",
176
+ "author": "Mohit Daga, Monika Henzinger, Danupon Nanongkai, and Thatchaphol Saranurak.",
177
+ "venue": "In Symposium on Theory of Computing (STOC), pages 343\u2013354,\n2019.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "11": {
183
+ "title": "Distributed weighted min-cut in nearly-optimal time.",
184
+ "author": "Michal Dory, Yuval Efron, Sagnik Mukhopadhyay, and Danupon Nanongkai.",
185
+ "venue": "In Symposium on Theory of Computing (STOC), pages 1144\u20131153,\n2021.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "12": {
191
+ "title": "Flows in networks.",
192
+ "author": "LR Ford and DR Fulkerson.",
193
+ "venue": "1962.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "13": {
199
+ "title": "Data structures for on-line updating of minimum spanning trees, with\napplications.",
200
+ "author": "Greg N Frederickson.",
201
+ "venue": "SIAM Journal on Computing, 14(4):781\u2013798, 1985.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "14": {
207
+ "title": "Ambivalent data structures for dynamic 2-edge-connectivity and k\nsmallest spanning trees.",
208
+ "author": "Greg N Frederickson.",
209
+ "venue": "SIAM Journal on Computing, 26(2):484\u2013538, 1997.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "15": {
215
+ "title": "A matroid approach to finding edge connectivity and packing\narborescences.",
216
+ "author": "Harold N. Gabow.",
217
+ "venue": "Journal of Computer and System Sciences, 50(2):259\u2013273, 1995.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "16": {
223
+ "title": "Minimum cut in o(m log n) time.",
224
+ "author": "Pawel Gawrychowski, Shay Mozes, and Oren Weimann.",
225
+ "venue": "In International Colloquium on Automata, Languages, and\nProgramming (ICALP), pages 57:1\u201357:15, 2020.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "17": {
231
+ "title": "A note on a recent algorithm for minimum cut.",
232
+ "author": "Pawe\u0142 Gawrychowski, Shay Mozes, and Oren Weimann.",
233
+ "venue": "In Symposium on Simplicity in Algorithms (SOSA), pages 74\u201379,\n2021.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "18": {
239
+ "title": "Parallel minimum cuts in near-linear work and low depth.",
240
+ "author": "Barbara Geissmann and Lukas Gianinazzi.",
241
+ "venue": "In Symposium on Parallelism in Algorithms and Architectures\n(SPAA), pages 1\u201311, 2018.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "19": {
247
+ "title": "Distributed minimum cut approximation.",
248
+ "author": "Mohsen Ghaffari and Fabian Kuhn.",
249
+ "venue": "In International Symposium on Distributed Computing (DISC),\npages 1\u201315, 2013.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "20": {
255
+ "title": "Congested clique algorithms for the minimum cut problem.",
256
+ "author": "Mohsen Ghaffari and Krzysztof Nowicki.",
257
+ "venue": "In Symposium on Principles of Distributed Computing (PODC),\npages 357\u2013366, 2018.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "21": {
263
+ "title": "Massively parallel algorithms for minimum cut.",
264
+ "author": "Mohsen Ghaffari and Krzysztof Nowicki.",
265
+ "venue": "In Symposium on Principles of Distributed Computing (PODC),\npages 119\u2013128, 2020.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "22": {
271
+ "title": "Faster algorithms for edge connectivity via random 2-out\ncontractions.",
272
+ "author": "Mohsen Ghaffari, Krzysztof Nowicki, and Mikkel Thorup.",
273
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 1260\u20131279,\n2020.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "23": {
279
+ "title": "Multi-terminal network flows.",
280
+ "author": "Ralph E Gomory and Tien Chung Hu.",
281
+ "venue": "Journal of the Society for Industrial and Applied Mathematics,\n9(4):551\u2013570, 1961.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "24": {
287
+ "title": "Incremental exact min-cut in polylogarithmic amortized update time.",
288
+ "author": "Gramoz Goranci, Monika Henzinger, and Mikkel Thorup.",
289
+ "venue": "ACM Transactions on Algorithms (TALG), 14(2):1\u201321, 2018.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "25": {
295
+ "title": "The expander hierarchy and its applications to dynamic graph\nalgorithms.",
296
+ "author": "Gramoz Goranci, Harald R\u00e4cke, Thatchaphol Saranurak, and Zihan Tan.",
297
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 2212\u20132228,\n2021.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "26": {
303
+ "title": "A faster algorithm for finding the minimum cut in a directed graph.",
304
+ "author": "Jianxiu Hao and James B. Orlin.",
305
+ "venue": "J. Algorithms, 17(3):424\u2013446, 1994.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "27": {
311
+ "title": "Local flow partitioning for faster edge connectivity.",
312
+ "author": "Monika Henzinger, Satish Rao, and Di Wang.",
313
+ "venue": "SIAM J. Comput., 49(1):1\u201336, 2020.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "28": {
319
+ "title": "Randomized fully dynamic graph algorithms with polylogarithmic time\nper operation.",
320
+ "author": "Monika R Henzinger and Valerie King.",
321
+ "venue": "Journal of the ACM (JACM), 46(4):502\u2013516, 1999.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "29": {
327
+ "title": "A static 2-approximation algorithm for vertex connectivity and\nincremental approximation algorithms for edge and vertex connectivity.",
328
+ "author": "Monika Rauch Henzinger.",
329
+ "venue": "J. Algorithms, 24(1):194\u2013220, 1997.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "30": {
335
+ "title": "Fully dynamic 2-edge connectivity algorithm in polylogarithmic time\nper operation.",
336
+ "author": "Monika Rauch Henzinger and Valerie King.",
337
+ "venue": "SRC Technical Note, 4, 1997.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "31": {
343
+ "title": "Poly-logarithmic deterministic fully-dynamic algorithms for\nconnectivity, minimum spanning tree, 2-edge, and biconnectivity.",
344
+ "author": "Jacob Holm, Kristian De Lichtenberg, and Mikkel Thorup.",
345
+ "venue": "Journal of the ACM (JACM), 48(4):723\u2013760, 2001.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "32": {
351
+ "title": "Dynamic bridge-finding in o (log2 n) amortized time.",
352
+ "author": "Jacob Holm, Eva Rotenberg, and Mikkel Thorup.",
353
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 35\u201352. SIAM,\n2018.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "33": {
359
+ "title": "Dynamic graph connectivity in polylogarithmic worst case time.",
360
+ "author": "Bruce M Kapron, Valerie King, and Ben Mountjoy.",
361
+ "venue": "In Symposium on Discrete algorithms (SODA), pages 1131\u20131142,\n2013.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "34": {
367
+ "title": "Global min-cuts in rnc, and other ramifications of a simple min-cut\nalgorithm.",
368
+ "author": "David R. Karger.",
369
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 21\u201330, 1993.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "35": {
375
+ "title": "Using randomized sparsification to approximate minimum cuts.",
376
+ "author": "David R. Karger.",
377
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 424\u2013432,\n1994.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "36": {
383
+ "title": "Minimum cuts in near-linear time.",
384
+ "author": "David R. Karger.",
385
+ "venue": "Journal of the ACM, 47(1):46\u201376, 2000.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "37": {
391
+ "title": "A new approach to the minimum cut problem.",
392
+ "author": "David R. Karger and Clifford Stein.",
393
+ "venue": "J. ACM, 43(4):601\u2013640, 1996.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "38": {
399
+ "title": "Deterministic edge connectivity in near-linear time.",
400
+ "author": "Ken-ichi Kawarabayashi and Mikkel Thorup.",
401
+ "venue": "J. ACM, 66(1):4:1\u20134:50, 2019.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "39": {
407
+ "title": "Min-cuts and shortest cycles in planar graphs in o (n loglogn) time.",
408
+ "author": "Jakub Lacki and Piotr Sankowski.",
409
+ "venue": "In European Symposium on Algorithms, pages 155\u2013166. Springer,\n2011.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "40": {
415
+ "title": "On the cut dimension of a graph.",
416
+ "author": "Troy Lee, Tongyang Li, Miklos Santha, and Shengyu Zhang.",
417
+ "venue": "arXiv preprint arXiv:2011.05085, 2020.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "41": {
423
+ "title": "Deterministic mincut in almost-linear time.",
424
+ "author": "Jason Li.",
425
+ "venue": "In Symposium on Theory of Computing (STOC), pages 384\u2013395,\n2021.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "42": {
431
+ "title": "Deterministic min-cut in poly-logarithmic max-flows.",
432
+ "author": "Jason Li and Debmalya Panigrahi.",
433
+ "venue": "In Symposium on Foundations of Computer Science (FOCS), pages\n85\u201392, 2020.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "43": {
439
+ "title": "Work-optimal parallel minimum cuts for non-sparse graphs.",
440
+ "author": "Andr\u00e9s L\u00f3pez-Mart\u00ednez, Sagnik Mukhopadhyay, and Danupon Nanongkai.",
441
+ "venue": "arXiv preprint arXiv:2102.06565, 2021.",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "44": {
447
+ "title": "Weighted min-cut: sequential, cut-query, and streaming algorithms.",
448
+ "author": "Sagnik Mukhopadhyay and Danupon Nanongkai.",
449
+ "venue": "In Symposium on Theory of Computing (STOC), pages 496\u2013509,\n2020.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "45": {
455
+ "title": "Computing edge-connectivity in multigraphs and capacitated graphs.",
456
+ "author": "Hiroshi Nagamochi and Toshihide Ibaraki.",
457
+ "venue": "SIAM J. Discret. Math., 5(1):54\u201366, 1992.",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "46": {
463
+ "title": "A linear-time algorithm for finding a sparse k-connected spanning\nsubgraph of ak-connected graph.",
464
+ "author": "Hiroshi Nagamochi and Toshihide Ibaraki.",
465
+ "venue": "Algorithmica, 7(1):583\u2013596, 1992.",
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "47": {
471
+ "title": "Dynamic spanning forest with worst-case update time: adaptive, las\nvegas, and o (n1/2-)-time.",
472
+ "author": "Danupon Nanongkai and Thatchaphol Saranurak.",
473
+ "venue": "In Symposium on Theory of Computing (STOC), pages 1122\u20131129,\n2017.",
474
+ "url": null
475
+ }
476
+ },
477
+ {
478
+ "48": {
479
+ "title": "Dynamic minimum spanning forest with subpolynomial worst-case update\ntime.",
480
+ "author": "Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen.",
481
+ "venue": "In Symposium on Foundations of Computer Science (FOCS), pages\n950\u2013961, 2017.",
482
+ "url": null
483
+ }
484
+ },
485
+ {
486
+ "49": {
487
+ "title": "Almost-tight distributed minimum cut algorithms.",
488
+ "author": "Danupon Nanongkai and Hsin-Hao Su.",
489
+ "venue": "In International Symposium on Distributed Computing (DISC),\npages 439\u2013453, 2014.",
490
+ "url": null
491
+ }
492
+ },
493
+ {
494
+ "50": {
495
+ "title": "Small cuts and connectivity certificates: A fault tolerant approach.",
496
+ "author": "Merav Parter.",
497
+ "venue": "arXiv preprint arXiv:1908.03022, 2019.",
498
+ "url": null
499
+ }
500
+ },
501
+ {
502
+ "51": {
503
+ "title": "Fast computation of small cuts via cycle space sampling.",
504
+ "author": "David Pritchard and Ramakrishna Thurimella.",
505
+ "venue": "ACM Transactions on Algorithms (TALG), 7(4):1\u201330, 2011.",
506
+ "url": null
507
+ }
508
+ },
509
+ {
510
+ "52": {
511
+ "title": "Computing exact minimum cuts without knowing the graph.",
512
+ "author": "Aviad Rubinstein, Tselil Schramm, and S Matthew Weinberg.",
513
+ "venue": "arXiv preprint arXiv:1711.03165, 2017.",
514
+ "url": null
515
+ }
516
+ },
517
+ {
518
+ "53": {
519
+ "title": "A simple deterministic algorithm for edge connectivity.",
520
+ "author": "Thatchaphol Saranurak.",
521
+ "venue": "In Symposium on Simplicity in Algorithms (SOSA), pages 80\u201385,\n2021.",
522
+ "url": null
523
+ }
524
+ },
525
+ {
526
+ "54": {
527
+ "title": "Expander decomposition and pruning: Faster, stronger, and simpler.",
528
+ "author": "Thatchaphol Saranurak and Di Wang.",
529
+ "venue": "In Symposium on Discrete Algorithms (SODA), pages 2616\u20132635,\n2019.",
530
+ "url": null
531
+ }
532
+ },
533
+ {
534
+ "55": {
535
+ "title": "A simple min-cut algorithm.",
536
+ "author": "Mechthild Stoer and Frank Wagner.",
537
+ "venue": "J. ACM, 44(4):585\u2013591, 1997.",
538
+ "url": null
539
+ }
540
+ },
541
+ {
542
+ "56": {
543
+ "title": "Fully-dynamic min-cut.",
544
+ "author": "Mikkel Thorup.",
545
+ "venue": "Combinatorica, 27(1):91\u2013127, 2007.",
546
+ "url": null
547
+ }
548
+ },
549
+ {
550
+ "57": {
551
+ "title": "Dynamic graph algorithms with applications.",
552
+ "author": "Mikkel Thorup and David R Karger.",
553
+ "venue": "In Scandinavian Workshop on Algorithm Theory (SWAT), pages\n1\u20139. Springer, 2000.",
554
+ "url": null
555
+ }
556
+ }
557
+ ],
558
+ "url": "http://arxiv.org/html/2302.05951v2"
559
+ }
20240322/2302.07433v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2304.07696v2.json ADDED
@@ -0,0 +1,182 @@
1
+ {
2
+ "title": "Learning-Based One-Bit Maximum Likelihood Detection for Massive MIMO Systems: Dithering-Aided Adaptive Approach",
3
+ "abstract": "In this paper, we propose a learning-based detection framework for uplink massive multiple-input and multiple-output (MIMO) systems with one-bit analog-to-digital converters.\nThe learning-based detection only requires counting the occurrences of the quantized outputs of -1 and +1 for estimating a likelihood probability at each antenna.\nAccordingly, the key advantage of this approach is to perform maximum likelihood detection without explicit channel estimation which has been one of the primary challenges of one-bit quantized systems.\nHowever, due to the quasi-deterministic reception in the high signal-to-noise ratio (SNR) regime, one-bit observations in the high SNR regime are biased to either or , and thus, the learning requires excessive training to estimate the small likelihood probabilities.\nTo address this drawback, we propose a dither-and-learning technique to estimate likelihood functions from dithered signals.\n First, we add a dithering signal to artificially decrease the SNR and then infer the likelihood function from the quantized dithered signals by using an SNR estimate derived from a deep neural network-based estimator which is trained offline.\nWe extend our technique by developing an adaptive dither-and-learning method that updates the dithering power according to the patterns observed in the quantized dithered signals.\nThe proposed framework is also applied to channel-coded MIMO systems by computing a bit-wise and user-wise log-likelihood ratio from the refined likelihood probabilities.\nSimulation results validate the performance of the proposed methods in both uncoded and coded systems.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Massive MIMO systems for sub-6 GHz wireless communications [2 ###reference_b2###, 3 ###reference_b3###] and millimeter wave (mmWave) communications [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###] have been considered as one of the emerging technologies for future communications because of the remarkable improvements in terms of spectral efficiency and capacity gains [7 ###reference_b7###].\nAs wireless communication systems continue to grow in popularity and become increasingly important, there is a growing need to investigate communication systems that are not only reliable and high-performing, but also energy-efficient for various future wireless applications such as vehicle-to-everything, internet-of-things, extended reality, and smart grid [8 ###reference_b8###, 9 ###reference_b9###].\nThe small wavelength of mmWave signals and the reduced antenna spacing in mmWave systems enable the installation of more antennas per unit area. Each of these antennas is connected to a dedicated radio frequency (RF) chain equipped with a pair of high-precision data converters which can unlock enhanced spatial coverage and improved signal processing capabilities.\nHowever, the use of a large number of high-resolution analog-to-digital converters (ADCs) at receivers results in prohibitively huge power consumption, which becomes the main bottleneck in the practical deployment because a high-resolution ADC is particularly power-hungry as the power consumption of an ADC tends to scale up exponentially with the number of quantization bits.\nTo overcome the circuit power issue, deploying low-precision ADCs has been considered as a promising low-power solution over the past years [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###].\nAs an extreme case of the low-resolution data converters, the use of one-bit data converters has emerged and become particularly attractive due to the ability to dramatically enhance power efficiency, lower hardware cost, and simplify analog processing of receivers [15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###].\nBecause of the strong nonlinearity, data detection and channel estimation with one-bit data converters are known to be more challenging; however, the use of massive antenna arrays can alleviate the performance loss\n[23 ###reference_b23###, 24 ###reference_b24###].\nNevertheless, when conventional signal processing algorithms are applied directly to low-resolution systems, significant performance losses can occur due to the severe nonlinear distortions that low-resolution ADCs cause.\nState-of-the-art one-bit detection, beamforming, and channel estimation techniques have been developed in the recent decades [17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###, 21 ###reference_b21###, 25 ###reference_b25###, 26 ###reference_b26###].\nLow-complexity symbol-level beamforming methods for one-bit quantized systems were developed for quadrature-amplitude-modulation (QAM) constellations [17 ###reference_b17###].\nTaking into account the heavily quantized signals and antenna correlations, an iterative multiuser detection powered by a message-passing de-quantization algorithm was devised in [18 ###reference_b18###].\nIn [19 ###reference_b19###], a high-complexity one-bit ML detection and 
low-complexity zero-forcing (ZF)-type detection methods were developed.\nIn terms of MIMO detectors, the work in [20 ###reference_b20###] introduced the optimal maximum-likelihood (ML) detector and also proposed a near-ML detector by transforming the one-bit ML detection problem of [19 ###reference_b19###] into a tractable convex optimization problem.\nSuccessive-interference-cancellation one-bit receivers that can be applied to modern channel coding techniques were presented in [21 ###reference_b21###].\nMachine learning techniques were also employed for one-bit detection [27 ###reference_b27###, 28 ###reference_b28###, 29 ###reference_b29###].\nIt was shown in [27 ###reference_b27###] that support vector machines can be used for efficient channel estimation and data detection with one-bit quantized observations.\nIn [28 ###reference_b28###], the conventional orthogonal frequency division multiplexing precoder and decoder are replaced with artificial neural networks to enable unsupervised autoencoder-based detection.\nThe authors in [29 ###reference_b29###] combined a linear estimator based on the Bussgang decomposition and a model-based deep neural network (DNN) approach to make data detection with one-bit ADCs adaptive to the current channel.\nAlthough the aforementioned state-of-the-art one-bit detectors provide high detection performance, the detection methods require the estimation of channel state information (CSI), which is one of the key challenges in one-bit quantized communication systems.\nAccordingly, numerous one-bit ADC channel estimation methods have been developed, such as least-squares (LS), ML, and Bussgang decomposition-based methods [20 ###reference_b20###, 30 ###reference_b30###].\nCombined with antenna-wise non-zero thresholding for one-bit data quantizers, the majorization-minimization-based ML channel estimator was proposed in [25 ###reference_b25###].\nIn [26 ###reference_b26###], it was shown that the Bussgang decomposition-based channel estimator can provide reliable performance for high-order constellations in one-bit ADC systems.\nThe authors in [31 ###reference_b31###] utilized supervised deep learning in developing a mapping from the one-bit quantized measurements to the wireless channels.\nThe authors in [32 ###reference_b32###] derived the lower bounds on the performance of the channel estimation in one-bit MIMO systems considering various mmWave channel models.\nThese recent advancements in channel estimation schemes for one-bit quantized signals, however, still suffer degradation in estimation accuracy compared with high-precision ADC systems.\nDithering has found application in one-bit ADC systems for various purposes.\nIn [33 ###reference_b33###], dithering served the purpose of mitigating correlations within spatial quantization errors for sub-wavelength spatial sampling.\nEssentially, the application of dithering before quantization was intended to decorrelate distortion errors\u2014a crucial aspect for achieving ideal performance through linear processing.\nIn [34 ###reference_b34###], it was demonstrated that a Gaussian-type dither can enhance the effective bit-width of one-bit ADCs, thereby aiding in the reduction of estimation bias for channel estimation.\nOther one-bit ADC works incorporating dithering signals also acknowledge the utility of dithered quantizers [35 ###reference_b35###].\nFor instance, in the case of a Gaussian prior on channel coefficients, the use of the linear minimum mean squared 
error estimate of the channel as a dither signal is demonstrated to be effective in practice [36 ###reference_b36###].\nAs another research direction, learning-based data detection techniques have recently been investigated to remove or minimize the requirement for explicit channel estimation in one-bit ADC MIMO systems [37 ###reference_b37###, 38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###].\nThe authors in [37 ###reference_b37###] applied sphere decoding to the one-bit quantized systems and showed that the detection complexity can be reduced while achieving near-optimal performance.\nViewing the one-bit ADC systems as a classification problem, various supervised-learning-based data detection techniques were provided by estimating effective channels and learning the non-linear system response [38 ###reference_b38###].\nIn [39 ###reference_b39###], however, channel estimation was performed to initialize the likelihood functions for ML detection, and a learning-based likelihood function was used to post-update the likelihood functions.\nIn contrast, the authors in [40 ###reference_b40###] used an estimated channel to generate noisy training pilots and developed an expectation-maximization algorithm that facilitates the likelihood probability learning process.\nUnlike previous learning-based approaches that focused on developing detection mechanisms based on estimated channels, we focus on applying one-bit ML detection and learning likelihood functions without channel estimation.\nThen we propose a novel dithering-based learning method to overcome the limitations of the learning process under a limited amount of training.\nWe remark that the primary goals associated with the dithering signal in previous works [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###, 36 ###reference_b36###] differ from the specific objective of our work."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Contributions",
15
+ "text": "In this work, we explore a learning-based ML detection approach that replaces a one-bit channel estimation stage with a counting-based learning process for an uplink multiuser MIMO systems with one-bit ADCs.\nThe contributions of this work are summarized as\nfollows:\nWe propose a dither-and-learning technique to infer the likelihood functions from dithered signals. Such an approach significantly reduces the number of zero-valued likelihood functions experienced by naive learning-based one-bit detection.\nAfter the dithering process, we obtain a preferable statistical pattern in the one-bit quantized output sequences with moderate sign changes thanks to the reduced SNR.\nThen a denoising phase retrieves the actual likelihood functions without the impact of the dithering noise.\nThe proposed method allows estimating the likelihood functions with a reasonable training length by drawing meaningful sign patterns in the quantized output sequence.\nTo further improve learning accuracy, we develop an adaptive dither-and-learning technique for adjusting each antenna element\u2019s dithering power according the patterns observed\nin the quantized dithered signals.\nSince the performance of the proposed dithering-based learning algorithm is affected by the dithering power, the proposed feedback-based adaptive algorithm effectively adjusts the dithering noise power depending on the pattern of the one-bit quantized outputs.\n A DNN-based SNR estimation method is also developed to facilitate the denoising phase of the dithering-based learning in the practical systems.\nTo further apply the proposed learning-based scheme to modern communication frameworks rather than being limited to hard-output detection, we compute the log-likelihood ratio (LLR), i.e., soft output, which is then fed into a channel-decoder.\nNoting that the LLR needs to be defined with respect to an individual binary bit of each user, we separate the index set of all possible symbol vectors into two disjoint subgroups and compare the sum of the likelihood probabilities over the two subgroups.\nSimulation results validate that, in contrast to the conventional learning-based one-bit ML detectors and other channel estimation-based one-bit data detectors, the proposed detectors can achieve comparable performance to the optimal one-bit ML detection that operates with perfect CSI and exhibit more reliable detection performance in both uncoded and coded simulation cases.\nThis paper is organized as follows.\nIn Section II ###reference_###, we introduce the uplink MIMO signal model and the optimal one-bit ML detection rule.\nSection III ###reference_### provides a counting-based one-bit ML detection strategy that does not require channel estimation.\nIn Section IV-A ###reference_###, we propose the learning-based ML detection, using dithering noise to relax the limitation of the counting-based approach.\nSection IV-B ###reference_### explores the adaptation of the dithering noise variance\nand Section IV-C ###reference_### delivers a DNN-based SNR estimation needed for the de-noising stage.\nWe extend the proposed ML mechanism to channel-coded communication systems in Section V ###reference_###.\nIn Section VI ###reference_###, the proposed detection methods are evaluated.\nSection VII ###reference_### concludes the paper.\nNotation: is a matrix and is a column vector.\n and denote the transpose operation of matrix and column vector, respectively.\nWe denote as the th element of .\nWith mean and variance , we generate a real 
Gaussian distribution and a complex Gaussian distribution using and , respectively.\n creates a diagonal matrix that has \u2019s as its diagonal entries.\n and are a one vector and zero vector, respectively.\n denotes the identify matrix.\n and take the real and imaginary part of , respectively.\n denotes the indicator function which outputs 1 if is true, and 0 otherwise.\n and are the probability and expectation operators, respectively."
16
+ },
17
+ {
18
+ "section_id": "2",
19
+ "parent_section_id": null,
20
+ "section_name": "II System Model",
21
+ "text": "In this section, we describe the uplink MIMO system model and the optimal one-bit ML detection rule which is feasible in the case of perfect CSI."
22
+ },
23
+ {
24
+ "section_id": "2.1",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-A Signal Model",
27
+ "text": "We consider uplink multiuser MIMO communication systems where the base station (BS) equipped with receive antennas concurrently communicates with single-antenna users.\nWe suppose in the context of massive MIMO systems.\nEach antenna element has its own dedicated RF chain as well as individual in-phase and quadrature one-bit ADCs.\nThe wireless channel follows a block fading model whose channel matrix is invariant for coherent time slots.\nWe then split the uplink transmission into a training phase with time slots and a data transmission phase with slots, i.e., .\nDuring the training phase, each user transmits up to pilot symbols.\nWe use to denote the number of possible pilot symbol combinations of users.\n When all users adopt an -ary constellation, we have , e.g., for binary phase shift keying at all users.\nIn the considered system, each pilot symbol combination is transmitted times, which implies that the condition of is necessary to capture the characteristics of all possible combinations.\n\nAccordingly, to reduce the overall training overhead, it is desirable to reduce both and .\nIn this paper, we focus on improving the learning-based one-bit ML detection performance by reducing the training repetition for each candidate vector, i.e. , and we leave the problem of reducing as a future work.\nThe set of the constellation points of -ary QAM scheme is represented by , from which is generated where is the complex-valued QAM data symbol of the th user at time .\nWe assume that has zero mean and unit variance, i.e., and .\nA symbol vector , denotes the collection of the transmitted signals from users at time .\nWe consider each user to adopt -ary QAM constellation and thus, the total number of possible symbol vectors becomes which is the cardinality of .\nAssuming that the transmitted symbols from users are concurrently received and jointly processed at the BS, the received analog complex baseband signal vector at time can be represented as\nwhere is the complex-valued channel matrix between the BS and users, whose th column vector, i.e., , indicates the uplink channel vector defined for the propagation from all user to the th antenna element of the BS.\nThe transmit power is denoted as , and the additive white complex Gaussian noise vector follows where is the AWGN noise variance.\nHere, we define the SNR as\nThen, each real and imaginary component of the received signals in (1 ###reference_###) is quantized by one-bit ADCs which only reveal the sign of the signals, i.e., either or .\nThe complex-valued quantized signal can be represented as\nwhere is an element-wise one-bit data quantizer which returns if the input is positive, or otherwise.\nThe received signal in the complex-vector expression can be rewritten in a real-valued vector representation as\nwhere\nwhere is the real-valued noise vector.\nAccordingly, we also convert the one-bit quantized signal into a real-vector form as\nwhich is composed of real-valued observations of either or .\nThroughout the paper, we consider to have antennas to denote the real-valued ports for ease of notation, i.e., the th antenna in the real-value representation corresponds to ."
28
+ },
29
+ {
30
+ "section_id": "2.2",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-B One-Bit ML Detection with CSI",
33
+ "text": "We first introduce the conventional yet optimal one-bit ML detection in the case of perfect CSI.\nWe define the index set of all possible symbol vectors as and use to denote the th pilot symbol vector in a real-vector form.\nLet with denote the matrix of likelihood functions whose scalar entry means the probability that the th antenna component receives when the users transmit the th symbol vector .\nAssuming uncorrelated antennas, the likelihood probability of the one-bit quantized signal vector for a given channel and transmit symbol vector is given as\nWe remark that such an assumption for the uncorrelated antenna is valid for massive MIMO systems for sub-6GHz communications.\nFor some wideband systems such as millimeter wave communications, the assumption may not hold due to the strong line-of-sight channel.\nAccordingly, incorporating the antenna correlation structure in the learning-based one-bit ML detection problem would be a desirable future research direction.\nRecall that the one-bit observation becomes (or ) when the th element of (7 ###reference_###) is positive (or negative).\nSince the noise follows a Gaussian distribution , the likelihood function for the th antenna element of the quantized observation with the perfect CSI can be computed as\nwhere represents the cumulative distribution function of a standardized Gaussian distribution, and\nis the effective noiseless output of the th antenna in real-value representation when transmitting the th symbol vector.\nUsing equation (19 ###reference_###), the one-bit ML detection rule can be obtained as [20 ###reference_b20###, 38 ###reference_b38###]\nThe detected real-valued symbol vector is then defined as = which can be mapped to as detected QAM symbols by performing the reverse operation of (13 ###reference_###).\nAssuming an equal probability of for all possible symbol vectors, the ML detection in (23 ###reference_###) is identical with the optimal maximum a posteriori probability detection.\nWe note that the optimal ML detection in (23 ###reference_###) requires perfect CSI for computing (20 ###reference_###).\nThe channel estimation, however, can be greatly burdensome in massive MIMO systems and much less accurate for receivers employing one-bit ADCs.\nIn this regard, it is desirable to perform the optimal detection without requiring explicit channel estimation in one-bit massive MIMO systems.\n Note that the maximum number of users for multiuser MIMO in the uplink communications is specified as users (one layer per user case) in the 3GPP standard [41 ###reference_b41###].\nIn addition, -layer multiuser MIMO is commonly considered in practice.\nAccordingly, although reducing the search space is a desirable research direction in the implementation perspective, it is considered to be feasible to perform the current detection."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Preliminary:\nNaive One-bit ML Detection without CSI",
39
+ "text": "Now, we outline a direct learning-based one-bit ML detection strategy that does not require channel estimation.\nAlthough this approach still requires training sequences, the learning principle is greatly simpler than the one-bit channel estimation, thereby providing robust detection performance.\nEach pilot symbol vector is transmitted times throughout the pilot transmission of length .\nThe BS aims to approximate the true likelihood probability by observing the frequency of and during the transmission of the th symbol vector as\nwhere and is the one-bit observation.\nThe operation in (24 ###reference_###) counts the number of \u2019s at the th antenna element out of the consecutive observations triggered by .\nSince each observation follows independent and identically distributed (IID) Bernoulli variable with a probability of , can be interpreted as a binomial distribution averaged by , , which approaches as goes infinity.\nWe note that the amount of training can be cut in half by making\n and symmetrical about the origin for , i.e., .\nBy (21 ###reference_###) and (22 ###reference_###), we can establish that\nTherefore, we can accommodate for and , hence training for is considered to be redundant.\n\nAfter learning the likelihood functions, the BS obtains the estimate of the likelihood probability for a given data signal as\nand the receiver can perform the ML detection presented in (23 ###reference_###) by searching the best index that maximizes (29 ###reference_###) over , which yields the symbol vector with the highest likelihood of transmission, given the observed one-bit quantized measurements.\nAlthough such one-bit ML approaches can provide a near-optimal detection performance with simple function learning techniques, they may suffer from critical performance degradation due to a limited amount of training which results in a zero-valued likelihood function (equivalently, also one-valued likelihood function), called under-trained likelihood function.\nFor the training of the th symbol vector, the th output provides different realizations of .\nAccordingly, in order to prevent the pair of from becoming under-trained,\nthe sign of where needs to change\nat least once out of pilot transmissions.\nHowever, at the high SNR regime, the sign change occurs with low probability, which leads to the quantized outputs at each antenna observed repeatedly to be either all \u2019s or all \u2019s due to the low power of the aggregate noise.\nThis phenomenon results in obtaining a number of zero-valued empirical likelihood functions in (24 ###reference_###), e.g., .\nIn other words, the one-bit quantized observations at the high SNR regime become quasi-deterministic such that it is difficult to observe a change in the sign of the quantized output sequences during the transmissions of the symbol vector .\nThe problem of the under-trained likelihood function is stated in the following remark:\nUnder-trained likelihood functions cause a significant degradation of the ML detection which uses (24 ###reference_###).\nFirstly, the likelihood computation in (29 ###reference_###) can be completely negated by any zero probability.\nSecondly, when the SNR is very high, the quantized output becomes\nThere can exist some symbols whose quantized outputs are equal without the noise, which means, , .\nIn this case, if all likelihood functions are under-trained, both the likelihood probabilities of and computed from (29 ###reference_###) are highly likely to become in the very high SNR, which fails the 
detection for such symbols.\nA similar issue also occurs for conventional one-bit ADC systems with true likelihood functions in the very high SNR.\nFor the learning-based approach, however, it happens even in the medium SNR due to the under-trained likelihood functions.\nThese problems are the key motivation of our work to prevent the under-trained functions with a short pilot length for any SNR regime.\n###figure_1### Fig. 1 ###reference_### illustrates the symbol error rates (SERs) of the optimal one-bit ML detection and the naive approach with the number of training samples for receive antennas, users, and -QAM constellation with respect to the SNR.\n\nRegarding the statistics of the channel, we consider Rayleigh fading with zero mean and unit variance.\n\nIt is observed that although we increase the number of pilot signals, the naive approach starts to become increasingly problematic at the medium to high SNR since the under-trained likelihood functions start to appear more frequently as the SNR increases.\nTherefore, this critical limitation of the naive learning-based approach needs to be resolved to deploy the one-bit ADC systems in practice.\nThe main challenge lies in ensuring the robustness of learning-based ML detection to the training duration across all SNR ranges."
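For reference, the counting rule and the resulting learned detection can be sketched as follows. The clipping constant `eps` is an illustrative guard against log(0) from under-trained entries (cf. Remark 1); it is an assumption here, and the paper's own patch for zero-valued entries is described later in Section IV-A.

```python
import numpy as np

def learn_likelihoods(train_blocks):
    """train_blocks[k]: (Nt x 2Nr) array of +/-1 outputs observed while
    the k-th pilot vector was repeated Nt times; returns the empirical
    Pr(y_i = +1 | s_k) per antenna port."""
    return np.stack([(Y == 1).mean(axis=0) for Y in train_blocks])

def ml_detect_learned(y, p_hat, eps=1e-12):
    """Learned one-bit ML detection: pick the candidate index whose
    learned likelihoods best explain the observation y."""
    p = np.clip(p_hat, eps, 1 - eps)      # avoid log(0) (under-training)
    ll = np.where(y == 1, np.log(p), np.log(1 - p)).sum(axis=1)
    return int(np.argmax(ll))
```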
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "IV Adaptive Statistical Learning without CSI",
45
+ "text": "In this section, we present an adaptive learning-based ML detection method for uplink MIMO systems with one-bit ADCs in order to closely achieve the optimal CSI-aware ML detection performance without suffering the error floor of the naive learning approach observed in Fig. 1 ###reference_### and without the need for explicit channel estimation.\nBeing identical to the maximum a posteriori estimation, the ML estimation is optimal in minimizing the probability of detection error when all possible transmit symbols have an equal probability of being transmitted.\nAccordingly, the proposed method can attain near-optimal detection performance without requiring explicit channel estimation."
46
+ },
47
+ {
48
+ "section_id": "4.1",
49
+ "parent_section_id": "4",
50
+ "section_name": "IV-A Dither-and-Learning",
51
+ "text": "To resolve the problem of the under-trained likelihood functions, we propose the dither-and-learning method that can learn the likelihood functions with a reasonable training length .\nAs shown in Fig. 2 ###reference_###, the BS appends antenna-wise dithering signals to the analog baseband received signal during the training phase.\nAfter dithering, the quantization input at time in the real-vector form becomes\nWe let denote the variance of the real-valued dithering signal at the th antenna and consider where .\nThe distribution of the dithering signal is controlled at the BS.\nThe dithered and quantized signal associated at time becomes\nBy injecting the dithering signal into the unquantized signal , we allow the dithered signal to cross the decision threshold within a limited amount of learning, thereby avoiding under-trained likelihood functions and facilitating the acquisition of statistical patterns.\nThe dithering signal is used only for the training purpose as stated in Remark 3 ###reference_ark3###.\nThe artificial dithering signals are added during the training phase to promote the change of sign of received signals for a given pilot symbol within pilot transmissions.\nAs described in Remark 2 ###reference_ark2###, it is important to capture a change of sign within pilot signals.\nBy adding the dithering noise, we obtain different realizations of .\nThen needs to be less and also larger than at least once out of observations to prevent from being under-trained.\nSuch an event is less likely for the non-dithered naive approach when the noise variance is small.\nAccordingly, by adding the dithering signal in the proposed method,\nthis event is expected to occur more often compared with the non-dithered naive approach, and hence can be apparently shortened in the perspective of avoiding under-trained likelihood probabilities.\nWe further note that the dithered signal then is denoised to adjust the trained likelihood function to the true SNR, and the dithering noise is not present during the data transmission phase.\n###figure_2### As a next step, the BS computes the estimated likelihood function for the dithered signals as in (24 ###reference_###) for .\nWithout loss of generality, let us fix for ease of explanation.\nThen, offers an estimate of the actual likelihood functions as shown in (21 ###reference_###) with increased noise power:\nSince the dithering-aided counting in (34 ###reference_###) approximates (35 ###reference_###) that includes the impact of dither signal, we plunge into the denoising stage to extract the information of desired signal and channel only without the dithering signals.\nAssuming (equivalently, SNR) is known at the BS, the BS can find the estimate of in (22 ###reference_###) by leveraging (34 ###reference_###).\nSuch denoising is computed as\n###figure_3### Finally, the BS uses \nto approximate the true (non-dithered) likelihood function as\nSince the likelihood function of the dithered signal in (34 ###reference_###) is much less likely to have zero probability compared with that of the non-dithered case, the BS can learn the majority of the likelihood functions with a reasonable training length.\nWhen we observe zero likelihood functions after the dither-and-learning process, we set a very small probability that is lower than any of the non-zero likelihood functions, i.e.,\nwhere ,\n indicates the index set of zero-valued likelihood functions for and , and is the index set of non-zero likelihood functions for and .\nFor the proposed 
dither-and-learning method, intuitively, the power of dithering signals affects the learning performance as stated in Remark 4 ###reference_ark4###.\nThe level of dithering power is important: insufficient dithering power continues to trigger under-trained likelihood functions, whereas excessive dithering power hinders recovering the symbol information by making the noise term dominant.\nBased on Remark 4 ###reference_ark4###, we additionally propose an adaptive method for updating the dithering power in the subsequent section."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Adaptive Dithering Power Update",
+ "text": "Using a fixed dithering variance does not suitably adjust the reception mechanism, and this behavior can cause two fundamental problems:\n1) when the dithering power is low and the SNR remains high, it is highly probable to have undesirably many under-trained likelihood functions and\n2) with high dithering power, although the dither-and-learning procedure successfully prevents the under-trained likelihood functions, the estimate of the effective output in (36 ###reference_###) cannot be accurate due to the large randomness of the dithering signals.\nIn this respect, the BS has to properly determine dithering power considering the system environment.\nTo this end, we empirically update the dithering power by leveraging feedback based on the behavior of received observations and propose the adaptive dither-and-learning (ADL) method that fits the dithering power into a suitable range.\nWe depict the illustration of the proposed ADL method in Fig. 3 ###reference_###.\nRather than using up all pilot signals at once, we first divide the signals of each pilot symbol vector into disjoint sub-blocks in which each sub-block accommodates training samples where is assumed to be a multiple of .\nThen, the th dithered and quantized sub-block observed at the th antenna when transmitting can be represented as\nwhere and denotes the dithered observation at the th antenna at time for the th pilot symbol vector .\nWhen the received training sequence turns out to be either or for the th antenna, the dither power is regarded to be lower than the desirable dithering power for at the th antenna in the currently configured system.\nIn such a case, we increase the dithering noise variance of the th antenna for the next sub-block by step size 111Here, we determine through random search. To further optimize the dithering variance or , we may choose to apply hyperparameter tuning techniques such as Bayesian optimization [42 ###reference_b42###]. We shall leave it for a future work., i.e.,\nwhere is the indicator function defined for the th antenna, i.e., if or , and otherwise.\nThe indicator allows that the subsequent training sequence is more likely to observe the sign change within quantized outputs thanks to the increased perturbation.\n\nNote that the antenna-wise in-place operation in (40 ###reference_###) is performed over sub-blocks and is initialized for every symbol vector.\nUpon completing all sub-blocks, the likelihood probability of symbol vector is determined by computing the mean of the likelihood probabilities for all sub-blocks associated with symbol vector .\nAlgorithm 1 ###reference_### summarizes the adaptive dither-and-learning (ADL) process.\nWe note that the fixed dither-and-learning method in Section IV-A ###reference_### is the special case of the ADL method with .\nWe also remark that the ADL method prevents not only the under-trained likelihood functions but also the undesirably large fluctuations of the received signals since the dithering power update is supervised by the BS to fit into the appropriate SNR region based on the observations."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C SNR Estimation",
+ "text": "In spite of the properly managed dithering power, the computation of likelihood probabilities using the denoising process in (36 ###reference_###) requires knowledge of the SNR or equivalently, the AWGN noise variance .\nTo address this, we also present the supervised learning approach to estimate the SNR using a DNN, as illustrated in Fig. 4 ###reference_###.\nDuring the offline training phase, we collect training data points where is the th one-bit quantized observations and is the true SNR at time .\nOnce sufficient samples are collected, the BS selects a portion of the data points as training samples and performs the supervised learning with as inputs and as outputs to be estimated.\nAssuming that there exist hidden layers, the estimated SNR is represented as the scalar output of the neural network expressed as\nwhere each intermediate vector in the DNN is defined as for with the initial point defined as .\nHere, is an element-wise activation function such as rectified linear unit or sigmoid functions.\nThe DNN is updated by minimizing the estimation error, i.e., via backpropagation, hence estimates the SNR by extracting meaningful information of the one-bit observations such as statistical pattern and the number of \u2019s or \u2019s.\nThroughout the paper, the DNN-based SNR estimation employs four hidden layers with output dimension of .\nThe rectified linear unit (ReLU), i.e., is applied to each intermediate vector.\nWe note that the ReLU makes the last regression layer garner a non-negative scalar which is used for back-propagation via the Adam optimizer."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Extension to Channel Coding",
+ "text": "Even though the one-bit ML detection has attractive aspects, we are still confined to the uncoded hard-decision scenarios.\nModern communication frameworks should be paired with channel coding that exhibits an impressive gain and performance calibration; however, soft outputs are needed for the decoding perspective.\nIn this section, we first introduce a frame structure to use channel coding, after that we describe how to generate soft metrics from the previously trained likelihood functions.\n###figure_4###"
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Frame Structure",
+ "text": "For a channel-coded communication framework, we first assume that a (, ) binary code with the code rate of is used throughout the paper.\nAt the beginning of the framework, each user then generates uncoded binary messages of length , denoted as .\nBy encoding the binary messages with the pre-arranged channel coding scheme, we have the codeword for , which is denoted as .\nUpon generating the codeword, each user combines pieces of binary information together to map the binary bits into an -ary QAM symbol, and then the transmitted symbol of the th user at time slot is represented as\nwhere is the constellation mapping function from binary bits to -ary QAM symbols and where means the number of channel uses for a data subframe of each user by mapping bits into a symbol. The overall communication structure is illustrated in Fig. 3 ###reference_###.\nEach subframe of the data transmission phase is composed of the channel uses, and the data transmission phase consists of the subframes, i.e., ."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Soft Metric",
+ "text": "In Section IV ###reference_###, we presented how to produce the likelihood probability utilizing the repeated transmissions with pilot signals per possible symbol vector and the ADL technique.\nFurthermore, from the calculated likelihood probabilities, we can compute a likelihood ratio for a given data payload observation .\nWe note that the one-bit observation at the th time slot is held accountable for the LLR computation of the positions of each user; as a result, the LLR needs to be calculated based on the user-wise and bit-wise operation.\nTo this end, regarding the th bit of the th user\u2019s QAM symbol, we separate the index set of all possible symbol vectors into two non-overlapping subgroups as follows:\nwhere and denotes the th element of which is the QAM symbol of user .\nConsequently, each subset in (43 ###reference_###) is crafted to separate indices into two disjoint sets in terms of the th bit of the th user\u2019s bit sequence that corresponds to .\nBy the definition of (43 ###reference_###), we have and for any and .\nNote that the subsets are defined regardless of current observations and computed only once when the set of system parameters is configured.\nLeveraging the two separated subgroups and the pre-determined likelihood probabilities for the given observation, the corresponding LLR of the th bit of the th user at time can be represented as\nwhere , , is from the definition of LLR, is from Bayes\u2019 rule with the equiprobability of and , (c) comes from the definition of sets defined in (43 ###reference_###), and is from the equiprobability of and the ML detection rule in (19 ###reference_###).\nFinally, the collected LLRs associated with the th user, i.e., , are conveyed to a channel decoder to recover the th user\u2019s message .\nTherefore, the ADL-based estimates of the likelihood functions can be successfully used for computing the LLR of the channel decoder."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Simulation Results",
+ "text": "In this section, we evaluate the performance of the proposed learning-based method in terms of the number of under-trained likelihood functions, the symbol error probability (SER) for the uncoded communication systems, and the frame error probability (FER) for the coded communication systems.\nWe consider Rayleigh fading model whose each element follows .\nWe initialize the dithering variance as and the increment as for all BS antennas in the ADL case."
+ },
+ {
+ "section_id": "6.1",
+ "parent_section_id": "6",
+ "section_name": "VI-A Under-trained Likelihood Functions",
+ "text": "###figure_5### Fig. 5 ###reference_### shows the average number of under-trained likelihood functions, i.e., , out of antennas over the wide range of simulated SNR levels considering antennas, users, and 4-QAM.\nFor the learning-based detectors, we use and compare the naive learning and the ADL methods with .\nRecall that the ADL method with reduces to the case that uses identical and fixed dithering power without adaptation.\nAs the SNR increases, the number of under-trained likelihood functions for the non-dithering case rapidly approaches .\nFor the ADL case with , i.e., fixed dithering power, however, the number of under-trained likelihood functions much slowly increases with the SNR and converges to around thanks to the dithering effect.\nIn addition, for the ADL method with a non-trivial split factor, the number of under-trained likelihood functions increases only to and when and , respectively.\nSince the ADL method decides whether to increase the dithering noise depending on the realization of each sub-block, we can further optimize the learning procedure in terms of the number of under-trained likelihood functions.\nIf we properly increase , each antenna is more likely to avoid zero-valued likelihood functions.\nAs a result, with the adaptive dithering, the proposed algorithm can estimate much more valid likelihood functions, thereby increasing the detection accuracy."
+ },
+ {
+ "section_id": "6.2",
+ "parent_section_id": "6",
+ "section_name": "VI-B Uncoded Communication System: Symbol Error Rate",
+ "text": "To evaluate the data detection performance of the proposed methods in the multiuser massive MIMO system, we compare the following detection methods:\nNaive learning-based one-bit ML\nADL-based one-bit ML (proposed)\nADL-based one-bit ML with estimated SNR (proposed)\nMinimum-Center-Distance (MCD) [38 ###reference_b38###]\nOne-bit ZF with perfect CSI [19 ###reference_b19###]\nOne-bit ML with perfect CSI (optimal one-bit detection)\nOne-bit ML with estimated CSI\nInfinite-bit ML with perfect CSI (optimal detection)\n###figure_6### We note that the learning-based methods: 1) Naive one-bit ML, 2) ADL one-bit ML, 3) ADL one-bit ML with estimated SNR, and 4) MCD, do not require explicit channel estimation; however, the other methods either assume perfect CSI or estimated CSI at the BS.\nThe learning-based methods transmit pilot signals per each training symbol vector, which requires pilot signals in total.\nAccordingly, we consider that the conventional one-bit ML detection with an estimated channel also uses pilot signals to estimate the channel.\nIn our simulations, the one-bit channel estimation method developed in [20 ###reference_b20###] is adopted to provide the estimated CSI.\nFor readability of the curves, we compare MCD for the 16-QAM case shown in Fig. 9 ###reference_###.\n###figure_7### Fig. 6 ###reference_### presents the SER results for antennas, users, pilot signals, and 4-QAM.\nAs expected from Fig. 5 ###reference_###, the naive-learning approach shows the catastrophic result from the medium to high SNR due to the large number of zero-valued likelihood functions.\nThe one-bit ZF detection which applies the pseudo-inverse matrix of the perfectly-known channel matrix onto the one-bit observations shows the large performance degradation with the error floor at the medium and high SNR regime.\nThe one-bit ML detection with the one-bit estimated channels shows a larger deviation from the optimal one-bit ML detection with perfect CSI as the SNR increases due to the channel estimation error.\nUnlike the above benchmarks, the proposed ADL one-bit ML methods closely follow the SER performance curve of the optimal one-bit ML case by avoiding under-trained likelihood functions as shown in Fig. 5 ###reference_### and learning the likelihood functions with high accuracy.\nIn addition, the proposed ADL method with has around gain over the ADL method with fixed dithering power, i.e., , which demonstrates the gain of adaptive dithering based on the feedback.\nWe can also notice that the performance gap between the ADL method with the perfect SNR and the ADL with the estimated SNR is marginal.\nThis observation validates the fact that the offline supervised SNR learning can successfully capture the observation pattern to estimate the SNR required for the de-noising phase in the ADL method.\nLastly, we observe that the optimal one-bit ML detection with achieves similar target SER, e.g., to , as the infinite-resolution ML detection with antennas.\nBy deploying more receive antennas coupled with the low-cost one-bit ADCs, we can compensate for the severe non-linearity loss caused by one-bit ADCs and achieve higher detection performance than the infinite-bit ADC system in the low to medium SNR regime.\nFig. 7 ###reference_### shows the SER results for users where the rest of the simulation parameters remain the same as Fig. 
6 ###reference_###.\nWe can observe that the overall SER trend of the evaluated methods is similar to the case of .\nThe naive approach starts to suffer at the medium SNR, and the channel estimation-based method that uses the same amount of training resources underperforms compared to the proposed ADL methods.\nOverall, we can reaffirm that having a non-trivial is beneficial.\n###figure_8### ###figure_9### Fig. 8 ###reference_### shows the SER performance of the one-bit ML algorithms for different training lengths, with BS antennas, users, and 4-QAM.\nWe first observe that both the naive learning-based one-bit ML and the conventional one-bit ML with the estimated channel still show noticeable performance degradation from the proposed methods for both the short and long training lengths, .\nThis implies that to achieve the optimal one-bit ML performance, it is necessary to use a great number of training symbols for the naive learning-based one-bit ML and the conventional one-bit ML with estimated channels.\nIn contrast, the proposed ADL-based one-bit ML detection offers robust performance in terms of training length.\nIn particular, the SER improvement of increasing to for the ADL method with is about dB, which is small compared with that for the ADL method with .\nTherefore, we can claim that the proposed ADL method is more beneficial for the system with a limited amount of pilot signals, and using proper adaptation stages further improves the detection performance.\nWe can also see that the ADL case with and achieves almost the same performance as the case with and , which emphasizes that adaptive learning can effectively reduce the amount of training sequences.\nFig. 9 ###reference_### shows the SER performance for antennas, users, and 16-QAM.\nWe use training symbols for the learning-based approaches.\nIt is remarkable that the proposed ADL method still offers robust detection performance whereas the one-bit ZF with perfect CSI and the one-bit ML with the estimated CSI present largely degraded detection performance.\nAlthough the MCD method shows lower SER than the other benchmarks, the performance gap from the proposed method is not trivial and increases with the SNR.\nIn this regard, the simulation results demonstrate that the proposed method outperforms the state-of-the-art one-bit detection methods, is more robust to communication environments,\nand requires shorter training sequences."
+ },
+ {
+ "section_id": "6.3",
+ "parent_section_id": "6",
+ "section_name": "VI-C Coded Communication System: Frame Error Rate",
+ "text": "We consider the MIMO configuration with antennas, users, and 4-QAM.\nAs a sophisticated channel coding, we adopt a rate-1/2 polar code of length 128, i.e., and a list decoding with list size 8 is used for the decoding procedure of the polar code.\nIn the coded communication system, we also extend the naive learning-based one-bit ML detection to the coded system and compare the following methods:\nNaive learning-based one-bit ML\nADL-based one-bit ML (proposed)\nOne-bit successive cancellation soft-output (OSS) [21 ###reference_b21###]\nFor the ADL methods, we allocate a total of pilot signals to each symbol vector.\nUnlike the learning-based methods, the OSS detector assumes perfect CSI to compute LLRs.\nAccordingly, it can be regarded as an FER lower bound, and we include it for providing the performance guideline.\nRecall that to use state-of-the-art channel codes, we calculate LLRs using the likelihood probabilities derived by each method.\n###figure_10### Fig. 10 ###reference_### illustrates the FER of the channel-coded systems.\nThe naive learning one-bit detection no longer experiences the tragic reverse trend shown in the uncoded systems; however, the performance gap from the proposed method grows up as SNR increases.\nIn addition, the FER of the ADL method with split factor is placed between that of the OSS detector and the ADL method with , thereby showing the advantage over the ADL with fixed dithering power.\nAgain, the ADL method with can achieve the improvement owing to the fact that the ADL method can accurately learn the likelihood probabilities by avoiding zero-valued likelihood functions even with the limited amount of training sequences.\nIn summary, although the performance of the naive learning-based approach is devastated by the under-trained probabilities in the uncoded system, the likelihood probability in (29 ###reference_###) is still capable of being computed with the under-trained likelihood functions for the LLR defined in (44 ###reference_###) for the coded systems.\nRegarding the probability learning accuracy, however, the proposed ADL method can perform better than the naive learning approach, thereby increasing the performance gap with the SNR."
+ },
+ {
+ "section_id": "6.4",
+ "parent_section_id": "6",
+ "section_name": "VI-D Millimeter Wave Channel Case",
+ "text": "To evaluate the proposed algorithm for a mmWave channel, we adopt a geometric channel model whose number of channel paths is .\nIn simulations, we assume that users have the same number of paths for simplicity.\nNoting that is typically small due to the limited scattering nature of mmWave signals, the correlated mmWave channel propagated from user to the BS is expressed as [43 ###reference_b43###]\nwhere and correspond to the complex path gain and the azimuth angle of arrival (AoA) of the th path bewteen the BS and the th user, respectively.\nParametrized by an azimuth angle , is the array response vector (ARV) for uniform linear array which is the collection of evenly spaced phase shifts defined as\nwhere denotes the signal wavelength and is the antenna spacing.\nThe complex-valued channel from users to the BS is then whose th row is .\nFig. 11 ###reference_### shows the SER results for antennas, users, pilot signals, and 4-QAM when the aforementioned geometric channel with is considered.\nFor the ADL methods, we use and .\nWe can observe that the naive approach still suffers due to the destructive under-trained likelihood functions.\nAs the SNR increases, the one-bit ML detection with the one-bit estimated channels exhibits a larger deviation from the optimal one-bit ML detection with perfect CSI due to the channel estimation error.\nOn the other hand, the two ADL methods follow the optimal one-bit ML detector (only for the uncorrelated channel case) in the simulated SNRs and setting the non-trivial split factor of can make the ADL improve even further.\nAlthough compared to the SER performance of the uncorrelated channels in Fig. 6 ###reference_###, the overall SER is degraded due to the assumption of uncorrelated channels, the proposed methods still achieve a meaningful SER.\n###figure_11###"
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "VII Conclusion",
+ "text": "In this paper, we proposed the statistical learning-based ML detection method for uplink massive MIMO communication systems with one-bit ADCs.\nSince the performance of learning-based one-bit detection approaches can be severely degraded when the number of training samples is insufficient, the proposed method handled such challenges by injecting dithering noise to facilitate the acquisition of statistical patterns.\nWithout requiring explicit channel knowledge, the dither-and-learning method performed one-bit ML detection through learning likelihood functions at each antenna.\nThe proposed method was more robust to the number of training symbols because the adaptive randomness triggers moderate fluctuation in the change of signs of the training sequence, thereby successfully extracting the statistical pattern of one-bit quantized signals.\nWe further adapted dithering power to fit the BS into the appropriate SNR region in accordance with observations.\nIn addition, DNN-based SNR estimation process for denoising and extension to channel-coded systems were also proposed for more practical scenarios.\nSimulation results validated the detection performance of the proposed method in terms of the training amount, SER, and FER.\nTherefore, the proposed method can be a potential low-power and low-complexity solution for 6G applications."
+ },
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1": {
+ "figure_path": "2304.07696v2_figure_1.png",
+ "caption": "Figure 1: Symbol error rate simulation results of the optimal one-bit ML detection with full CSI against naive learning-based one-bit ML detection for Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 receive antennas, Nu=3subscript\ud835\udc41\ud835\udc623N_{u}=3italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 3 users, 4-QAM, and N\ud835\uddcd\ud835\uddcb\u2208{10,100,1000}subscript\ud835\udc41\ud835\uddcd\ud835\uddcb101001000N_{\\sf tr}\\in\\{10,100,1000\\}italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT \u2208 { 10 , 100 , 1000 } pilot signals.",
+ "url": "http://arxiv.org/html/2304.07696v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2304.07696v2_figure_2.png",
+ "caption": "Figure 2: Illustration of the base station architecture with one-bit ADCs for t\u2208{(k\u22121)\u2062N\ud835\uddcd\ud835\uddcb+1,\u2026,k\u2062N\ud835\uddcd\ud835\uddcb}\ud835\udc61\ud835\udc581subscript\ud835\udc41\ud835\uddcd\ud835\uddcb1\u2026\ud835\udc58subscript\ud835\udc41\ud835\uddcd\ud835\uddcbt\\in\\{\\left(k-1\\right)N_{\\sf tr}+1,\\ldots,kN_{\\sf tr}\\}italic_t \u2208 { ( italic_k - 1 ) italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT + 1 , \u2026 , italic_k italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT } for the training the k\ud835\udc58kitalic_kth symbol vector. Signals after ADCs are in real-value representation.\nDuring the pilot transmission phase, dithering signals are added before the quantization block.\nBased on the feedback information, the statistics of the dithering signal is updated.",
+ "url": "http://arxiv.org/html/2304.07696v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2304.07696v2_figure_3.png",
+ "caption": "Figure 3: A communication data frame with a pilot transmission and a data transmission phases.",
+ "url": "http://arxiv.org/html/2304.07696v2/x3.png"
+ },
+ "4": {
+ "figure_path": "2304.07696v2_figure_4.png",
+ "caption": "Figure 4: Illustration of the supervised offline training of the SNR using deep neural networks.\nThe networks are updated in the direction of reducing estimation errors.",
+ "url": "http://arxiv.org/html/2304.07696v2/x4.png"
+ },
+ "5": {
+ "figure_path": "2304.07696v2_figure_5.png",
+ "caption": "Figure 5: The number of under-trained likelihood functions among 2\u2062Nr2subscript\ud835\udc41\ud835\udc5f2N_{r}2 italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT likelihood functions for Nu=4subscript\ud835\udc41\ud835\udc624N_{u}=4italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 4 users, 4-QAM, Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 antennas, and N\ud835\uddcd\ud835\uddcb=45subscript\ud835\udc41\ud835\uddcd\ud835\uddcb45N_{\\sf tr}=45italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT = 45 pilot signals with Rayleigh channels.\nThe proposed adaptive dither-and-learning (ADL) method divides the training period into Ns\u2208{1,3,5}subscript\ud835\udc41\ud835\udc60135N_{s}\\in\\{1,3,5\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 , 5 } sub-blocks for the feedback-driven update\nof dithering power.",
+ "url": "http://arxiv.org/html/2304.07696v2/x5.png"
+ },
+ "6": {
+ "figure_path": "2304.07696v2_figure_6.png",
+ "caption": "Figure 6: Symbol error rate simulation results with Nu=4subscript\ud835\udc41\ud835\udc624N_{u}=4italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 4 users, Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 receive antennas, N\ud835\uddcd\ud835\uddcb=45subscript\ud835\udc41\ud835\uddcd\ud835\uddcb45N_{\\sf tr}=45italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT = 45 training signals, and 4-QAM constellation scheme.\nThe proposed adaptive dither-and-learning (ADL) uses Ns\u2208{1,3}subscript\ud835\udc41\ud835\udc6013N_{s}\\in\\{1,3\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 } split factors.",
+ "url": "http://arxiv.org/html/2304.07696v2/x6.png"
+ },
+ "7": {
+ "figure_path": "2304.07696v2_figure_7.png",
+ "caption": "Figure 7: Symbol error rate simulation results with Nu=6subscript\ud835\udc41\ud835\udc626N_{u}=6italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 6 users, Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 receive antennas, N\ud835\uddcd\ud835\uddcb=45subscript\ud835\udc41\ud835\uddcd\ud835\uddcb45N_{\\sf tr}=45italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT = 45 training signals, and 4-QAM constellation scheme.\nThe proposed adaptive dither-and-learning (ADL) uses Ns\u2208{1,3}subscript\ud835\udc41\ud835\udc6013N_{s}\\in\\{1,3\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 } split factors.",
+ "url": "http://arxiv.org/html/2304.07696v2/x7.png"
+ },
+ "8": {
+ "figure_path": "2304.07696v2_figure_8.png",
+ "caption": "Figure 8: Symbol error rate simulation results with Nu=4subscript\ud835\udc41\ud835\udc624N_{u}=4italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 4 users, Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 receive antennas, N\ud835\uddcd\ud835\uddcb\u2208{45,90}subscript\ud835\udc41\ud835\uddcd\ud835\uddcb4590N_{\\sf tr}\\in\\{45,90\\}italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT \u2208 { 45 , 90 } training signals, and 4-QAM constellation.\nThe proposed adaptive dither-and-learning (ADL) uses Ns\u2208{1,3}subscript\ud835\udc41\ud835\udc6013N_{s}\\in\\{1,3\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 } split factors.",
+ "url": "http://arxiv.org/html/2304.07696v2/x8.png"
+ },
+ "9": {
+ "figure_path": "2304.07696v2_figure_9.png",
+ "caption": "Figure 9: Symbol error rate results with Nu=3subscript\ud835\udc41\ud835\udc623N_{u}=3italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 3 users, Nr=64subscript\ud835\udc41\ud835\udc5f64N_{r}=64italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 64 BS antennas, N\ud835\uddcd\ud835\uddcb=45subscript\ud835\udc41\ud835\uddcd\ud835\uddcb45N_{\\sf tr}=45italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT = 45 pilot signals, and 16-QAM constellation.\nThe proposed adaptive dither-and-learning (ADL) method divides the training period into Ns\u2208{1,3}subscript\ud835\udc41\ud835\udc6013N_{s}\\in\\{1,3\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 } sub-blocks.",
+ "url": "http://arxiv.org/html/2304.07696v2/x9.png"
+ },
+ "10": {
+ "figure_path": "2304.07696v2_figure_10.png",
+ "caption": "Figure 10: Frame error rate results for Nu=4subscript\ud835\udc41\ud835\udc624N_{u}=4italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 4 users, Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 BS antennas, N\ud835\uddcd\ud835\uddcb=45subscript\ud835\udc41\ud835\uddcd\ud835\uddcb45N_{\\sf tr}=45italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT = 45, 4-QAM constellation, and a polar code of rate 1/2121/21 / 2 where (\u03ba,\u03b7)=(64,128)\ud835\udf05\ud835\udf0264128(\\kappa,\\eta)=(64,128)( italic_\u03ba , italic_\u03b7 ) = ( 64 , 128 ).\nThe proposed adaptive dither-and-learning (ADL) method learns the likelihood probability with split factor Ns\u2208{1,3}subscript\ud835\udc41\ud835\udc6013N_{s}\\in\\{1,3\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 }.\nThe one-bit successive-cancellation soft-output (OSS) detector is valid in the case of perfect CSI.",
+ "url": "http://arxiv.org/html/2304.07696v2/x10.png"
+ },
+ "11": {
+ "figure_path": "2304.07696v2_figure_11.png",
+ "caption": "Figure 11: Symbol error rate simulation results with Nu=4subscript\ud835\udc41\ud835\udc624N_{u}=4italic_N start_POSTSUBSCRIPT italic_u end_POSTSUBSCRIPT = 4 users, Nr=32subscript\ud835\udc41\ud835\udc5f32N_{r}=32italic_N start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT = 32 receive antennas, N\ud835\uddcd\ud835\uddcb=45subscript\ud835\udc41\ud835\uddcd\ud835\uddcb45N_{\\sf tr}=45italic_N start_POSTSUBSCRIPT sansserif_tr end_POSTSUBSCRIPT = 45 training signals, and 4-QAM constellation scheme with geometric channels.\nThe proposed adaptive dither-and-learning (ADL) uses \u03c3i2=\u03c1/2superscriptsubscript\ud835\udf0e\ud835\udc562\ud835\udf0c2\\sigma_{i}^{2}=\\rho/2italic_\u03c3 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT = italic_\u03c1 / 2, \u0394=\u03c1/3\u0394\ud835\udf0c3\\Delta=\\rho/3roman_\u0394 = italic_\u03c1 / 3, and Ns\u2208{1,3}subscript\ud835\udc41\ud835\udc6013N_{s}\\in\\{1,3\\}italic_N start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2208 { 1 , 3 } split factors.",
+ "url": "http://arxiv.org/html/2304.07696v2/x11.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2304.07696v2"
+ }
20240322/2305.10061v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2305.13802v3.json ADDED
@@ -0,0 +1,175 @@
+ {
+ "title": "Online Open-set Semi-supervised Object Detection with Dual Competing Head",
+ "abstract": "Open-set semi-supervised object detection (OSSOD) task leverages practical open-set unlabeled datasets that comprise both in-distribution (ID) and out-of-distribution (OOD) instances for conducting semi-supervised object detection (SSOD). The main challenge in OSSOD is distinguishing and filtering the OOD instances (i.e., outliers) during pseudo-labeling since OODs will affect the performance. The only OSSOD work employs an additional offline OOD detection network trained solely with labeled data to solve this problem. However, the limited labeled data restricts the potential for improvement. Meanwhile, the offline strategy results in low efficiency. To alleviate these issues, this paper proposes an end-to-end online OSSOD framework that improves performance and efficiency: 1) We propose a semi-supervised outlier filtering method that more effectively filters the OOD instances using both labeled and unlabeled data. 2) We propose a threshold-free Dual Competing OOD head that further improves the performance by suppressing the error accumulation during semi-supervised outlier filtering. 3) Our proposed method is an online end-to-end trainable OSSOD framework. Experimental results show that our method achieves state-of-the-art performance on several OSSOD benchmarks compared to existing methods. Moreover, additional experiments show that our method is more efficient and can be easily applied to different SSOD frameworks to boost their performance.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Semi-supervised learning (SSL) significantly improves the performance of various image recognition tasks by utilizing a large amount of available unlabeled data [25 ###reference_b25###, 1 ###reference_b1###, 26 ###reference_b26###, 27 ###reference_b27###, 31 ###reference_b31###]. Object detection performance has also greatly benefited from SSL, leading to the proposal of various semi-supervised object detection (SSOD) methods [20 ###reference_b20###, 13 ###reference_b13###, 12 ###reference_b12###, 30 ###reference_b30###, 28 ###reference_b28###, 32 ###reference_b32###].\nHowever, current SSOD methods are under a strong assumption that the unlabeled and labeled data are from the same label space. This assumption is somewhat unrealistic in practical situations because the unlabeled dataset in the real world usually faces the open-set problem, which means there are OOD samples as shown in Fig. 1 ###reference_###(a). Object detectors in current SSOD methods will mistakenly classify OOD samples as ID classes and ultimately degrade the performance.\n###figure_1### Some works [31 ###reference_b31###, 23 ###reference_b23###] have been proposed to tackle this problem in the image classification task. However, these methods are difficult to apply to object detection tasks directly since image classification is an image-level task, but object detection is a more challenging instance-level task.\nThe paper of [19 ###reference_b19###] is the first to tackle the open-set problem in object detection and name it as the OSSOD task. They tried to apply existing OOD detection methods directly but found that their performance was not satisfactory. Then they proposed the first OSSOD method by training a separate OOD detection network with labeled data to distinguish and filter out OOD instances for the SSOD framework during pseudo-labeling.\nAlthough they have improved the performance on open-set unlabeled data, there are still some challenges that need to be addressed: First, they only use labeled data to train the OOD detection network. However, in the OSSOD task, real OOD instances only exist in unlabeled data. The lack of real OODs results in suboptimal performance, as shown in Fig. 1 ###reference_###(b-1). Second, they need a manual threshold for the OOD detection network to filter out OOD instances. It is time-consuming to search for the best threshold for each dataset. Third, their OOD detection network needs to be trained separately from the object detector and requires an additional backbone, which is inefficient considering the training process and network complexity.\nTo address the above issues, we propose a novel OSSOD method: 1) We propose a semi-supervised outlier filtering strategy to improve OOD filtering ability by leveraging both labeled and unlabeled data. 2) We further identify the error accumulation problem: the mispredictions in pseudo-labels accumulate during semi-supervised outlier filtering. As shown in Fig. 1 ###reference_###(b-2), once the OOD instances are mispredicted as ID (the blue cats), the decision boundary expands to misclassify more OOD labels. To tackle this, we propose the Dual Competing OOD (DCO) head, which mitigates this issue with two sub-heads that form a competitive relationship during semi-supervised learning as shown in Fig. 1 ###reference_###(b-3) and further improves the performance.\nMeanwhile, the DCO head does not require any manual threshold for filtring OOD instances. 
3) We render the entire OSSOD framework online end-to-end trainable.\nThe experimental results on several benchmarks show that our method can achieve state-of-the-art OSSOD performance. Meanwhile, our method can be easily applied to other SSOD frameworks to boost their performance. In summary, this paper presents the following contributions:\nWe propose a semi-supervised outlier filtering strategy, which improves the OSSOD accuracy by better utilizing the unlabeled data.\nWe further identify and mitigate the error accumulation problem in semi-supervised outlier filtering by the threshold-free Dual Competing OOD head.\nThe above two components constitute an online end-to-end OSSOD framework. Our proposed method achieves state-of-the-art performance on several OSSOD benchmarks and can be applied to other SSOD frameworks."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Semi-supervised object detection",
+ "text": "Semi-Supervised Object Detection (SSOD) methods aim to improve object detection performance\nwith unlabeled data. Some basic SSOD technologies are transferred from semi-supervised image classification tasks such as data augmentation [1 ###reference_b1###], teacher-student framework [26 ###reference_b26###], and exponential moving average (EMA) [27 ###reference_b27###]. Recent SSOD research addresses unique object detection problems, such as class-wise imbalance [20 ###reference_b20###], localization reliability [14 ###reference_b14###, 30 ###reference_b30###, 12 ###reference_b12###], dynamic thresholding [28 ###reference_b28###], and using dense learnable regions over hard pseudo-labels [32 ###reference_b32###]. However, these methods neglect the presence of OOD instances in open-set unlabeled data. It has been shown that pseudo-labels containing OODs lead to the semantic expansion problem and affect the performance [19 ###reference_b19###]."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Open-set semi-supervised learning",
+ "text": "Most of the open-set semi-supervised learning (OSSL) methods [15 ###reference_b15###, 6 ###reference_b6###] focus on image classification tasks. Yu et al. [31 ###reference_b31###] proposed a multi-task curriculum learning method to select ID samples from unlabeled data by alternatively estimating the OOD score for unlabeled images and training the network. Saito et al. [23 ###reference_b23###] relies on the one-vs-all OOD detection method to filter OOD samples after pseudo-labeling and use a consistency regularization loss to learn more effective representations. However, these methods are incompatible with object detection tasks: The main difference is that each image contains one object in the image classification task but contains a variable number of objects in the object detection task. Moreover, the number of detected objects in each image is also variable during training. As a result, we cannot maintain a fixed number of OOD scores as [31 ###reference_b31###] or augment the image several times for each object to compute the consistency regularization loss such as [23 ###reference_b23###] considering the complexity. OSSL methods also take some techniques from the OOD detection task [7 ###reference_b7###, 11 ###reference_b11###, 18 ###reference_b18###, 2 ###reference_b2###]. However, OOD detection aims to train on a large number of labeled ID data to distinguish OOD samples, which is different from the OSSL setting.\nLiu et al. [19 ###reference_b19###] proposed the only work of OSSL on the object detection task: the outliers are filtered by a pre-trained OOD detection network. However, the OOD detection network is trained separately and only with labeled data. We further improved the accuracy and efficiency."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "Open-set object detection",
+ "text": "The open-set object detection (OSOD) task focuses on detecting both ID and unknown OOD objects. Early approaches use dropout sampling [21 ###reference_b21###] to reduce open-set errors. OWOD [9 ###reference_b9###] utilizes the energy score to discern known and unknown classes. OpenDet [4 ###reference_b4###] separates ID and OOD samples by identifying high/low-density regions in the latent space. The OSOD task is different from the OSSOD task in that OSOD seeks to enhance the accuracy of both ID and OOD classes, while OSSOD focuses on the performance of ID classes and prevents the detrimental effects caused by distracting OOD objects. Meanwhile, these methods also rely on substantial labeled data for training."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Preliminary",
+ "text": "OSSOD task aims to solve the open-set problem in SSOD. Thus, OSSOD methods are based on SSOD frameworks. The SSOD task assumes that the object detector is trained on both labeled dataset and unlabeled dataset . A common pipeline is setting two detectors: student and teacher models. The teacher model generates pseudo-labels for unlabeled data . The generated pseudo-labels are then selected by a manually set threshold on the classification confidence score. Then, the student model is jointly trained with labeled data and pseudo-labeled unlabeled data. The total loss function is defined as:\nwhere and denote the loss function for training with labeled data and unlabeled data, respectively. Each consists of classification and regression losses in object detection tasks. controls the weight of learning with unlabeled data and denotes the thresholding process. During training, the teacher model is updated by the student\u2019s parameters using the exponential moving average (EMA) method to get a more stable result.\n###figure_2###"
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Method",
+ "text": "We apply the Unbiased Teacher [20 ###reference_b20###], which follows the preliminary, as our baseline SSOD method. Fig. 2 ###reference_### illustrates the entire structure of our framework: Our proposed Dual Competing OOD (DCO) head is added to the object detector to filter the OOD instances in the pseudo-labels for SSOD. The DCO head is trained with both labeled and unlabeled data using our semi-supervised outlier filtering strategy. Our framework is online end-to-end trainable. In this section, we first introduce the semi-supervised filtering strategy. Then, we introduce the details of our DCO head."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Semi-supervised outlier filtering",
+ "text": "The previous method [19 ###reference_b19###] trains the outlier filtering network with labeled data only. However, the real OOD instances only exist in the unlabeled data in the OSSOD setting. Thus, we aim to further utilize the unlabeled data to improve the filtering ability. To achieve this, we introduce an OOD detection head into the object detector to filter OOD instances (We can use either previous OOD detection head structures or our DCO head). The head takes the feature of proposals after ROI-pooling as input and predicts the probability of each sample belonging to ID or OOD classes. We train the head with both labeled and unlabeled data in a semi-supervised way.\nTraining on labeled data. Since the labeled data provide reliable supervision, our OOD detection head also relies on training with the annotations from labeled data. Following [19 ###reference_b19###], we use the proposals from the RPN network with high overlap to the ground-truth as ID instances and those proposals with low overlap as OOD instances to train the OOD detection head. The overlap threshold here is consistent with the one used in distinguishing foreground and background in the original object detection task. For each image, we collect a fixed number of instances to form a batch: we first collect all the ID instances and then randomly gather OOD instances until the batch size is complete.\nTraining on unlabeled data. When training on the unlabeled data, we first get the pseudo-labeled instances from the original detection heads and label them as ID or OOD regarding the prediction of our OOD detection head. Then we use these instances to train the student\u2019s OOD detection head. It is worth noting that we can get real OOD instances and more ID instances from unlabeled data to train the head in this way. Thus, the OOD head can be exposed to a broader range of distribution characteristics present in the unlabeled data, thereby improving the performance. The parameters of the OOD detection head are updated using the EMA during SSOD, thus, our method also benefits from the stable ID and OOD predictions from the teacher model\u2019s OOD detection head. To make the training stable, we also sample background proposals from unlabeled data to maintain the fixed batch size mentioned above."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Dual Competing OOD head",
+ "text": "###figure_3### We experimentally find that two problems arise during semi-supervised outlier filtering when we directly apply the OOD detection head structure in the pioneering OSSOD method[19 ###reference_b19###]: 1) As shown in Fig. 3 ###reference_###, the OOD detection head will inevitably generate incorrect predictions, such as labeling OOD instances as ID. If such pseudo-labels are used for semi-supervised outlier filtering, the model will gradually accumulate more errors. 2) A threshold for distinguishing ID and OOD instances is needed for the previous OOD detection head. And it is time-consuming to find the proper threshold for different datasets. We propose the DCO head to solve these problems and further improve the performance.\nAnalysis of error accumulation problem. Applying the previous OOD detection head will cause the error accumulation problem because there is no mechanism to recheck whether a prediction is correct once it has a confidence score above the threshold. Therefore, we aim to add an additional module to constrain the original OOD detection head during the entire training process as shown in Fig. 3 ###reference_###.\nDCO head structure. Our DCO head consists of two sub-classifiers: the positive head is used for our proposed semi-supervised outlier filtering, while the negative head is used for constraining the positive head. The two heads are both classifiers ( ID classes and one OOD class) with the same structure. When determining whether a sample is ID or OOD, the two heads form a competitive relationship:\nSuppose we have an instance with class prediction from the object detector\u2019s classification head. It will be determined as ID only when its confidence score of the class in the positive head surpasses the confidence score of the OOD class in the negative head :\nWith this structure, no additional threshold is needed to filter OOD instances.\nInput: Labeled data: , Unlabeled data: \nOutput: Parameters of teacher and student model .\nCompeting training strategy. We propose a competing training strategy for the DCO head. Specifically, both heads are trained with the cross-entropy loss. When training with labeled data, both the positive and the negative heads share the same label since the labeled data is reliable. When training with unlabeled data, the positive head will follow the semi-supervised learning scheme to use the pseudo-labels from its own prediction. However, the negative head will treat all the instances as OOD since they are not inherently reliable. The overall loss is as follows:\n\\linenomathAMS\nwhere denotes the cross-entropy loss, denotes the labeled and unlabeled instances in a single batch with batch size , respectively. is the provided label of . is the pseudo-label from the DCO head for . is the OOD label for negative head.\nWith our DCO head, the negative head will have high OOD confidence scores for all pseudo-labels, especially for those unseen OOD objects that significantly differ from the ID instances. Therefore, even if the positive head mispredicts an OOD instance as an ID class, the negative head can still prevent this mistake since the corresponding OOD confidence score can also be high. The experimental results prove the effectiveness of our DCO head.\nWe combine with the loss function of our based SSOD framework to train the model:\n,\nwhere controls the weight of . Our entire training process can be described as in Alg. 1 ###reference_###."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Datasets and evaluation metrics",
+ "text": "Our method is evaluated on the COCO-Open and COCO-OpenImages datasets proposed by the pioneering work of OSSOD [19 ###reference_b19###]. We also evaluate our method on the newly introduced COCO-VOC dataset.\nCOCO-Open. We randomly select 20/40/60 classes as ID classes and the remaining as OOD classes in the MS-COCO 2017 [17 ###reference_b17###] dataset with 80 classes. The training set is divided into ID, MIX, and OOD sets by splitting the classes. The images in the ID set contain only instances of ID classes. The images in the OOD set contain only instances of OOD classes. The images in the MIX set contain both instances of ID and OOD classes. We then randomly sample images with annotations from the ID set as the labeled dataset. The rest of the ID set and other sets are combined as the open-set unlabeled dataset. For evaluation, we use all the images in the MS-COCO 2017 validation set but delete the annotations of OOD instances.\nCOCO-OpenImages. We also evaluate our method on a large-scale dataset, using the entire MS-COCO 2017 as the labeled ID dataset and OpenImagesv5 [10 ###reference_b10###] as the open-set unlabeled dataset. OpenImagesv5 contains 1.7M images with 601 classes. Classes not present in MS-COCO are considered as OOD classes. For evaluation, we use the entire MS-COCO 2017 validation set.\nCOCO-VOC. The Pascal-VOC 2012 dataset [3 ###reference_b3###] consists of 20 classes, all of which fall within the 80 classes of the COCO dataset. We employ the Pascal VOC training set as our labeled data and the MS-COCO training set as our unlabeled data. For evaluation, we use both the MS-COCO and Pascal-VOC validation sets.\nEvaluation metrics. We use the standard mean Average Precision (mAP) to evaluate the object detection performance and the area under the ROC curve\n(AUROC) to evaluate the OOD detection performance. To calculate AUROC for object detection, we label all detection results as either ID or OOD classes, depending on whether their IoU score with the annotations (containing only ID instances) exceeds 0.5."
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Baseline methods",
+ "text": "We mainly compare our method with the first OSSOD work [19 ###reference_b19###] (referred to as offline OSSOD for convenience). This work is based on the SSOD framework Unbiased Teacher (UT) [20 ###reference_b20###]. We also apply some OOD detection and open-set object detection methods for ablation studies, including OE [8 ###reference_b8###], Energy [18 ###reference_b18###], OVA-Net [24 ###reference_b24###], VOS [2 ###reference_b2###], and OpenDet [4 ###reference_b4###]."
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Implementation details",
+ "text": "For a fair comparison, we mainly use UT [20 ###reference_b20###] as the basic SSOD framework, which uses Faster R-CNN [22 ###reference_b22###] with Feature Pyramid Network (FPN) [16 ###reference_b16###] and ResNet-50 [5 ###reference_b5###] backbone. We keep the same hyper-parameter settings with UT and offline OSSOD, including the learning rate, SSOD thresholds, training schedule, etc. The only new hyper-parameter of our work is the weight of the OOD detection loss . We set it to 0.1. The other hyper-parameters are reported in the appendix. The whole framework is based on Detectron2 [29 ###reference_b29###]."
+ },
+ {
+ "section_id": "5.4",
+ "parent_section_id": "5",
+ "section_name": "Experiments on OSSOD benchmarks",
+ "text": "Varying number of ID classes and labeled images. We evaluate our method by using various numbers of ID classes (20/40/60) and labeled images (1000/2000/4000). We run each experiment 3 times and report the standard deviation. The results in Table 1 ###reference_### and Table 2 ###reference_### show that our method consistently outperforms the offline OSSOD method across various settings. In half of the cases, our improvement based on UT is more than double that of the previous method. Details of the selected ID classes are provided in the appendix. Meanwhile, we find that as the number of ID classes increases to 60, the improvement of OSSOD methods tends to decrease. This can be attributed to that with the fixed total class number, when the number of ID classes increases, the model will acquire strong class-wise distinguishing abilities. And the impact of a small number of OOD classes naturally diminishes. Similarly, when the number of ID classes is small, our OSSOD method leads to more substantial improvement.\nEffect of different unlabeled data combinations.\nWe further show the effectiveness of our method using different combinations of unlabeled data. We use COCO-Open with 40 ID classes and 4000 labeled images. Then, we consider different unlabeled data combinations of ID, ID+MIX, and ID+MIX+OOD sets. The results in Fig. 4 ###reference_###(a) show that 1) we once again demonstrate that OOD samples are detrimental to the SSOD task, as the performance of UT continuously decreases when introducing more OOD instances, while OSSOD methods can alleviate this problem. 2) With the increase of OOD instances, the performance of the previous OSSOD method also declined, which suggests that our method is more robust. Meanwhile, although there is no ID foreground in the OOD set, it can provide additional backgrounds to enhance the effectiveness of the object detector. This might be the reason for the slight improvement of our method from ID+MIX to ID+MIX+OOD.\nComparsion on the large-scale dataset.\nMoreover, we show the effectiveness of our method on the large-scale data combination of MS-COCO and OpenImagesv5. We apply DINO pre-trained weight in this experiment following the offline OSSOD, while we use ImageNet pre-trained weight in other experiments.\nThe result in Table 3 ###reference_### shows that our method can also significantly improve the performance and achieve state-of-the-art under this challenging task.\n###figure_4### ###table_1### ###table_2###"
88
+ },
89
+ {
90
+ "section_id": "5.5",
91
+ "parent_section_id": "5",
92
+ "section_name": "Ablation studies and analysis",
93
+ "text": "Ablation study of semi-supervised outlier filtering.\nWe show the benefit of mining more instances from unlabeled data by semi-supervised outlier filtering in Table 4 ###reference_###. The performance of our positive head trained with labeled data only (23.86 mAP) is compared with that trained using both labeled and unlabeled data (25.01 mAP). Note that the positive head is actually of the same structure as the head in the offline OSSOD. We also apply the same OOD score and threshold with offline OSSOD when using positive head only. We also find that applying previous OOD detection-related methods results in relatively lower performance, which aligns with the conclusions drawn in offline OSSOD. This may be because these methods are designed to be trained with abundant labeled data, thus, they are unsuitable for the OSSOD task with limited labeled data. For evaluating these methods, we either utilize officially provided values or employ their value-finding methods to set the thresholds if needed. we also analyze that a higher AUROC does not always ensure a better detection performance, as undetected ID objects(false negative) are not reflected in the AUROC. As a result, using the Energy score gains only 21.00 mAP with 79.47 AUROC, since most of its detection results are false positives with high OOD confidence scores.\nEffectiveness of the DCO head. While our method outperforms the previous method with only the positive head using semi-supervised outlier filtering, we find that incorporating our proposed DCO head can further enhance performance. As shown in the last three columns in Table 4 ###reference_###, applying the entire DCO head with both positive and negative heads yields the best performance among all tested methods. We also observe that solely using the negative head results in unstable during the later stages of training.\nFurther analysis of the DCO head. We further analyze the effectiveness of our DCO head by monitoring the number of ID and OOD pseudo-labels during training. We sample 1000 images from the unlabeled set in COCO-Open with their ID label annotations (these annotations were not used during training). Pseudo-labels having an IoU score over 0.5 with the annotations are considered as ID boxes, otherwise OOD boxes. As shown in Fig. 4 ###reference_###(b), compared with the positive or negative head only, our DCO head will gradually generate fewer OOD boxes during training but keep a large number of ID boxes. This occurs as the negative head gradually identifies OOD instances with increasing confidence throughout the training process. This phenomenon matches the purpose of designing the DCO head, thus confirming its effectiveness. This experiment is conducted on COCO-Open with 40 ID classes and 4000 labeled images."
94
+ },
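To make the competition between the two heads concrete, here is a minimal NumPy sketch of how DCO-style filtering could be applied to teacher pseudo-labels. It assumes the positive head emits an ID confidence and the negative head an OOD confidence per box, with the larger score deciding; the paper's exact scoring may differ.

```python
import numpy as np

def dco_filter(boxes, pos_id_scores, neg_ood_scores):
    """Keep a pseudo-label as ID only when the positive head 'wins'.

    boxes:          (N, 4) candidate pseudo-labels from the teacher
    pos_id_scores:  (N,) ID confidence from the positive head
    neg_ood_scores: (N,) OOD confidence from the negative head
    """
    keep = pos_id_scores > neg_ood_scores     # head-vs-head comparison
    return boxes[keep], np.where(~keep)[0]    # kept ID boxes, OOD indices

boxes = np.array([[10, 10, 50, 50], [20, 30, 90, 80]], dtype=float)
ids, ood = dco_filter(boxes, np.array([0.8, 0.3]), np.array([0.4, 0.7]))
print(ids, ood)  # first box kept as ID; second filtered as OOD
```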
95
+ {
96
+ "section_id": "5.6",
97
+ "parent_section_id": "5",
98
+ "section_name": "Additional experiments",
99
+ "text": "###table_3### ###table_4### ###figure_5### ###table_5### More SSOD frameworks. We apply our method to two other SSOD frameworks, SoftTeacher [30 ###reference_b30###] and Pseco [12 ###reference_b12###]. The results in Table 5 ###reference_### show that our method can boost the performance of these SSOD frameworks on the OSSOD task by over 1.0 mAP.\nMore open-set datasets. Additionally, we evaluate our method on another open-set dataset combination: VOC-COCO. The results in Tab. 6 ###reference_### show that our method can also improve the detection performance on this new benchmark. Note that the Pascal-VOC validation set contains ID instances only, and the MS-COCO validation set contains both ID and OOD instances. Thus, the effectiveness of our method is more significant on the MS-COCO validation set.\nEfficiency of our method. Offline OSSOD needs three-step training: 1) train an object detector with labeled data only. 2) train an additional OOD detection network using the proposals from the pre-trained detector. 3) train the SSOD framework with the frozen OOD detection network. Our method only needs to train the entire network once and can converge within the same training iteration. The additional DCO head only consists of two classification heads. Meanwhile, this head can be removed after training. Therefore, our method is more efficient. Table 7 ###reference_### summarizes the training speed and the GPU memory consumption on the same device of the previous offline method and ours. Our method needs only 0.62 training time and less memory.\nVisulization results. We show some visualization results of ID and OOD pseudo-labels during training in Fig. 5 ###reference_###. The results are selected from unlabeled data with a detection confidence score over 0.7. Thus they will all be selected as SSOD training instances if there is no OOD filtering. However, the OOD instances in orange boxes will be filtered with our methods. To demonstrate the confidence scores for the positive and negative heads, we only visualized one detection result per image. Actually, there may be other detection results in the image as well. More visualization results are available in the appendix."
100
+ },
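The 0.62x training-time figure quoted above follows directly from the iteration counts and per-iteration times in Table 7; a quick arithmetic check:

```python
# Totals from Table 7: iterations * seconds per iteration, per step.
offline = 40_000 * 0.21 + 40_000 * 0.29 + 100_000 * 0.35  # 8.4k + 11.6k + 35.0k s
ours = 100_000 * 0.34                                      # single training run
print(offline, ours, round(ours / offline, 2))             # 55000.0 34000.0 0.62
```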
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "Limitations and future direction",
105
+ "text": "Although we improved the performance by directly removing the detected OOD instances, these instances could potentially serve as useful samples for further training the model, thereby enhancing its detection capabilities. Meanwhile, exploring the distinctions among OOD instances could also be a potential direction, as these instances originally belong to different categories."
106
+ },
107
+ {
108
+ "section_id": "7",
109
+ "parent_section_id": null,
110
+ "section_name": "Conclusions",
111
+ "text": "In this paper, we proposed an online end-to-end trainable OSSOD framework with semi-supervised outlier filtering for utilizing unlabeled data and the Dual Competing OOD head to tackle the error accumulation problem. Experimental results on several benchmarks demonstrate that the proposed method achieves better performance compared with the state-of-the-art OSSOD method. We also conducted ablation studies to validate the effectiveness of each component of our method. And we further show the flexibility of our methods on other SSOD frameworks and open-set datasets. With our proposed method, we can leverage more existing unlabeled data to improve the performance of the model without the need for additional manual filtering OOD instances."
112
+ }
113
+ ],
114
+ "appendix": [],
115
+ "tables": {
116
+ "1": {
117
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.9\" style=\"width:433.6pt;height:92.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(48.7pt,-10.4pt) scale(1.28999931545565,1.28999931545565) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T1.9.9\">\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.9.9.10.1\"># of ID classes</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.9.9.10.2\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.9.9.10.3\">40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T1.9.9.10.4\">60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.3.3.4\">UT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.1.1.1.1\">19.06(0.6)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.2.2.2.2\">21.52(1.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.3.3.3.3\">20.55(0.5)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.4\">offline OSSOD</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.4.4.4.1\">19.45(0.4) <span class=\"ltx_text\" id=\"S5.T1.4.4.4.1.1\" style=\"color:#0080FF;\">(+0.39)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.5.5.5.2\">24.07(1.1) <span class=\"ltx_text\" id=\"S5.T1.5.5.5.2.1\" style=\"color:#0080FF;\">(+2.55)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T1.6.6.6.3\">22.40(0.1) <span class=\"ltx_text\" id=\"S5.T1.6.6.6.3.1\" style=\"color:#0080FF;\">(+1.85)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.4\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.7.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.7.7.7.1.1\">21.09(0.1) <span class=\"ltx_text\" id=\"S5.T1.7.7.7.1.1.1\" style=\"color:#0080FF;\">(+2.03)</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.8.2.1\">25.57(0.3) <span class=\"ltx_text\" id=\"S5.T1.8.8.8.2.1.1\" style=\"color:#0080FF;\">(+4.05)</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.9.9.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.9.9.9.3.1\">22.47(0.3) <span class=\"ltx_text\" id=\"S5.T1.9.9.9.3.1.1\" style=\"color:#0080FF;\">(+1.92)</span></span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.11.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S5.T1.12.2\" style=\"font-size:90%;\">mAP results on COCO-Open under different ID class numbers with 4000 labeled images.</span></figcaption>\n</figure>",
118
+ "capture": "Table 1: mAP results on COCO-Open under different ID class numbers with 4000 labeled images."
119
+ },
120
+ "2": {
121
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.9\" style=\"width:433.6pt;height:89.9pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(43.2pt,-9.0pt) scale(1.24862253845205,1.24862253845205) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T2.9.9\">\n<tr class=\"ltx_tr\" id=\"S5.T2.9.9.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.9.9.10.1\"># of labeled imgs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.9.9.10.2\">1000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.9.9.10.3\">2000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T2.9.9.10.4\">4000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.3.4\">UT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.1.1.1.1\">13.92(0.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.2.2.2.2\">15.70(0.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.3.3.3.3\">19.06(0.6)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.4\">offline OSSOD</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.4.4.4.1\">14.47(1.0) <span class=\"ltx_text\" id=\"S5.T2.4.4.4.1.1\" style=\"color:#0080FF;\">(+0.55)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.5.5.5.2\">17.06(0.2) <span class=\"ltx_text\" id=\"S5.T2.5.5.5.2.1\" style=\"color:#0080FF;\">(+1.36)</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.6.6.6.3\">19.83(0.3) <span class=\"ltx_text\" id=\"S5.T2.6.6.6.3.1\" style=\"color:#0080FF;\">(+0.77)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.9.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.9.9.4\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.7.7.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.7.7.7.1.1\">15.70(0.8) <span class=\"ltx_text\" id=\"S5.T2.7.7.7.1.1.1\" style=\"color:#0080FF;\">(+1.78)</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.8.8.8.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.8.8.8.2.1\">18.54(0.7) <span class=\"ltx_text\" id=\"S5.T2.8.8.8.2.1.1\" style=\"color:#0080FF;\">(+2.84)</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.9.9.9.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.9.9.9.3.1\">21.09(0.1) <span class=\"ltx_text\" id=\"S5.T2.9.9.9.3.1.1\" style=\"color:#0080FF;\">(+2.03)</span></span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.11.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.12.2\" style=\"font-size:90%;\">mAP results on COCO-Open under different labeled image numbers with 20 ID classes.</span></figcaption>\n</figure>",
122
+ "capture": "Table 2: mAP results on COCO-Open under different labeled image numbers with 20 ID classes."
123
+ },
124
+ "3": {
125
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.1\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.1.2\">Method</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.1.3\">Labeled</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.1.4\">Unlabeled</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T3.1.1.1\">mAP \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.1\">Fully-supervised</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.2\">COCO</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.3\">None</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.2.4\">40.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.1\">UT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.2\">COCO</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.3\">OpenImagesv5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T3.1.3.4\">41.81</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.1\">offline OSSOD</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.2\">COCO</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.3\">OpenImagesv5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T3.1.4.4\">43.14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.5.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.5.2\">COCO</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.5.3\">OpenImagesv5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T3.1.5.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.1.5.4.1\">44.13</span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.3.1.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.4.2\" style=\"font-size:90%;\">Experimental results on COCO-OpenImages.</span></figcaption>\n</figure>",
126
+ "capture": "Table 3: Experimental results on COCO-OpenImages."
127
+ },
128
+ "4": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T4.2\">\n<tr class=\"ltx_tr\" id=\"S5.T4.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.2.2.3\">OOD data</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.2.2.4\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.1.1.1\">mAP\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T4.2.2.2\">AUROC\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.3.1\" rowspan=\"7\"><span class=\"ltx_text\" id=\"S5.T4.2.3.1.1\"><span class=\"ltx_text\" id=\"S5.T4.2.3.1.1.1\"></span> <span class=\"ltx_text\" id=\"S5.T4.2.3.1.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T4.2.3.1.1.2.1\">\n<span class=\"ltx_tr\" id=\"S5.T4.2.3.1.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.3.1.1.2.1.1.1\">labeled</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.2.3.1.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.3.1.1.2.1.2.1\">only</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S5.T4.2.3.1.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.3.2\">Energy</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.3.3\">21.00</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.3.4\">79.47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.4.1\">OE</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.4.2\">22.55</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.4.3\">71.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.5.1\">OVA-Net</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.5.2\">23.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.5.3\">78.61</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.6.1\">VOS</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.6.2\">21.20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.6.3\">72.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.7.1\">OpenDet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.7.2\">20.53</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.7.3\">67.67</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.8.1\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.8.1.1\" style=\"background-color:#E6E6E6;\">positive head (ours)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.8.2\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.8.2.1\" style=\"background-color:#E6E6E6;\">23.86</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.8.3\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.8.3.1\" style=\"background-color:#E6E6E6;\">72.84</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.9.1\">offline OSSOD</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.9.2\">24.40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.9.3\">76.26</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T4.2.10.1\" 
rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T4.2.10.1.1\"><span class=\"ltx_text\" id=\"S5.T4.2.10.1.1.1\"></span> <span class=\"ltx_text\" id=\"S5.T4.2.10.1.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T4.2.10.1.1.2.1\">\n<span class=\"ltx_tr\" id=\"S5.T4.2.10.1.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.10.1.1.2.1.1.1\">labeled</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.2.10.1.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.10.1.1.2.1.2.1\">&amp;</span></span>\n<span class=\"ltx_tr\" id=\"S5.T4.2.10.1.1.2.1.3\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.10.1.1.2.1.3.1\">unlabeled</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S5.T4.2.10.1.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.10.2\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.10.2.1\" style=\"background-color:#E6E6E6;\">positive head (ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.10.3\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.10.3.1\" style=\"background-color:#E6E6E6;\">25.01</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T4.2.10.4\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.10.4.1\" style=\"background-color:#E6E6E6;\">77.40</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.11.1\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.11.1.1\" style=\"background-color:#E6E6E6;\">negative head (ours)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.11.2\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.11.2.1\" style=\"background-color:#E6E6E6;\">25.32</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T4.2.11.3\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.11.3.1\" style=\"background-color:#E6E6E6;\">75.83</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.2.12.1\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text\" id=\"S5.T4.2.12.1.1\" style=\"background-color:#E6E6E6;\">DCO head (ours)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.2.12.2\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.2.12.2.1\" style=\"background-color:#E6E6E6;\">25.70</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T4.2.12.3\" style=\"background-color:#E6E6E6;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.2.12.3.1\" style=\"background-color:#E6E6E6;\">80.06</span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.4.1.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.5.2\" style=\"font-size:90%;\">Experimental results on different OOD detection methods on COCO-Open with 40 ID classes, 4000 labeled images.</span></figcaption>\n</figure>",
130
+ "capture": "Table 4: Experimental results on different OOD detection methods on COCO-Open with 40 ID classes, 4000 labeled images."
131
+ },
132
+ "5": {
133
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T5.2\">\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.2.2.3\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T5.1.1.1\">mAP\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.2.2.4\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.2.2.2\">mAP\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.2.3.1\">SoftTeacher</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T5.2.3.2\">20.94</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.2.3.3\">Pseco</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.2.3.4\">21.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.2.4.1\">SoftTeacher+ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T5.2.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.2.4.2.1\">21.95 <span class=\"ltx_text\" id=\"S5.T5.2.4.2.1.1\" style=\"color:#0080FF;\">(+1.01)</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.2.4.3\">Pseco+ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.2.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.2.4.4.1\">22.70 <span class=\"ltx_text\" id=\"S5.T5.2.4.4.1.1\" style=\"color:#0080FF;\">(+0.80)</span></span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T5.4.1.1\" style=\"font-size:90%;\">Table 5</span>: </span><span class=\"ltx_text\" id=\"S5.T5.5.2\" style=\"font-size:90%;\">SoftTeacher and Pseco as SSOD frameworks with our method on COCO-Open with 40 ID classes and 4000 labeled images.</span></figcaption>\n</figure>",
134
+ "capture": "Table 5: SoftTeacher and Pseco as SSOD frameworks with our method on COCO-Open with 40 ID classes and 4000 labeled images."
135
+ },
136
+ "6": {
137
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T6\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T6.3\">\n<tr class=\"ltx_tr\" id=\"S5.T6.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T6.3.3.4\">Methods</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T6.1.1.1\">mAP-COCO\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T6.3.3.3\">mAP-VOC\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.3.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.3.4.1\">UT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.3.4.2\">28.82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T6.3.4.3\">81.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.3.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.3.5.1\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.3.5.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.3.5.2.1\">30.29 <span class=\"ltx_text\" id=\"S5.T6.3.5.2.1.1\" style=\"color:#0080FF;\">(+1.47)</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T6.3.5.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.3.5.3.1\">81.64 <span class=\"ltx_text\" id=\"S5.T6.3.5.3.1.1\" style=\"color:#0080FF;\">(+0.51)</span></span></td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T6.5.1.1\" style=\"font-size:90%;\">Table 6</span>: </span><span class=\"ltx_text\" id=\"S5.T6.6.2\" style=\"font-size:90%;\">mAP results on the VOC-COCO benchmark.</span></figcaption>\n</figure>",
138
+ "capture": "Table 6: mAP results on the VOC-COCO benchmark."
139
+ },
140
+ "7": {
141
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T7\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T7.1\">\n<tr class=\"ltx_tr\" id=\"S5.T7.1.2\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S5.T7.1.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T7.1.2.2\">Step</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T7.1.2.3\">\n<span class=\"ltx_text\" id=\"S5.T7.1.2.3.1\"></span> <span class=\"ltx_text\" id=\"S5.T7.1.2.3.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T7.1.2.3.2.1\">\n<span class=\"ltx_tr\" id=\"S5.T7.1.2.3.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.2.3.2.1.1.1\">Time (iter*s/iter)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S5.T7.1.2.3.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T7.1.2.4\">\n<span class=\"ltx_text\" id=\"S5.T7.1.2.4.1\"></span> <span class=\"ltx_text\" id=\"S5.T7.1.2.4.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T7.1.2.4.2.1\">\n<span class=\"ltx_tr\" id=\"S5.T7.1.2.4.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.2.4.2.1.1.1\">GPU Memory (GB)</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S5.T7.1.2.4.3\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.1.3.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T7.1.3.1.1\">Offline OSSOD</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.1.3.2\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.1.3.3\">40k*0.21=8.4k</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T7.1.3.4\">50.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.1.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.4.1\">2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.4.2\">40k*0.29=11.6k</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.4.3\">57.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.1.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.5.1\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.5.2\">100k*0.35=35.0k</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.5.3\">59.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.1.6\">\n<td class=\"ltx_td\" id=\"S5.T7.1.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.6.2\">Total</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.6.3\">55.0k</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.6.4\">59.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T7.1.1.2\">Ours</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T7.1.1.3\">1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T7.1.1.1\">\n<span class=\"ltx_text\" id=\"S5.T7.1.1.1.2\"></span> <span class=\"ltx_text\" id=\"S5.T7.1.1.1.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S5.T7.1.1.1.1.1\">\n<span class=\"ltx_tr\" id=\"S5.T7.1.1.1.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S5.T7.1.1.1.1.1.1.1\">100k*0.34=34.0k <span class=\"ltx_text\" id=\"S5.T7.1.1.1.1.1.1.1.1\" style=\"color:#0080FF;\">(0.62)</span></span></span>\n</span></span> <span class=\"ltx_text\" id=\"S5.T7.1.1.1.3\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"S5.T7.1.1.4\">56.5</td>\n</tr>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" 
id=\"S5.T7.3.1.1\" style=\"font-size:90%;\">Table 7</span>: </span><span class=\"ltx_text\" id=\"S5.T7.4.2\" style=\"font-size:90%;\">Trainging time and GPU memory consumption.</span></figcaption>\n</figure>",
142
+ "capture": "Table 7: Trainging time and GPU memory consumption."
143
+ }
144
+ },
145
+ "image_paths": {
146
+ "1": {
147
+ "figure_path": "2305.13802v3_figure_1.png",
148
+ "caption": "Figure 1: (a) The data setting of the OSSOD task. (b) 1) The previous OSSOD method trained the model with only labeled data. 2) We first improve the performance by our semi-supervised outlier filtering method but face the error accumulation problem: The mispredicted OODs make the decision boundary expand to misclassify more samples. 3) We further propose the Dual Competing OOD head to alleviate the error accumulation and result in better performance.",
149
+ "url": "http://arxiv.org/html/2305.13802v3/x1.png"
150
+ },
151
+ "2": {
152
+ "figure_path": "2305.13802v3_figure_2.png",
153
+ "caption": "Figure 2: The framework of our method. Top: Our DCO head is added to the detector for filtering OODs in the pseudo-labels during training. We propose the semi-supervised outlier filtering strategy to improve the filtering ability. Bottom-left: Training strategy of our DCO head, the pseudo-labeled ID/OODs are used for training the positive head (Note that wrong pseudo-label exists). We label all the unlabeled instances as OOD for training the negative head. Bottom-right: OOD filtering using the DCO head. Two heads compete with each other to decide on ID or OOD. In this case, dog is the ID class, and cat is the OOD class.",
154
+ "url": "http://arxiv.org/html/2305.13802v3/x2.png"
155
+ },
156
+ "3": {
157
+ "figure_path": "2305.13802v3_figure_3.png",
158
+ "caption": "Figure 3: Left: The error accumulation problem with only one OOD detection head. Right: The principle of our DCO head for preventing the problem. In this case, dog is the ID class, and cat is the OOD class. The dashed line represents the decision boundary.",
159
+ "url": "http://arxiv.org/html/2305.13802v3/x3.png"
160
+ },
161
+ "4": {
162
+ "figure_path": "2305.13802v3_figure_4.png",
163
+ "caption": "Figure 4: (a) Performance under different data combinations. (b) The number of ID (left) and OOD (right) pseudo-labeled boxes per image during training for different heads. Pos and neg denote our positive and negative heads respectively.",
164
+ "url": "http://arxiv.org/html/2305.13802v3/x4.png"
165
+ },
166
+ "5": {
167
+ "figure_path": "2305.13802v3_figure_5.png",
168
+ "caption": "Figure 5: Visualization results of pseudo-labels and related scores from the DCO head (pos: the positive head; neg: the negative head). The instances are predicted as ID (blue) or OOD (orange) by comparing the two scores.",
169
+ "url": "http://arxiv.org/html/2305.13802v3/x5.png"
170
+ }
171
+ },
172
+ "validation": true,
173
+ "references": [],
174
+ "url": "http://arxiv.org/html/2305.13802v3"
175
+ }
20240322/2306.03111v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2306.04337v2.json ADDED
@@ -0,0 +1,237 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "A STUDY ON THE IMPACT OF SELF-SUPERVISED LEARNING ON AUTOMATIC DYSARTHRIC SPEECH ASSESSMENT",
3
+ "abstract": "Automating dysarthria assessments offers the opportunity to develop practical, low-cost tools that address the current limitations of manual and subjective assessments.\nNonetheless, the small size of most dysarthria datasets makes it challenging to develop automated assessment.\nRecent research showed that speech representations from models pre-trained on large unlabelled data can enhance Automatic Speech Recognition (ASR) performance for dysarthric speech.\nWe are the first to evaluate the representations from pre-trained state-of-the-art Self-Supervised models across three downstream tasks on dysarthric speech: disease classification, word recognition and intelligibility classification, and under three noise scenarios on the UA-Speech dataset.\nWe show that HuBERT is the most versatile feature extractor across dysarthria classification, word recognition, and intelligibility classification, achieving respectively accuracy compared to classical acoustic features.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Dysarthria is caused by a lack of articulatory control and muscle weakness, which affect speech rate, dynamic amplitudes and pitches, and how the spoken word is pronounced.\nAll of these contribute to unintelligible speech and difficulty understanding due to the inaccurate articulation of phonemes and abnormal speech patterns [1 ###reference_b1###].\nDysarthria classification has become increasingly important in diagnosing the disorder, determining the best treatment options, and conducting speech therapy sessions as needed [2 ###reference_b2###]. Nonetheless, obtaining dysarthric speech samples is usually challenging, as most datasets contain a small number of speakers. Furthermore, there is limited research on how well these assessments can provide specific performance insights for individual patients.\nSelf-Supervised learning (SSL) in speech processing enables learning from large, unlabeled datasets, enhancing the understanding of diverse speech patterns [3 ###reference_b3###].\nWhile recent research has shown that SSL approaches can outperform supervised ones in dysarthric Automatic Speech Recognition [4 ###reference_b4###, 5 ###reference_b5###], it has been not evaluated for other dysarthric assessments: disease, word and intelligibility classification and under various noise patterns.\nThis research gap underscores the need to examine SSL approaches in different dysarthric assessments and environment conditions. Understanding the types of impairments and their patterns better can aid in developing better tools for identifying disorders and their traits.\nOur main contribution is the evaluation of representations extracted by Self-Supervised models trained on large scale healthy speech under three noise scenarios, across three classification tasks.\nWe propose a tool to empirically evaluate different representations (e.g., acoustics and self-supervised methods) and classifiers (e.g., Logistic Regression (LR) and Multi-Layer Perceptron (MLP)), on binary and multi-class tasks (e.g., disease, word, and intelligibility classification).\nTo simulate real-world recording collection scenarios, we experiment under three settings: default, noise addition, and noise reduction.\nOur tool can provide insight on which features can facilitate the identification of disorders and their characteristics.\nFor instance, for the severity classification under these three scenarios, we showed that SSL feature extractors pretrained on healthy speech can be applied on dysarthric speech and still provide good performance."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Dysarthria Automatic Assessments",
15
+ "text": "Dysarthria intelligibility assessment is typically performed in two stages [6 ###reference_b6###].\nThe training stage involves building a computational model based on patients\u2019 speech samples and their respective speech intelligibility classes. After training the model, one can identify classes of speakers with unknown intelligibility levels, by comparing their acoustic features with those used during training. Reference-free intelligibility assessment approaches focus on developing classification models without any prior understanding of healthy speech. Instead, they focus on extracting acoustic features believed to be highly correlated with intelligibility [7 ###reference_b7###].\nMeanwhile, reference-based approaches utilize healthy speech data (e.g., ASR-based approaches) to determine the characteristics of intelligible speech and use them as a basis for estimating the level of intelligibility [4 ###reference_b4###, 8 ###reference_b8###].\nSuch approaches exploit the fact that ASR systems trained only on healthy speech perform poorly on dysarthric speech and that the performance of ASR systems deteriorates with the severity of dysarthric speech."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methodology",
21
+ "text": "###table_1### ###figure_1###"
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Overview",
27
+ "text": "The quality of recordings, representations used, and classification algorithm can influence the effectiveness of automated dysarthria assessments. Therefore, we aim to develop an interpretable tool that facilitates understanding the outputs of such assessments.\nAn overview of the proposed tool is provided in Figure 1 ###reference_###, which can be easily adapted to extract various features, followed by multiple classification algorithms.\nThen, our tool aggregates the results per patient to verify the assessment results' reliability.\nAggregation outputs could be interpreted as intelligibility classes, such as low, mid, and high levels, and could provide clinicians with an interpretable classification of the speaker's intelligibility."
28
+ },
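A minimal sketch of the per-patient aggregation step described in the overview: recording-level predictions are grouped by speaker and summarized as a majority class plus a class distribution. The function and field names here are illustrative, not the tool's actual API.

```python
from collections import Counter

def aggregate_per_patient(predictions):
    """Summarize recording-level labels into a per-patient profile.

    predictions: dict mapping a patient ID to the list of class labels
    predicted for that patient's recordings (hypothetical input format).
    """
    summary = {}
    for patient, labels in predictions.items():
        counts = Counter(labels)
        summary[patient] = {
            "majority_class": counts.most_common(1)[0][0],
            "distribution": dict(counts),  # interpretable class mix
        }
    return summary

print(aggregate_per_patient({"M04": ["VL", "VL", "L"], "F02": ["L", "L", "L"]}))
```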
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "Experimental Setting",
33
+ "text": "Dataset.\nThe UA-Speech [9 ###reference_b9###] dataset contains recordings from 13 healthy control speakers and 15 dysarthric speakers.\nThe vocabulary includes 455 distinct words with ten digits, 26 radio alphabets, 19 computer commands, 100 common words, and 300 uncommon words.\nSpeakers are divided into four different categories based on the severity of the condition, namely high (H), mid (M), low (L), and very low (VL).\nHand-crafted Features.\nWe extracted acoustic measure of articulation, voice and prosody using PRAAT. [10 ###reference_b10###].\nExamples include the mean harmonic-to-noise ratio (HNR), the fraction of locally unvoiced frames, the number of voice breaks, degree of voice breaks, the mean and standard deviation of pitch, jitter, and shimmer, and cepstral peak prominence (CPP) [11 ###reference_b11###].\nSelf-supervised Features.\nWe used pre-trained Self-Supervised Feature extractors provided by the SUPERB benchmark\n111https://github.com/s3prl/s3prl ###reference_github.com/s3prl/s3prl###: wav2vec2, Modified CPC [12 ###reference_b12###], and HuBERT.\nDysarthria Classification.\nWe treated the Dysarthria classification as a binary classification task; a participant is either in the control group, class 0, or the dysarthria group, class 1.\nWord Classification.\nDysarthria patients are more likely to be able to utter isolated words rather than continuous sentences [4 ###reference_b4###].\nIsolated word recognition converts the input speech command into the corresponding text format [13 ###reference_b13###].\nKeyword spotting involves detecting specific words or phrases within longer spoken sentences or utterances [14 ###reference_b14###].\nWe designed this task as a multi-classification task with individual words.\nWe considered only words not identified as uncommon in the original dataset.\nIntelligibility Classification.\nDysarthria can vary in severity, leading to speeches of different degrees of intelligibility [15 ###reference_b15###].\nWe considered five classes: the four directly available from the UA-Speech dataset, and one to represent control speakers.\nClassification Models & Evaluation Metric.\nWe compared the performance of Logistic Regression (LR) and Multi-Layer Perceptron (MLP) classifiers. We reported results in terms of accuracy using a Leave-One-Speaker-Out (LOSO) approach.\nWe evaluated the performance at the recording level.\nFor instance, an audio sample is classified as dysarthric or not.\n###figure_2###"
34
+ },
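The Leave-One-Speaker-Out protocol maps directly onto scikit-learn's LeaveOneGroupOut splitter; a small sketch with synthetic features standing in for the acoustic or SSL representations (shapes and speaker counts are illustrative only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))            # utterance-level feature vectors
y = rng.integers(0, 2, size=120)          # 0 = control, 1 = dysarthric
speakers = np.repeat(np.arange(12), 10)   # 12 speakers, 10 recordings each

# Each fold holds out every recording of exactly one speaker.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=speakers, cv=LeaveOneGroupOut(),
                         scoring="accuracy")
print(scores.mean(), scores.std())        # LOSO mean accuracy and spread
```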
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Results",
39
+ "text": "[Default Setting] Q1. How reliable are classifiers when confronted with unknown speakers?\nPreliminaries.\nHealthcare professionals can benefit from classification tasks to better understand and manage speech difficulties associated with dysarthria.\nWhile the literature references existing attempts at automated assessment that attain high levels of accuracy, their generalizability might be biased due to their evaluation methodology [6 ###reference_b6###].\nHuang et al., [16 ###reference_b16###] show that most studies on automated dysarthria assessment achieve high accuracy, ranging from 75.1% to 93.97%.\nThe models are often trained and evaluated on the same speakers or only one unseen speaker for each target class [17 ###reference_b17###], potentially leading to biased results due to the model's ability to identify the speaker rather than dysarthria-related information.\n\nSetup.\nWe assessed the classifiers' performance against speakers absent during training.\nThe UA-Speech dataset's raw recordings are categorized by patient ID to ensure no overlap between the training and testing sets. Thus, all the recordings from a speaker are either in the training or test set.\nWe used hand-crafted features, including acoustic features, and various representations obtained from self-supervised models like HuBERT, wav2vec2, and Modified CPC for feature extraction.\n\nResults.\nModels trained with self-supervised representations outperformed models trained on acoustic features for all dysarthria assessment tasks (Table 1 ###reference_###).\nThe results are obtained without fine-tuning the self-supervised feature extractors, making them a promising direction for automated dysarthria assessment.\nThe HuBERT and wav2vec2 representations demonstrated a word recognition accuracy ranging from 56.5% to 70.6%, surpassing the acoustic models that achieved only 9.6% accuracy. Similarly, these models showed accuracies (around 63%), significantly higher than acoustic models (45.5% for LR and 45.1% for MLP) in the intelligibility task.\nTo better understand the reliability of the assessment at the patient level, we propose a tool in Section 3.1 ###reference_###, which allows for a detailed analysis of the predictions (Figure 2 ###reference_###).\nOne can adapt the proposed tool to other feature extractors and classification models.\nThis tool can provide a more interpretable assessment per patient, and lead to personalized treatments.\n\n\n[Noise Reduction] Q2. 
What impact does enhancing the recordings have on the different tasks?\nPreliminaries.\nWe considered a scenario under which we enhance the Default dataset.\nWe considered speech restoration, a process that aims at restoring degraded speech signals to their original quality [18 ###reference_b18###].\nFor instance, speech is typically surrounded by background noise, blurred by reverberation in the room, or recorded with low-quality equipment.\nAmbient noise from clinical clicks or other artifacts may be present in dysarthric recordings.\nSetup.\nOur objective was to enhance the recordings by applying a speech enhancement approach and to evaluate the models' performance in such a scenario.\nWe generated a new version of the dataset after applying 'VoiceFixer' [19 ###reference_b19###], a method that attempts to remove multiple distortions simultaneously.\nWe apply resampling before extracting the representations to ensure that the sampling rate of the input signal matches the one expected by the feature extractor.\n\nResults.\nAcross all tasks, feature extractors, and classifiers, performance decreased (Table 1 ###reference_###).\nNonetheless, self-supervised models still outperformed acoustic models, for instance 52.3% to 55.5% versus 46.3% for LR and 39.3% for MLP in the intelligibility task.\nWe looked deeper into patient-level variations using our proposed visualization tool to display the model's predictions for the speaker intelligibility task.\nModels based on acoustic features predicted most participants as the control intelligibility group (Figure 2 ###reference_###).\nFurthermore, the enhancement tool led to partial speech segment removal in some recordings.\nAs such, the systematic use of enhancement tools requires additional care when used as a preprocessing step in automated assessment pipelines.\n\n\n[Noise Addition] Q3. Are models trained on the Default dataset biased by patterns related to the noise in recordings?\nPreliminaries.\nSome recordings in the default setting of the dataset have different noise levels and clicking sounds.\nThus, one could argue that the good performance observed on the disease classification task is due to patterns associated with external factors of the recording rather than speech-related information.\nTo confirm whether the feature extractors and models can leverage information specific to classes, we conducted audio sample mixing to exacerbate such an effect.\n\nSetup.\nWe generated a new version of the dataset to determine whether the models perform well because they leverage specific noise patterns or because they extract information from the speech itself.\nFirst, we obtained a single background noise sample from the WHAM dataset [20 ###reference_b20###].\nThen, we mixed every audio recording from control patients with that noise pattern. Under such a scenario, feature extractors and models able to pick up that singular noise pattern would achieve higher performance.\n\nResults.\nAll combinations of feature extractors and models achieved higher accuracy on the disease classification task (Table 1 ###reference_###).\nWith noise addition, self-supervised models demonstrated strong performance, with accuracies close to 99.7% in disease detection (binary task), and maintained accuracies above 62.9% in intelligibility (multi-class task), compared to the acoustic models' 53.5% for LR and 53.2% for MLP.
For word recognition, HuBERT maintained high performance (around 70%), slightly higher than wav2vec2 and Modified CPC.\nTherefore, all feature extractors and models could leverage such a cue when a singular noise pattern is enforced across control patients."
40
+ },
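A sketch of the Q3 noise-addition setup: the same background-noise clip is mixed into every control recording. The paper does not state a mixing level, so the SNR here is an assumed parameter, and float waveforms in [-1, 1] are assumed.

```python
import numpy as np

def add_fixed_noise(speech, noise, snr_db=10.0):
    """Mix one fixed background-noise clip into a recording at a given SNR."""
    noise = np.resize(noise, speech.shape)        # loop/trim noise to length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(p_speech / p_noise_scaled) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```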
41
+ {
42
+ "section_id": "5",
43
+ "parent_section_id": null,
44
+ "section_name": "Conclusions and Future Work",
45
+ "text": "Self-Supervised Learning in Dysarthria.\nSelf-supervised representations, such as HuBERT, wav2vec2, and Modified CPC, demonstrated higher performance in all dysarthria evaluation tasks (i.e., disease detection, word recognition, and intelligibility). As such, feature extractor trained on large scale healthy speech datasets can be leveraged for smaller dataset with dysarhtic speech.\n\nPatient-level Inspection.\nWe proposed a tool to inspect the predictions at a patient level.\nGiven a reliable classifier, one can inspect the different predictions for a given patient and determine whether the predictions indicate a mix of classes or if a class is overwhelmingly represented.\n\nClasses Imbalance.\nFor the Intelligibility Severity assessment, there is a major class imbalance concerning the number of recordings, i.e., (VL), (L), (M), (H + C) or (H) and (C).\nFurthermore, the intelligibility labels are coarse.\nFor instance, patients intelligibility score between and are grouped together.\nThe imbalance and coarse labels make it challenging to obtain fine-grained predictions.\n\nLimitations and Future Work.\nFuture work would benefit from exploring Whisper representations as well, and fine-tuning the feature extractors on dysarthric data.\nNonetheless, despite not requiring labels, Self-Supervised pretraining directly on the imbalanced dysarthric data could lead to representations that benefit the high/control intelligibility representations. As such, future works would benefit from leveraging methods to tackle class imbalance.\nIn addition, the current study focused on dysarthria, but it is unclear how these SSL models perform with other speech impairments. The generalizability of these models across a broader range of speech disorders remains an area to explore, and further investigation on other datasets would be beneficial."
46
+ },
47
+ {
48
+ "section_id": "6",
49
+ "parent_section_id": null,
50
+ "section_name": "Acknowledgements",
51
+ "text": "We thank Sandra Siby for her invaluable suggestions that improved the text.\nXavier F. Cadet is supported by UK Research and Innovation (UKRI Centre for Doctoral Training in AI for Healthcare grant number EP/S023283/1).\nHamed Haddadi is supported by the EPSRC Open Plus Fellowship (EP/W005271/1: Securing the Next Billion Consumer Devices on the Edge).\nFor open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising."
52
+ }
53
+ ],
54
+ "appendix": [],
55
+ "tables": {
56
+ "1": {
57
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.5.1.1\">Table 1</span>: </span>Task Performance:\nWe report the mean and (standard deviation) of the accuracy.\nResults for disease and word classification are derived from individual recordings, whereas intelligibility task results are based on speaker-level performance.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.3.4.1.1\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.3.4.1.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.3.4.1.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.4.1.3.1\" style=\"font-size:90%;\">Default</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.3.4.1.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.4.1.4.1\" style=\"font-size:90%;\">Noise Reduction</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S3.T1.3.4.1.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.4.1.5.1\" style=\"font-size:90%;\">Noise Addition</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.3.4.1\" style=\"font-size:90%;\">Task</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.3.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.3.5.1\" style=\"font-size:90%;\">Extractor</span></td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S3.T1.1.1.1\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.1.1.1\" style=\"font-size:90%;\">Accuracy </span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.1.1.2\" style=\"font-size:90%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S3.T1.2.2.2\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.2.2.2.1\" style=\"font-size:90%;\">Accuracy </span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.2.2.2.2\" style=\"font-size:90%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center\" colspan=\"2\" id=\"S3.T1.3.3.3\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.3.3.1\" style=\"font-size:90%;\">Accuracy </span><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.3.3.2\" style=\"font-size:90%;\"></span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.5.2\">\n<td class=\"ltx_td\" id=\"S3.T1.3.5.2.1\"></td>\n<td class=\"ltx_td\" id=\"S3.T1.3.5.2.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.5.2.3.1\" style=\"font-size:90%;\">LR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.5.2.4.1\" style=\"font-size:90%;\">MLP</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.5.2.5.1\" style=\"font-size:90%;\">LR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.5.2.6.1\" style=\"font-size:90%;\">MLP</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.5.2.7.1\" 
style=\"font-size:90%;\">LR</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.5.2.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.5.2.8.1\" style=\"font-size:90%;\">MLP</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.6.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.1.1\" style=\"font-size:90%;\">disease</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.2.1\" style=\"font-size:90%;\">acoustic</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.3.1\" style=\"font-size:90%;\">69.3 (22.6)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.4.1\" style=\"font-size:90%;\">65.8 (24.6)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.5.1\" style=\"font-size:90%;\">59.4 (23.8)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.6.1\" style=\"font-size:90%;\">54.8 (33.7)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.7.1\" style=\"font-size:90%;\">79.4 (17.9)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.6.3.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.6.3.8.1\" style=\"font-size:90%;\">76.6 (21.0)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.7.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.1.1\" style=\"font-size:90%;\">disease</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.2.1\" style=\"font-size:90%;\">wav2vec2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.3.1\" style=\"font-size:90%;\">94.2 (8.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.4.1\" style=\"font-size:90%;\">94.1 (8.5)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.5.1\" style=\"font-size:90%;\">82.8 (16.4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.6.1\" style=\"font-size:90%;\">81.8 (17.9)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.7.1\" style=\"font-size:90%;\">99.8 (0.2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.7.4.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.7.4.8.1\" style=\"font-size:90%;\">99.7 (0.4)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.8.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.1.1\" style=\"font-size:90%;\">disease</span></td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.2.1\" style=\"font-size:90%;\">hubert</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.3.1\" style=\"font-size:90%;\">94.0 (8.0)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.4.1\" style=\"font-size:90%;\">93.4 (9.8)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.5.1\" style=\"font-size:90%;\">84.8 (15.9)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.6.1\" style=\"font-size:90%;\">85.5 (15.1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.7.1\" style=\"font-size:90%;\">99.7 (0.5)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.8.5.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.8.5.8.1\" style=\"font-size:90%;\">99.4 (1.3)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.9.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.1.1\" style=\"font-size:90%;\">disease</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.2.1\" style=\"font-size:90%;\">modified cpc</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.3.1\" style=\"font-size:90%;\">94.9 (7.3)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.4.1\" style=\"font-size:90%;\">95.8 (6.7)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.5.1\" style=\"font-size:90%;\">82.0 (19.4)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.6.1\" style=\"font-size:90%;\">84.1 (15.6)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.7.1\" style=\"font-size:90%;\">99.7 (0.5)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.9.6.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.9.6.8.1\" style=\"font-size:90%;\">99.6 (0.9)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.10.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.1.1\" style=\"font-size:90%;\">words</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.2.1\" style=\"font-size:90%;\">acoustic</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.3.1\" style=\"font-size:90%;\">9.6 (5.3)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.4.1\" style=\"font-size:90%;\">9.6 (5.7)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.5\"><span 
class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.5.1\" style=\"font-size:90%;\">8.9 (5.2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.6.1\" style=\"font-size:90%;\">11.3 (6.4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.7.1\" style=\"font-size:90%;\">9.2 (5.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.10.7.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.10.7.8.1\" style=\"font-size:90%;\">9.6 (5.7)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.11.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.1.1\" style=\"font-size:90%;\">words</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.2.1\" style=\"font-size:90%;\">wav2vec2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.3.1\" style=\"font-size:90%;\">56.5 (27.4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.4.1\" style=\"font-size:90%;\">56.5 (27.2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.5.1\" style=\"font-size:90%;\">43.9 (23.8)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.6.1\" style=\"font-size:90%;\">44.4 (24.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.7.1\" style=\"font-size:90%;\">54.0 (26.5)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.11.8.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.11.8.8.1\" style=\"font-size:90%;\">54.5 (26.3)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.12.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.1.1\" style=\"font-size:90%;\">words</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.2.1\" style=\"font-size:90%;\">hubert</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.3.1\" style=\"font-size:90%;\">70.6 (29.1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.4.1\" style=\"font-size:90%;\">69.3 (29.0)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.5.1\" style=\"font-size:90%;\">56.6 (27.7)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.6.1\" style=\"font-size:90%;\">55.8 (27.4)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.12.9.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.7.1\" style=\"font-size:90%;\">70.2 (29.0)</span></td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S3.T1.3.12.9.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.12.9.8.1\" style=\"font-size:90%;\">69.1 (29.1)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.13.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.1.1\" style=\"font-size:90%;\">words</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.2.1\" style=\"font-size:90%;\">modified cpc</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.3.1\" style=\"font-size:90%;\">53.8 (29.4)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.4.1\" style=\"font-size:90%;\">57.3 (29.1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.5.1\" style=\"font-size:90%;\">46.1 (25.8)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.6.1\" style=\"font-size:90%;\">48.1 (26.5)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.7.1\" style=\"font-size:90%;\">54.7 (30.2)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.13.10.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.13.10.8.1\" style=\"font-size:90%;\">57.1 (29.5)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.14.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.1.1\" style=\"font-size:90%;\">intelligibility</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.2.1\" style=\"font-size:90%;\">acoustic</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.3.1\" style=\"font-size:90%;\">45.5 (34.8)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.4.1\" style=\"font-size:90%;\">45.1 (32.7)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.5.1\" style=\"font-size:90%;\">46.3 (41.8)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.6.1\" style=\"font-size:90%;\">39.3 (36.8)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.7.1\" style=\"font-size:90%;\">53.5 (36.4)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S3.T1.3.14.11.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.14.11.8.1\" style=\"font-size:90%;\">53.2 (34.9)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.15.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.1.1\" style=\"font-size:90%;\">intelligibility</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"S3.T1.3.15.12.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.2.1\" style=\"font-size:90%;\">wav2vec2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.3.1\" style=\"font-size:90%;\">63.4 (37.5)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.4.1\" style=\"font-size:90%;\">63.2 (37.6)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.5.1\" style=\"font-size:90%;\">54.3 (35.5)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.6.1\" style=\"font-size:90%;\">54.6 (36.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.7.1\" style=\"font-size:90%;\">67.6 (38.5)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.15.12.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.15.12.8.1\" style=\"font-size:90%;\">67.4 (38.1)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.16.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.1.1\" style=\"font-size:90%;\">intelligibility</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.2.1\" style=\"font-size:90%;\">hubert</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.3.1\" style=\"font-size:90%;\">61.6 (39.1)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.4\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.4.1\" style=\"font-size:90%;\">62.6 (37.9)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.5.1\" style=\"font-size:90%;\">55.4 (36.2)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.6.1\" style=\"font-size:90%;\">55.5 (35.9)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.7.1\" style=\"font-size:90%;\">66.2 (40.2)</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.3.16.13.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.16.13.8.1\" style=\"font-size:90%;\">66.9 (39.5)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.17.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.1\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.1.1\" style=\"font-size:90%;\">intelligibility</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.2\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.2.1\" style=\"font-size:90%;\">modified cpc</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.3.1\" style=\"font-size:90%;\">59.6 (41.1)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.4\"><span class=\"ltx_text ltx_font_smallcaps\" 
id=\"S3.T1.3.17.14.4.1\" style=\"font-size:90%;\">62.4 (39.9)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.5\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.5.1\" style=\"font-size:90%;\">52.3 (35.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.6\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.6.1\" style=\"font-size:90%;\">54.1 (34.6)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.7\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.7.1\" style=\"font-size:90%;\">62.9 (41.9)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.17.14.8\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.3.17.14.8.1\" style=\"font-size:90%;\">64.9 (41.0)</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
58
+ "capture": "Table 1: Task Performance:\nWe report the mean and (standard deviation) of the accuracy.\nResults for disease and word classification are derived from individual recordings, whereas intelligibility task results are based on speaker-level performance.\n"
59
+ }
60
+ },
61
+ "image_paths": {
62
+ "1": {
63
+ "figure_path": "2306.04337v2_figure_1.png",
64
+ "caption": "Fig. 1: The proposed tool overview.",
65
+ "url": "http://arxiv.org/html/2306.04337v2/x1.png"
66
+ },
67
+ "2": {
68
+ "figure_path": "2306.04337v2_figure_2.png",
69
+ "caption": "Fig. 2: Patient-level predicted intelligibility:\nThe top and bottom row show the predictions using respectively the acoustic features and the HuBERT features.\nThe predictions are reported from left to right based on the environment: Default, Noise Reduction, and Noise Addition datasets.\nEach intelligibility class is gradient-color coded, from very low intelligibility in blue on the left to control level in red on the right.\nFor each patient the section that stands out indicates the majority predicted intelligibility class, along with its label (Very Low: 0, Low: 1, Medium: 2, High: 3, Control: 4).\nWhile the performance based on HuBERT features is higher than acoustic, for a given speaker, there are major mis-classifications at the recording level.",
70
+ "url": "http://arxiv.org/html/2306.04337v2/x2.png"
71
+ }
72
+ },
73
+ "validation": true,
74
+ "references": [
75
+ {
76
+ "1": {
77
+ "title": "``Effect of boost articulation therapy (bart) on intelligibility in\nadults with dysarthria,''",
78
+ "author": "Viviana Mendoza Ramos, Charlotte Paulyn, Leen Van den Steen, Maria E\nHernandez-Diaz Huici, Marc De Bodt, and Gwen Van Nuffelen,",
79
+ "venue": "International Journal of Language & Communication Disorders,\nvol. 56, no. 2, pp. 271\u2013282, 2021.",
80
+ "url": null
81
+ }
82
+ },
83
+ {
84
+ "2": {
85
+ "title": "``Dysarthric Speech Recognition From Raw Waveform with Parametric\nCNNs,''",
86
+ "author": "Zhengjun Yue, Erfan Loweimi, Heidi Christensen, Jon Barker, and Zoran\nCvetkovic,",
87
+ "venue": "in Proc. Interspeech, 2022, pp. 31\u201335.",
88
+ "url": null
89
+ }
90
+ },
91
+ {
92
+ "3": {
93
+ "title": "``Hubert: Self-supervised speech representation learning by masked\nprediction of hidden units,''",
94
+ "author": "Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan\nSalakhutdinov, and Abdelrahman Mohamed,",
95
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language\nProcessing, vol. 29, pp. 3451\u20133460, 2021.",
96
+ "url": null
97
+ }
98
+ },
99
+ {
100
+ "4": {
101
+ "title": "``A survey of technologies for automatic dysarthric speech\nrecognition,''",
102
+ "author": "Zhaopeng Qian, Kejing Xiao, and Chongchong Yu,",
103
+ "venue": "EURASIP Journal on Audio, Speech, and Music Processing, p. 48,\n2023.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "5": {
109
+ "title": "``Investigating Self-supervised Pretraining Frameworks for\nPathological Speech Recognition,''",
110
+ "author": "Lester Phillip Violeta, Wen Chin Huang, and Tomoki Toda,",
111
+ "venue": "in Proc. Interspeech, 2022, pp. 41\u201345.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "6": {
117
+ "title": "``An investigation to identify optimal setup for automated assessment\nof dysarthric intelligibility using deep learning technologies,''",
118
+ "author": "Kyle Hall, Andy Huang, and Seyed Reza Shahamiri,",
119
+ "venue": "Cognitive Computation, 2022.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "7": {
125
+ "title": "``Characterization of atypical vocal source excitation, temporal\ndynamics and prosody for objective measurement of dysarthric word\nintelligibility,''",
126
+ "author": "Tiago H Falk, Wai-Yip Chan, and Fraser Shein,",
127
+ "venue": "Speech Communication, vol. 54, no. 5, pp. 622\u2013631, 2012.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "8": {
133
+ "title": "``Improved speaker independent dysarthria intelligibility\nclassification using deepspeech posteriors,''",
134
+ "author": "Ayush Tripathi, Swapnil Bhosale, and Sunil Kumar Kopparapu,",
135
+ "venue": "in IEEE International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2020.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "9": {
141
+ "title": "``Dysarthric speech database for universal access research,''",
142
+ "author": "Heejin Kim, Mark Hasegawa-Johnson, Adrienne Perlman, Jon Gunderson, Thomas S\nHuang, Kenneth Watkin, and Simone Frame,",
143
+ "venue": "in Ninth Annual Conference of the International Speech\nCommunication Association, 2008.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "10": {
149
+ "title": "``Automated dysarthria severity classification: A study on acoustic\nfeatures and deep learning techniques,''",
150
+ "author": "Amlu Anna Joshy and Rajeev Rajan,",
151
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, vol. 30, pp. 1147\u20131157, 2022.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "11": {
157
+ "title": "``Investigating the Impact of Speech Compression on the Acoustics of\nDysarthric Speech,''",
158
+ "author": "Kelvin Tran, Lingfeng Xu, Gabriela Stegmann, Julie Liss, Visar Berisha, and\nRene Utianski,",
159
+ "venue": "in Proc. Interspeech, 2022, pp. 2263\u20132267.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "12": {
165
+ "title": "``Unsupervised pretraining transfers well across languages,''",
166
+ "author": "Morgane Rivi\u00e8re, Armand Joulin, Pierre-Emmanuel Mazar\u00e9, and Emmanuel Dupoux,",
167
+ "venue": "in IEEE International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP), 2020, pp. 7414\u20137418.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "13": {
173
+ "title": "``Application of an isolated word speech recognition system in the\nfield of mental health consultation: Development and usability study,''",
174
+ "author": "Weifeng Fu,",
175
+ "venue": "JMIR Medical Informatics, vol. 8, no. 6, pp. e18677, 2020.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "14": {
181
+ "title": "``Small-footprint keyword spotting using deep neural networks,''",
182
+ "author": "Guoguo Chen, Carolina Parada, and Georg Heigold,",
183
+ "venue": "in IEEE International Conference on Acoustics, Speech and Signal\nProcessing (ICASSP). IEEE, 2014, pp. 4087\u20134091.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "15": {
189
+ "title": "``An acoustic study of the relationships among neurologic disease,\ndysarthria type, and severity of dysarthria,''",
190
+ "author": "Yunjung Kim, Raymond D Kent, and Gary Weismer,",
191
+ "venue": "Journal of Speech, Language, and Hearing Research, 2011.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "16": {
197
+ "title": "``A review of automated intelligibility assessment for dysarthric\nspeakers,''",
198
+ "author": "Andy Huang, Kyle Hall, Catherine Watson, and Seyed Reza Shahamiri,",
199
+ "venue": "in International Conference on Speech Technology and\nHuman-Computer Dialogue (SpeD). IEEE, 2021, pp. 19\u201324.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "17": {
205
+ "title": "``Classification of dysarthric speech according to the severity of\nimpairment: an analysis of acoustic features,''",
206
+ "author": "Bassam Ali Al-Qatab and Mumtaz Begum Mustafa,",
207
+ "venue": "IEEE Access, vol. 9, pp. 18183\u201318194, 2021.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "18": {
213
+ "title": "Digital audio restoration,",
214
+ "author": "Simon J Godsill and Peter JW Rayner,",
215
+ "venue": "Springer, 2013.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "19": {
221
+ "title": "``Voicefixer: Toward general speech restoration with neural\nvocoder,''",
222
+ "author": "Haohe Liu, Qiuqiang Kong, Qiao Tian, Yan Zhao, DeLiang Wang, Chuanzeng Huang,\nand Yuxuan Wang,",
223
+ "venue": "arXiv preprint arXiv:2109.13731, 2021.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "20": {
229
+ "title": "``Wham!: Extending speech separation to noisy environments,''",
230
+ "author": "Gordon Wichern, Joe Antognini, Michael Flynn, Licheng Richard Zhu, Emmett\nMcQuinn, Dwight Crow, Ethan Manilow, and Jonathan Le Roux,",
231
+ "venue": "arXiv preprint arXiv:1907.01160, 2019.",
232
+ "url": null
233
+ }
234
+ }
235
+ ],
236
+ "url": "http://arxiv.org/html/2306.04337v2"
237
+ }
20240322/2306.04366v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2306.06721v3.json ADDED
@@ -0,0 +1,479 @@
1
+ {
2
+ "title": "Differentially Private Conditional Independence Testing",
3
+ "abstract": "Conditional independence (CI) tests are widely used in statistical data analysis, e.g., they are the building block of many algorithms for causal graph discovery. The goal of a CI test is to accept or reject the null hypothesis that , where . In this work, we investigate conditional independence testing under the constraint of differential privacy. We design two private CI testing procedures: one based on the generalized covariance measure of Shah and Peters (2020) and another based on the conditional randomization test of Cand\u00e8s et al. (2016) (under the model-X assumption). We provide theoretical guarantees on the performance of our tests and validate them empirically. These are the first private CI tests with rigorous theoretical guarantees that work for the general case when is continuous.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Conditional independence (CI) tests are a powerful tool in statistical data analysis, e.g., they are building blocks for graphical models, causal inference, and causal graph discovery [9 ###reference_b9###, 20 ###reference_b20###, 26 ###reference_b26###]. These analyses are frequently performed on sensitive data, such as clinical datasets and demographic datasets, where concerns for privacy are foremost. For example, in clinical trials, CI tests are used to answer fundamental questions such as \u201cAfter accounting for (conditioning on) a set of patient covariates (e.g., age or gender), does a treatment lead to better patient outcomes ?\u201d.\nFormally, given three random variables where , , and , denote the conditional independence of and given by . Our problem is that of testing\ngiven data drawn i.i.d. from a joint distribution of . CI testing is a much harder problem than (unconditional) independence testing, where the variable is omitted. Indeed, Shah and Peters [30 ###reference_b30###] showed that CI testing is a statistically impossible task for continuous random variables.111Any test that uniformly controls the type-I error (false positive rate) for all absolutely continuous triplets such that , even asymptotically, does not have nontrivial power against any alternative. Thus, techniques for independence testing do not extend to the CI testing problem.\nWhen the underlying data is sensitive and confidential, publishing statistics (such as the value of a CI independence test statistic or the corresponding p-value) can leak private information about individuals in the data. For instance, Genome-Wide Association Studies (GWAS) involve finding (causal) relations between Single\nNucleotide Polymorphisms (SNPs) and diseases. CI tests are building blocks for establishing these relations, and the existence of a link between a specific SNP\nand a rare disease\nmay indicate the presence of a minority patient. Differential privacy [13 ###reference_b13###] is a widely studied and deployed\nformal privacy guarantee for data analysis. The output distributions of a differentially private algorithm must look nearly indistinguishable for any two input datasets that differ only in the data of a single individual. In this work, we design the first differentially private (DP) CI tests that can handle continuous variables."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related Work",
15
+ "text": "Wang et al. [37 ###reference_b37###] is the only work, prior to ours, to explicitly study private CI testing, motivated by an application to causal discovery. Their tests (obtained from Kendall\u2019s and Spearman\u2019s score) are designed for categorical . While these tests could be adapted to work for continuous via clustering, in practice this method does not seem to control type-I error, as we show in Fig.\u20091 ###reference_###. The problem worsens with higher-dimensional .\nOur techniques also differ from those of Wang et al. [37 ###reference_b37###], who obtain their tests by bounding the sensitivity of non-private CI scores and adding appropriately scaled noise to the true value of the score.\nThey state two open problems: obtaining private CI tests for continuous and obtaining private tests from scores of unbounded sensitivity (as is the case with the GCM score).\nWe solve both open problems, and manage to privatize the GCM score by instead adding noise to an intermediate statistic, the residuals of fitting to and to .\nAnother line of work [31 ###reference_b31###, 18 ###reference_b18###, 27 ###reference_b27###] has utilized the \u201csubsample and aggregate\u201d framework of differential privacy [25 ###reference_b25###] to obtain private versions of existing hypothesis tests in a black-box fashion.\nIn this approach, the dataset is partitioned into smaller datasets; the non-private hypothesis test is evaluated on the smaller datasets; and finally, the results are privately aggregated. Based on this method, Kazan et al. [18 ###reference_b18###] propose a test-of-tests (ToT) framework to construct a private version of any known (non-private) hypothesis test. However, they show guarantees on the power of their test based on finite-sample guarantees of the power of the non-private hypothesis test. Since finite-sample guarantees are impossible for CI testing, their method gives no power guarantees for CI testing, and thus cannot be reliably used in practice. In addition, in Fig.\u20091 ###reference_### we compare the type-I error control of our tests with the ToT framework and show that it can fail to control type-I error.\nSmith [31 ###reference_b31###] analyzed the asymptotic properties of subsample-and-aggregate and showed that for a large family of statistics, one can obtain a corresponding DP statistic with the same asymptotic distribution as the original statistic.\nIn particular, the result of Smith [31 ###reference_b31###] can be applied to obtain a DP version of the GCM statistic. However, compared to our results on the private GCM, (a) only a weaker notion of privacy, known as approximate DP, would be guaranteed, and (b) an additional condition on the data-generating distribution would have to be introduced, to guarantee a bounded third moment of the GCM statistic.\nFinally, the test of Pe\u00f1a and Barrientos [27 ###reference_b27###] only outputs a binary accept/reject decision and not a p-value as our tests provide, and was empirically outperformed by the test of Kazan et al. [18 ###reference_b18###].\nA line of work on private independence testing has focused on privatizing the chi-squared statistic [36 ###reference_b36###, 17 ###reference_b17###, 34 ###reference_b34###, 41 ###reference_b41###, 38 ###reference_b38###, 16 ###reference_b16###, 28 ###reference_b28###]. 
These tests operate with categorical and .\nEarlier works obtained private hypothesis tests by adding noise to the histogram of the data [17 ###reference_b17###], but it was later pointed out that this approach does not provide reliable type-I error control at small sample sizes [14 ###reference_b14###]. Consequent works used numerical approaches to obtain the distribution of the noisy statistic and calculate p-values with that distribution [34 ###reference_b34###, 41 ###reference_b41###, 38 ###reference_b38###, 16 ###reference_b16###], whereas Rogers and Kifer [28 ###reference_b28###] obtain new statistics for chi-squared tests whose distribution after the privacy noise can be derived analytically. In this light, one important feature of our private GCM test is that its type-I error control can be more reliable than for the non-private GCM, even at small , as our experiments demonstrate.\nFor continuous and , Kusner et al. [21 ###reference_b21###] obtained DP versions of several dependence scores (Kendall\u2019s , Spearman\u2019s , HSIC), however, they do not provide type-I error or power guarantees. In follow-up work, Kim and Schrab [19 ###reference_b19###] obtained private versions of permutation tests, that were applied to kernel-based independence tests, such as HSIC.\nNote that CI testing is a much harder task than independence testing, and techniques for the latter do not necessarily translate to CI testing. Our work is part of the broader literature on private hypothesis testing [2 ###reference_b2###, 5 ###reference_b5###, 33 ###reference_b33###, 8 ###reference_b8###, 39 ###reference_b39###, 1 ###reference_b1###, 4 ###reference_b4###, 10 ###reference_b10###, 35 ###reference_b35###].\nA popular category of CI tests are kernel-based tests, obtained by extending the Hilbert-Schmidt independence criterion to the conditional setting [15 ###reference_b15###, 42 ###reference_b42###, 32 ###reference_b32###]. However, these tests only provide a weaker pointwise asymptotic validity guarantee. It is widely acknowledged that for a statistical test to be useful in practice, it needs to provide the stronger guarantees of either valid level at finite sample size or uniformly asymptotic level. Our private GCM test provides the latter guarantee.\nOne way of getting around the hardness result of Shah and Peters [30 ###reference_b30###] is through the model-X assumption, where the conditional distribution of is assumed to be accessible.\nTests based on this assumption, such as CRT (conditional randomization test) [6 ###reference_b6###] and CPT (conditional permutation test) [3 ###reference_b3###], provide a general framework for conditional independence testing, where one can use their test statistic of choice and exactly (non-asymptotically) control the type-I error regardless of the data dimensionality."
16
+ },
17
+ {
18
+ "section_id": "2",
19
+ "parent_section_id": null,
20
+ "section_name": "Preliminaries",
21
+ "text": "In this section, we introduce notation used in the paper as well as relevant background. In Section 2.1 ###reference_###, we introduce background on differential privacy. In Section 2.2 ###reference_### we provide background on hypothesis testing (including standard definitions of p-value, type-I error, uniform asymptotic level, power, etc.). Finally, in Section 2.3 ###reference_### we state a result of Kusner et al. [21 ###reference_b21###] used in our paper on the residuals of kernel ridge regression"
22
+ },
23
+ {
24
+ "section_id": "2.1",
25
+ "parent_section_id": "2",
26
+ "section_name": "Background on Differential Privacy",
27
+ "text": "The notion of neighboring datasets is central to differential privacy. In this work, we consider datasets of datapoints , drawn i.i.d. from a joint distribution on some domain . Let denote the universe of datasets. A dataset is a neighbor of if it can be obtained from by replacing at most one datapoint with an arbitrary entry , for some . For the purposes of CRT, where we use the distributional information about to resample additional data, we define to include the new samples (see Section 4 ###reference_###).\nA randomized algorithm Alg is -DP if for all neighboring datasets and all events in the output space of Alg, it holds\n\nwhere the probability is over the randomness of the algorithm.\nThe Laplace mechanism is a widely used framework for obtaining DP algorithms [13 ###reference_b13###].\nFor a function , its -sensitivity is defined as\nLet and be a function with -sensitivity . Let be a noise vector from the Laplace\ndistribution with scale parameter . The Laplace Mechanism that, on input and , outputs is -DP.\nDifferential privacy satisfies a post-processing property.\nIf the algorithm is -differentially private, and is any randomized function, then the algorithm is -differentially private."
28
+ },
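The Laplace mechanism summarized in this background section translates directly into code. The sketch below is illustrative only (the helper name `laplace_mechanism` and the NumPy-based implementation are ours, not the paper's): it releases f(D) with coordinate-wise Laplace noise calibrated to the ℓ1-sensitivity and the privacy budget ε.

```python
import numpy as np

def laplace_mechanism(f_value, l1_sensitivity, epsilon, rng=None):
    """Release f(D) + W, where W has i.i.d. Laplace(l1_sensitivity / epsilon)
    coordinates. This is epsilon-DP when `l1_sensitivity` upper-bounds the
    l1-sensitivity of f over neighboring datasets."""
    rng = np.random.default_rng() if rng is None else rng
    f_value = np.atleast_1d(np.asarray(f_value, dtype=float))
    scale = l1_sensitivity / epsilon
    return f_value + rng.laplace(loc=0.0, scale=scale, size=f_value.shape)
```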
29
+ {
30
+ "section_id": "2.2",
31
+ "parent_section_id": "2",
32
+ "section_name": "Background on Hypothesis Testing",
33
+ "text": "Let be the class of the joint distributions for the random variables . We say is a null distribution if . The null-hypothesis, denoted , is the class of null distributions,\nConsider a (potentially) randomized test that is run on samples from a distribution and outputs a binary decision: for rejecting the null hypothesis and for accepting the null hypothesis. The quantity , where is a null distribution, refers to the type-I error of the test, i.e., the probability that it erroneously rejects the (true) null hypothesis. Given level and the null hypothesis , we say that the test has valid level at sample size if the type-I error is bounded by , i.e.,:\nThe sequence has\nUsually, we want at least uniformly asymptotic level to hold for a test. Otherwise, for any sample size , there can be some null distribution that does not control type-I error at that sample size.\nA hypothesis test is usually derived from a statistic (such as the GCM statistic) calculated on samples drawn i.i.d from the distribution of . Having obtained a value for the statistic , the two-sided p-value is:\nThe hypothesis test with desired validity level can then be defined as\nTherefore, to obtain a test with the desired validity we need to compute the p-value.\nThe p-value is typically calculated using information about the distribution of . We say that converges uniformly over to the standard Gaussian distribution if:\nwhere is the CDF of the standard Gaussian.\nFor the GCM statistic, we are given that (under mild assumptions) converges uniformly over the null hypothesis to a standard Gaussian distribution [30 ###reference_b30###]. Thus, if we set and define the hypothesis test as in (1) ###reference_###, we obtain that has uniformly asymptotic level .\nOnce we have a test with uniformly asymptotic level, we would also like the test to correctly accept the alternate hypothesis, when this hypothesis holds. Let be the set of alternate distributions (for which ). The power of a test is the probability that it correctly rejects the null hypothesis, given that the alternate hypothesis holds. A sequence of tests has"
34
+ },
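The recipe of this subsection (compute the statistic, convert it to a two-sided p-value via the standard Gaussian CDF, and reject when the p-value is at most α) takes only a few lines. A minimal sketch, with function names that are ours rather than the paper's:

```python
from math import erf, sqrt

def two_sided_gaussian_pvalue(t_stat: float) -> float:
    """p = 2 * (1 - Phi(|T|)) for a statistic that is (asymptotically)
    standard Gaussian under the null hypothesis."""
    phi = 0.5 * (1.0 + erf(abs(t_stat) / sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

def test_decision(t_stat: float, alpha: float = 0.05) -> int:
    """Return 1 to reject the null and 0 to accept, at level alpha."""
    return int(two_sided_gaussian_pvalue(t_stat) <= alpha)
```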
35
+ {
36
+ "section_id": "2.3",
37
+ "parent_section_id": "2",
38
+ "section_name": "Residuals of Kernel Ridge Regression",
39
+ "text": "In our algorithms and experiments, we use kernel ridge regression (KRR) as a procedure for regressing and on , and rely on the following result by Kusner et al. [21 ###reference_b21###] about the sensitivity of the residuals of KRR.222One could also use other regression techniques within our private GCM and private CRT frameworks, and theoretical guarantees continue to hold if similar () bounds on the sensitivity of the residuals are true.\nLet be a dataset of datapoints\n, from the domain . Suppose that . Given a Hilbert space , let be the vector that minimizes the kernel ridge regression objective\nfor kernel with for all . Define analogously for a neighboring dataset that is obtained by replacing one datapoint in . Then and for all it holds:"
40
+ },
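To illustrate the regression step, here is one way to compute KRR residuals with scikit-learn. This is a sketch under assumptions of our own choosing (an RBF kernel, for which k(z, z) = 1, consistent with the bounded-kernel condition above); the paper's exact fitting procedure and regularization schedule may differ.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def krr_residuals(v, Z, lam=1.0, gamma=1.0):
    """Residuals of kernel ridge regression of a response v (shape (n,))
    on covariates Z (shape (n, d)). sklearn's `alpha` parameter plays the
    role of the regularization parameter lambda."""
    model = KernelRidge(kernel="rbf", alpha=lam, gamma=gamma)
    model.fit(Z, v)
    return v - model.predict(Z)

# Residuals of X on Z and of Y on Z; their products feed the GCM statistic.
# rx = krr_residuals(x, Z); ry = krr_residuals(y, Z)
```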
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Private Generalized Covariance Measure",
45
+ "text": "Here, we present our private Generalized Covariance Measure (GCM) test.\nMissing proofs are in Appendix A ###reference_###."
46
+ },
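To make the construction concrete, the following sketch combines the standard (non-private) GCM statistic of Shah and Peters with the noise-on-residual-products idea described for the private test. The sensitivity value passed in is a placeholder for the bound the paper derives (Lemma 3.4); this is not a verbatim rendering of Algorithm 1.

```python
import numpy as np

def gcm_statistic(R):
    """Normalized GCM statistic T = sqrt(n) * mean(R) / sd(R) for residual
    products R_i = (x_i - f(z_i)) * (y_i - g(z_i))."""
    R = np.asarray(R, dtype=float)
    n = len(R)
    return np.sqrt(n) * R.mean() / np.sqrt((R ** 2).mean() - R.mean() ** 2)

def priv_gcm_statistic(rx, ry, product_sensitivity, epsilon, rng=None):
    """Sketch of a private GCM: perturb each residual product with Laplace
    noise calibrated to (an upper bound on) the l1-sensitivity of the
    vector of products, then form the usual GCM statistic."""
    rng = np.random.default_rng() if rng is None else rng
    R = np.asarray(rx) * np.asarray(ry)
    noise = rng.laplace(scale=product_sensitivity / epsilon, size=R.shape)
    return gcm_statistic(R + noise)
```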
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "Private Conditional Randomized Testing",
51
+ "text": "Here, we propose a private version of the conditional randomization test (CRT), which uses access to the distribution of as a key assumption. Recall that such an assumption is useful, for example, when one has access to abundant unlabeled data . Missing proofs are in Appendix B ###reference_###."
52
+ },
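For orientation, the non-private CRT skeleton that the private test builds on is shown below; the model-X conditional sampler and the test statistic are caller-supplied. The private variant additionally privatizes the rank computation, which this sketch deliberately omits.

```python
import numpy as np

def crt_pvalue(statistic, x, y, Z, sample_x_given_z, m=100, rng=None):
    """Conditional randomization test: compare the observed statistic with
    its distribution under m fresh draws of X from the known law of X | Z.
    `sample_x_given_z(Z, rng)` must return one resampled X-column."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = statistic(x, y, Z)
    t_null = np.array([statistic(sample_x_given_z(Z, rng), y, Z)
                       for _ in range(m)])
    # Smoothed rank of the observed statistic among the resampled copies.
    return (1 + np.sum(t_null >= t_obs)) / (1 + m)
```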
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Empirical Evaluation",
57
+ "text": "We evaluate our algorithms on a real-world dataset and synthetic data. We start with the latter as it has the advantage that we know the ground-truth of whether ."
58
+ },
59
+ {
60
+ "section_id": "6",
61
+ "parent_section_id": null,
62
+ "section_name": "Concluding Remarks",
63
+ "text": "This work studies the fundamental statistical task of conditional independence testing under privacy constraints. We design the first DP conditional independence tests that support the general case of continuous variables and have strong theoretical guarantees on both statistical validity and power. Our experiments support our theoretical results and additionally demonstrate that our private tests have more robust type-I error control than their non-private counterparts.\nWe envision two straightforward generalizations of our private GCM test. First, our test can be generalized to handle multivariate and , following Shah and Peters [30 ###reference_b30###], who obtain the test statistic from the residual products of fitting each variable in and each variable in to .\nA natural extension would be to compute the same statistic on our noisy residual products.\nSecondly, following Scheidegger et al. [29 ###reference_b29###], a private version of the weighted GCM would allow the test to achieve power against a wider class of alternatives than the unweighted version.\nFinally, constructing private versions of other model-X based tests, such as the Conditional Permutation Test [3 ###reference_b3###], could be another interesting direction."
64
+ }
65
+ ],
66
+ "appendix": [
67
+ {
68
+ "section_id": "Appendix 1",
69
+ "parent_section_id": null,
70
+ "section_name": "Appendix A Proofs of Section\u00a03",
71
+ "text": "In this section, we state and prove a longer version of Theorem\u20093.2 ###reference_theorem2###. Item 1 ###reference_i1### of Theorem\u2009A.1 ###reference_theorem1### gives the pointwise asymptotic level guarantee of the private GCM, whereas Item 2 ###reference_i2### shows the more desirable uniform asymptotic level guarantee under a slightly stronger condition. Item 2 corresponds exactly to Theorem\u20093.2 ###reference_theorem2###.888The assumptions in Item 2 ###reference_i2###, Theorem\u2009A.1 ###reference_theorem1### are identical to those of Definition 3.1 ###reference_theorem1###. The assumptions in Item 1 of Theorem\u2009A.1 ###reference_theorem1### are slightly weaker than those of Definition 3.1 ###reference_theorem1###. See Section 2.2 ###reference_### for definitions of pointwise asymptotic level and uniformly asymptotic level.\nFrom a privacy perspective, the proof of Item 2 ###reference_i2### is more involved. While Shah and Peters [30 ###reference_b30###] consider the asymptotic behavior of variables (the product of the true residuals), we instead study the behavior of (the product of the true residuals with the noise random variables). Similarly, while they study the product of error terms of the fitting method, , we instead need to consider , the product of error terms with the noise variables. A key step in the proof is to show that the noise variables grow at a slower rate than the rate of decay of the error terms, with increasing sample size .\nLet be the set of distributions for that are absolutely continuous with respect to the Lebesgue measure. The null hypothesis, , is the subset of distributions for which .\nGiven , let be the joint distribution of variables where is independent of . For a set of distributions , let denote the set of distributions for all . Denote by the CDF of the standard normal distribution.\n(Type-I Error Control of Private GCM) \nLet and be known bounds on the domains of and , respectively. Let be the set of null distributions defined above. Given a dataset , let be the rescaled dataset obtained by setting and . Consider , as defined in (2 ###reference_###), for the rescaled dataset . Let for , where are constants. Then , defined in Algorithm 1 ###reference_###, satisfies:\nFor such that , and , then\nLet be a set of distributions such that and . If in addition , for some constants , then\nLet , , and . Denote by the numerator of and by the denominator. We sometimes omit from the notation for ease of presentation.\nIn this section, we state and prove a longer version of Theorem\u20093.3 ###reference_theorem3###. Item 1 ###reference_i1### of Theorem\u2009A.3 ###reference_theorem3### gives the pointwise power guarantee, whereas Item 2 ###reference_i2###shows the uniform power guarantee under a slightly stronger condition. Item 2 ###reference_i2### corresponds exactly to Theorem\u20093.3 ###reference_theorem3###.999The assumptions in Item 2, Theorem\u2009A.3 ###reference_theorem3### are identical to those of Definition 3.1 ###reference_theorem1###. The assumptions in Item 1 of Theorem\u2009A.3 ###reference_theorem3### are slightly weaker than those of Definition 3.1 ###reference_theorem1###.\nFollowing Shah and Peters [30 ###reference_b30###], to facilitate the theoretical analysis of power, we separate the model fitting step from the calculation of the residuals. We calculate and on the first half of the dataset and calculate the residuals on the second half. 
In practice, it is still advised to perform both steps on the full dataset.\n(Power of Private GCM). \nConsider the setup of Theorem\u20093.2 ###reference_theorem2###. Let be as defined in (4) ###reference_###, with the difference that and are estimated on the first half of the dataset , and are calculated on the second half. Define the \u201csignal\u201d () and \u201cnoise\u201d () of the true residuals as:\nIf for we have and , then\nLet such that and . If in addition , for some constants , then (13) ###reference_### holds over uniformly.\nNote that and . The proof is similar to that of Theorem\u2009A.1 ###reference_theorem1###, using Claim A.4 ###reference_theorem4### below from Shah and Peters [30 ###reference_b30###].\n\u220e\nUnder the assumptions listed in Theorem\u20093.3 ###reference_theorem3###, as , the following hold.\n.\n.\n.\n.\n.\nAdditionally, under the assumptions in Item 2 of Theorem\u20093.3 ###reference_theorem3###, all convergence statements above are uniform over .\nUnder the assumptions of Theorem\u2009A.3 ###reference_theorem3###, Algorithm 1 ###reference_### has asymptotic power of if .\nNext, we show that Theorem\u20093.3 ###reference_theorem3### implies that the private GCM has asymptotic power of . A similar claim and proof holds for uniformly asymptotic power.\nNote that Algorithm 1 ###reference_### has asymptotic power of if, for all , it holds as . Given , note that\nwhere the convergence statement follows from Theorem\u20093.3 ###reference_theorem3###.\nSince and is a constant, then as . Therefore , and as a result , as desired.\n\u220e\nIn this section, we prove Lemma 3.4 ###reference_theorem4### on the sensitivity of the residuals products for kernel ridge regression.\nWe then use Lemma 3.4 ###reference_theorem4### to prove Corollary A.6 ###reference_theorem6### on the type-I error and power gurantess of PrivGCM.\nSee 3.4 ###reference_theorem4###\nConsider two neighboring datasets and .\nFor , let denote the residuals of fitting a kernel ridge regression model of to and to , respectively. Suppose without loss of generality that and differ only in the last datapoint, i.e., for . Then, by Theorem\u20092.5 ###reference_theorem5###, for , we have\nFor the last datapoint we have\nFinally, note that for all , we have. The same bound holds for . Let and . This gives us that for all :\nFor we have\nFinally,\nas desired.\n\u220e\nLet and be known bounds on the domains of and , respectively. Given a dataset , let be the rescaled dataset obtained by setting and . Let PrivGCM be the algorithm which runs Algorithm 1 ###reference_### with kernel ridge regression as the fitting procedure and sensitivity bound , where is the constant from Lemma\u20093.4 ###reference_theorem4###. Algorithm PrivGCM is -differentially private.\nThe statistic , defined in Algorithm 1 ###reference_###, satisfies the following.\nLet be a family of distributions such that . If in addition , for some constants , then\nDefine and .\nLet be a family of distributions such that and . If in addition we have , for some constants , then\nwhere .\nThe fact that PrivGCM is -differentially private follows from Lemmas\u20092.3 ###reference_theorem3### and 3.4 ###reference_theorem4###. Note that if the regularization parameter is chosen adaptively based on the data, then obtaining the value of the constant by plugging in might not be -differentially private. 
This can be resolved by setting an a priori lower bound on , independent of the data, e.g., , and plugging that lower bound to obtain .\nItem 1 ###reference_i1### follows from Theorem\u20093.2 ###reference_theorem2### and Lemma\u20093.4 ###reference_theorem4###. Item 2 ###reference_i2### follows from Theorem\u20093.3 ###reference_theorem3### and Lemma\u20093.4 ###reference_theorem4###\n\u220e\nLet be sequences of random variables. If converges in distribution to and converges in probability to a constant , then (1) , (2) , and (3) .\nLet be a family of distributions for a random variable such that for all it holds , , and for some . Let be i.i.d copies of . For , define . Then\nLet be a family of distributions for a random variable such that for all it holds and for some . Let be i.i.d copies of . For , define . Then for all it holds\nLet be a family of distributions that determines the law of sequences and of random variables.\nIf converges uniformly over to , and , then converges uniformly over to .\nIf converges uniformly over to , and , then converges uniformly over to .\nIf for some and , then .\nIf and , then .\nLet be i.i.d copies of . Let . Then .\nIf , then . It is a known fact that , where is the harmonic number . The claim follows from the fact that .\n\u220e\nLet and be random variables and . Then .\nLet . Then for we have .\nThis claim follows from the fact that .\n\u220e"
72
+ },
73
+ {
74
+ "section_id": "Appendix 2",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix B Proofs of Section\u00a04",
77
+ "text": "In this section, we collect all missing proofs from Section 4 ###reference_###.\nSee 4.3 ###reference_theorem3###\nFix . First, we bound the sensitivity of . Suppose by contradiction that . Consider the case when . Then .\nLet . Define similarly. Then, for all , we have\nwhere the first inequality holds since the values have sensitivity at most , the second inequality holds from , and the last inequality holds from our assumption by contradiction.\nThus , and as a result . Moreover, for such that we have that since it does not satisfy (14) ###reference_###. Therefore , a contradiction.\nFor the case when we obtain a contradiction by a symmetric argument. This concludes the proof on the sensitivity of . Next, we bound the sensitivity of the score function. We have,\nWe just showed that . Since the queries have sensitivity at most , we also have . We obtain that for all neighboring datasets .\n\u220e\nSee 4.5 ###reference_theorem5###\nSuppose that and differ in the last row. In the following, we assume that is fixed. To ease notation, we remove the superscript from all and . Since we know exactly, we have for . Then for we have\nIf , then and , so that . For , we have by the triangle inequality and since .\nLet and . Turning to the residuals of fitting to , by the same argument as in Lemma\u20093.4 ###reference_theorem4### we have, for all ,\nFor the last datapoint it holds\nAdditionally, for all .\nWe can now bound the sensitivity of the residual products . For , we have\nFor we have\nFinally,\n\u220e\nSee 4.7 ###reference_theorem7###\nWe first show that PrivCRT is -differentially private. The scores have sensitivity at most by Lemma\u20094.5 ###reference_theorem5###. Therefore, the scores have sensitivity at most by Lemma\u20094.3 ###reference_theorem3###. Finally, by Theorem\u20094.1 ###reference_theorem1### we have that outputting is -DP. Therefore, PrivCRT is -DP. Note that if the regularization parameter is chosen adaptively based on the data, then obtaining the value of the constant by plugging in might not be -differentially private. This can be resolved by setting an a priori lower bound on , independent of the data, e.g., , and plugging that lower bound to obtain .\nNext, we analyze the accuracy of PrivCRT. Let be the true rank of amongst the statistics , sorted in decreasing order. Then . Note that . By Theorem\u20094.1 ###reference_theorem1###, with probability at least , it holds\nAs a result,\nLet . The rank of cannot differ from by more than , since is within distance of . Therefore, , and since , we obtain the desired result.\n\u220e"
78
+ },
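The privacy argument sketched above bounds the sensitivity of the CRT scores and then releases a noisy rank, whose accuracy controls the error of the reported p-value. The following is a simplified, hypothetical rendering of that pattern (the paper's actual mechanism and calibration differ in detail):

```python
import numpy as np

def private_rank_pvalue(t_obs, t_null, rank_sensitivity, epsilon, rng=None):
    """Hypothetical sketch: release a Laplace-noised rank of the observed
    statistic among m resampled statistics; the p-value is post-processing
    of the noisy rank and therefore inherits the DP guarantee.
    `rank_sensitivity` must bound how much the rank can change between
    neighboring datasets."""
    rng = np.random.default_rng() if rng is None else rng
    t_null = np.asarray(t_null, dtype=float)
    rank = 1 + np.sum(t_null >= t_obs)          # non-private rank
    noisy_rank = rank + rng.laplace(scale=rank_sensitivity / epsilon)
    m = len(t_null)
    return float(np.clip(noisy_rank / (1 + m), 0.0, 1.0))
```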
79
+ {
80
+ "section_id": "Appendix 3",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix C Additional Experimental Details and Results",
83
+ "text": ""
84
+ }
85
+ ],
86
+ "tables": {},
87
+ "image_paths": {
88
+ "1": {
89
+ "figure_path": "2306.06721v3_figure_1.png",
90
+ "caption": "Figure 1: Type-I error control of PrivToT, private Kendall, PrivGCM, and PrivCRT (under the null): the first two fail to control Type-I error.",
91
+ "url": "http://arxiv.org/html/2306.06721v3/x1.png"
92
+ },
93
+ "2(a)": {
94
+ "figure_path": "2306.06721v3_figure_2(a).png",
95
+ "caption": "Figure 2: Comparison of the power of private and nonprivate GCM tests as the dependence strength \u03b2\ud835\udefd\\betaitalic_\u03b2 increases. At d=5\ud835\udc515d=5italic_d = 5, the (nonprivate) GCM fails to provide type-I error control when \u03b2=0\ud835\udefd0\\beta=0italic_\u03b2 = 0.",
96
+ "url": "http://arxiv.org/html/2306.06721v3/x2.png"
97
+ },
98
+ "2(b)": {
99
+ "figure_path": "2306.06721v3_figure_2(b).png",
100
+ "caption": "Figure 2: Comparison of the power of private and nonprivate GCM tests as the dependence strength \u03b2\ud835\udefd\\betaitalic_\u03b2 increases. At d=5\ud835\udc515d=5italic_d = 5, the (nonprivate) GCM fails to provide type-I error control when \u03b2=0\ud835\udefd0\\beta=0italic_\u03b2 = 0.",
101
+ "url": "http://arxiv.org/html/2306.06721v3/x3.png"
102
+ },
103
+ "3(a)": {
104
+ "figure_path": "2306.06721v3_figure_3(a).png",
105
+ "caption": "Figure 4: Comparing power of private and nonprivate CRT tests as we increase dependence \u03b2\ud835\udefd\\betaitalic_\u03b2.",
106
+ "url": "http://arxiv.org/html/2306.06721v3/x4.png"
107
+ },
108
+ "3(b)": {
109
+ "figure_path": "2306.06721v3_figure_3(b).png",
110
+ "caption": "Figure 4: Comparing power of private and nonprivate CRT tests as we increase dependence \u03b2\ud835\udefd\\betaitalic_\u03b2.",
111
+ "url": "http://arxiv.org/html/2306.06721v3/x5.png"
112
+ },
113
+ "4": {
114
+ "figure_path": "2306.06721v3_figure_4.png",
115
+ "caption": "Figure 6: Power of PrivCRT and PrivGCM versus privacy \u03b5\ud835\udf00\\varepsilonitalic_\u03b5.",
116
+ "url": "http://arxiv.org/html/2306.06721v3/x6.png"
117
+ },
118
+ "5": {
119
+ "figure_path": "2306.06721v3_figure_5.png",
120
+ "caption": "Figure 7: Power of the non-private GCM and PrivGCM on the \u201cConcrete Compressive Strength\u201d dataset. The power of PrivGCM tends to 1111 with increasing sample size.",
121
+ "url": "http://arxiv.org/html/2306.06721v3/x7.png"
122
+ },
123
+ "6": {
124
+ "figure_path": "2306.06721v3_figure_6.png",
125
+ "caption": "Figure 8: Values of X\ud835\udc4bXitalic_X and Z\ud835\udc4dZitalic_Z (after rescaling) of one sampled dataset from our simulations, with n=1000\ud835\udc5b1000n=1000italic_n = 1000, \u03b2=0\ud835\udefd0\\beta=0italic_\u03b2 = 0, s=2\ud835\udc602s=2italic_s = 2, d=1\ud835\udc511d=1italic_d = 1. A kernel ridge regression model is fitted to the data. The model we fit closely matches \ud835\udd3c\u2062[X|Z]\ud835\udd3cdelimited-[]conditional\ud835\udc4b\ud835\udc4d\\mathbb{E}[X|Z]blackboard_E [ italic_X | italic_Z ].",
126
+ "url": "http://arxiv.org/html/2306.06721v3/x8.png"
127
+ },
128
+ "7": {
129
+ "figure_path": "2306.06721v3_figure_7.png",
130
+ "caption": "Figure 9: Effect on the power of PrivCRT with increasing m\ud835\udc5amitalic_m.",
131
+ "url": "http://arxiv.org/html/2306.06721v3/x9.png"
132
+ },
133
+ "8": {
134
+ "figure_path": "2306.06721v3_figure_8.png",
135
+ "caption": "Figure 10: Distribution of p-values output by PrivCRT for different dependence strengths \u03b2\ud835\udefd\\betaitalic_\u03b2 under the setup in Section 5. Under the null, i.e., \u03b2=0\ud835\udefd0\\beta=0italic_\u03b2 = 0, the p-values are uniformly distributed as desired.",
136
+ "url": "http://arxiv.org/html/2306.06721v3/x10.png"
137
+ }
138
+ },
139
+ "validation": true,
140
+ "references": [
141
+ {
142
+ "1": {
143
+ "title": "Differentially private uniformly most powerful tests for binomial\ndata.",
144
+ "author": "Jordan Awan and Aleksandra B. Slavkovic.",
145
+ "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), pages 4212\u20134222, 2018.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "2": {
151
+ "title": "Differentially private significance tests for regression\ncoefficients.",
152
+ "author": "Andr\u00e9s F. Barrientos, Jerome P. Reiter, Ashwin Machanavajjhala, and Yan\nChen.",
153
+ "venue": "Journal of Computational and Graphical Statistics,\n28:440 \u2013 453, 2017.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "3": {
159
+ "title": "The conditional permutation test for independence while controlling\nfor confounders.",
160
+ "author": "Thomas B. Berrett, Yi Wang, Rina Foygel Barber, and Richard J. Samworth.",
161
+ "venue": "Journal of the Royal Statistical Society: Series B (Statistical\nMethodology), 82, 2019.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "4": {
167
+ "title": "Impossibility of differentially private universally optimal\nmechanisms.",
168
+ "author": "Hai Brenner and Kobbi Nissim.",
169
+ "venue": "SIAM Journal on Computing (SICOMP), 43(5):1513\u20131540, 2014.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "5": {
175
+ "title": "Differentially private ANOVA testing.",
176
+ "author": "Zachary Campbell, Andrew Bray, Anna M. Ritz, and Adam Groce.",
177
+ "venue": "In International Conference on Data Intelligence and Security,\npages 281\u2013285, 2018.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "6": {
183
+ "title": "Panning for gold: \u2018model\u2010X\u2019 knockoffs for high dimensional\ncontrolled variable selection.",
184
+ "author": "Emmanuel J. Cand\u00e8s, Yingying Fan, Lucas Janson, and Jinchi Lv.",
185
+ "venue": "Journal of the Royal Statistical Society: Series B (Statistical\nMethodology), 80, 2016.",
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "7": {
191
+ "title": "Double/debiased machine learning for treatment and structural\nparameters.",
192
+ "author": "Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian\nHansen, Whitney Newey, and James Robins.",
193
+ "venue": "The Econometrics Journal, 21:C1\u2013C68, 2018.",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "8": {
199
+ "title": "Differentially private nonparametric hypothesis testing.",
200
+ "author": "Simon Couch, Zeki Kazan, Kaiyan Shi, Andrew Bray, and Adam Groce.",
201
+ "venue": "In Proceedings of the ACM Conference on Computer and\nCommunications Security, CCS, pages 737\u2013751, 2019.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "9": {
207
+ "title": "Conditional independence in statistical theory.",
208
+ "author": "A Philip Dawid.",
209
+ "venue": "Journal of the Royal Statistical Society: Series B\n(Methodological), 41(1):1\u201315, 1979.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "10": {
215
+ "title": "The Permute-and-Flip mechanism is identical to\nReport-Noisy-Max with exponential noise.",
216
+ "author": "Zeyu Ding, Daniel Kifer, Sayed M. Saghaian N. E., Thomas Steinke, Yuxin Wang,\nYingtai Xiao, and Danfeng Zhang.",
217
+ "venue": "arXiv 2105.07260, 2021.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "11": {
223
+ "title": "The algorithmic foundations of differential privacy.",
224
+ "author": "Cynthia Dwork and Aaron Roth.",
225
+ "venue": "Found. Trends Theor. Comput. Sci., 9(3-4):211\u2013407, 2014.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "12": {
231
+ "title": "On the complexity of differentially private data release: efficient\nalgorithms and hardness results.",
232
+ "author": "Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum, and Salil P. Vadhan.",
233
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\npages 381\u2013390, 2009.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "13": {
239
+ "title": "Calibrating noise to sensitivity in private data analysis.",
240
+ "author": "Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith.",
241
+ "venue": "Journal of Privacy and Confidentiality, 7(3):17\u201351, 2016.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "14": {
247
+ "title": "Differential privacy and the risk-utility tradeoff for\nmulti-dimensional contingency tables.",
248
+ "author": "Stephen E. Fienberg, Alessandro Rinaldo, and Xiaolin Yang.",
249
+ "venue": "In Privacy in Statistical Databases, volume 6344 of\nLecture Notes in Computer Science, pages 187\u2013199. Springer, 2010.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "15": {
255
+ "title": "Kernel measures of conditional dependence.",
256
+ "author": "Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, and Bernhard Sch\u00f6lkopf.",
257
+ "venue": "Advances in Neural Information Processing Systems (NeurIPS),\n20, 2007.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "16": {
263
+ "title": "Differentially private Chi-Squared hypothesis testing: Goodness\nof fit and independence testing.",
264
+ "author": "Marco Gaboardi, Hyun-Woo Lim, Ryan M. Rogers, and Salil P. Vadhan.",
265
+ "venue": "In Proceedings, International Conference on Machine Learning\n(ICML), volume 48, 2016.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "17": {
271
+ "title": "Privacy-preserving data exploration in genome-wide association\nstudies.",
272
+ "author": "Aaron Johnson and Vitaly Shmatikov.",
273
+ "venue": "In ACM SIGKDD International Conference on Knowledge Discovery\nand Data Mining, pages 1079\u20131087, 2013.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "18": {
279
+ "title": "The test of tests: A framework for differentially private\nhypothesis testing.",
280
+ "author": "Zeki Kazan, Kaiyan Shi, Adam Groce, and Andrew P. Bray.",
281
+ "venue": "In Proceedings, International Conference on Machine Learning\n(ICML), volume 202, pages 16131\u201316151, 2023.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "19": {
287
+ "title": "Differentially private permutation tests: Applications to kernel\nmethods.",
288
+ "author": "Ilmun Kim and Antonin Schrab.",
289
+ "venue": "CoRR, abs/2310.19043, 2023.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "20": {
295
+ "title": "Probabilistic graphical models: principles and techniques.",
296
+ "author": "Daphne Koller and Nir Friedman.",
297
+ "venue": "MIT Press, 2009.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "21": {
303
+ "title": "Private causal inference.",
304
+ "author": "Matt J. Kusner, Yu Sun, Karthik Sridharan, and Kilian Q. Weinberger.",
305
+ "venue": "In Proceedings, International Conference on Artificial\nIntelligence and Statistics (AISTATS), volume 51, pages 1308\u20131317, 2016.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "22": {
311
+ "title": "Just interpolate: Kernel \u201cridgeless\u201d regression can generalize.",
312
+ "author": "Tengyuan Liang and Alexander Rakhlin.",
313
+ "venue": "The Annals of Statistics, 48(3):1329 \u2013\n1347, 2020.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "23": {
319
+ "title": "Private selection from private candidates.",
320
+ "author": "Jingcheng Liu and Kunal Talwar.",
321
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\npages 298\u2013309. ACM, 2019.",
322
+ "url": null
323
+ }
324
+ },
325
+ {
326
+ "24": {
327
+ "title": "Permute-and-flip: A new mechanism for differentially private\nselection.",
328
+ "author": "Ryan McKenna and Daniel Sheldon.",
329
+ "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), 2020.",
330
+ "url": null
331
+ }
332
+ },
333
+ {
334
+ "25": {
335
+ "title": "Smooth sensitivity and sampling in private data analysis.",
336
+ "author": "Kobbi Nissim, Sofya Raskhodnikova, and Adam D. Smith.",
337
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\npages 75\u201384. ACM, 2007.",
338
+ "url": null
339
+ }
340
+ },
341
+ {
342
+ "26": {
343
+ "title": "Models, reasoning and inference.",
344
+ "author": "Judea Pearl.",
345
+ "venue": "Cambridge, UK: Cambridge University Press, 19(2),\n2000.",
346
+ "url": null
347
+ }
348
+ },
349
+ {
350
+ "27": {
351
+ "title": "Differentially private hypothesis testing with the subsampled and\naggregated randomized response mechanism.",
352
+ "author": "V\u00edctor Pe\u00f1a and Andr\u00e9s F. Barrientos.",
353
+ "venue": "Statistica Sinica, 2022.",
354
+ "url": null
355
+ }
356
+ },
357
+ {
358
+ "28": {
359
+ "title": "A new class of private chi-square hypothesis tests.",
360
+ "author": "Ryan Rogers and Daniel Kifer.",
361
+ "venue": "In Proceedings, International Conference on Artificial\nIntelligence and Statistics (AISTATS), volume 54, pages 991\u20131000, 2017.",
362
+ "url": null
363
+ }
364
+ },
365
+ {
366
+ "29": {
367
+ "title": "The weighted generalised covariance measure.",
368
+ "author": "Cyrill Scheidegger, Julia H\u00f6rrmann, and Peter B\u00fchlmann.",
369
+ "venue": "Journal of Machine Learning Research, 23(273):1\u201368, 2022.",
370
+ "url": null
371
+ }
372
+ },
373
+ {
374
+ "30": {
375
+ "title": "The hardness of conditional independence testing and the generalised\ncovariance measure.",
376
+ "author": "Rajen D. Shah and Jonas Peters.",
377
+ "venue": "The Annals of Statistics, 48(3), 2020.",
378
+ "url": null
379
+ }
380
+ },
381
+ {
382
+ "31": {
383
+ "title": "Privacy-preserving statistical estimation with optimal convergence\nrates.",
384
+ "author": "Adam Smith.",
385
+ "venue": "In Proceedings, ACM Symposium on Theory of Computing (STOC),\npages 813\u2013822, 2011.",
386
+ "url": null
387
+ }
388
+ },
389
+ {
390
+ "32": {
391
+ "title": "Approximate kernel-based conditional independence tests for fast\nnon-parametric causal discovery.",
392
+ "author": "Eric V Strobl, Kun Zhang, and Shyam Visweswaran.",
393
+ "venue": "Journal of Causal Inference, 7(1), 2019.",
394
+ "url": null
395
+ }
396
+ },
397
+ {
398
+ "33": {
399
+ "title": "Improved differentially private analysis of variance.",
400
+ "author": "Marika Swanberg, Ira Globus-Harris, Iris Griffith, Anna M. Ritz, Adam Groce,\nand Andrew Bray.",
401
+ "venue": "Proc. Priv. Enhancing Technol., 2019(3):310\u2013330, 2019.",
402
+ "url": null
403
+ }
404
+ },
405
+ {
406
+ "34": {
407
+ "title": "Privacy-preserving data sharing for genome-wide association studies.",
408
+ "author": "Caroline Uhler, Aleksandra B. Slavkovic, and Stephen E. Fienberg.",
409
+ "venue": "J. Priv. Confidentiality, 5(1), 2013.",
410
+ "url": null
411
+ }
412
+ },
413
+ {
414
+ "35": {
415
+ "title": "Private independence testing across two parties.",
416
+ "author": "Praneeth Vepakomma, Mohammad Mohammadi Amiri, Cl\u00e9ment L. Canonne, Ramesh\nRaskar, and Alex Pentland.",
417
+ "venue": "CoRR, abs/2207.03652, 2022.",
418
+ "url": null
419
+ }
420
+ },
421
+ {
422
+ "36": {
423
+ "title": "Differential privacy for clinical trial data: Preliminary\nevaluations.",
424
+ "author": "Duy Vu and Aleksandra B. Slavkovic.",
425
+ "venue": "In ICDM Workshops, pages 138\u2013143, 2009.",
426
+ "url": null
427
+ }
428
+ },
429
+ {
430
+ "37": {
431
+ "title": "Towards practical differentially private causal graph discovery.",
432
+ "author": "Lun Wang, Qi Pang, and Dawn Song.",
433
+ "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), 2020.",
434
+ "url": null
435
+ }
436
+ },
437
+ {
438
+ "38": {
439
+ "title": "Revisiting differentially private hypothesis tests for categorical\ndata.",
440
+ "author": "Yue Wang, Jaewoo Lee, and Daniel Kifer.",
441
+ "venue": "arXiv 1511.03376, 2015.",
442
+ "url": null
443
+ }
444
+ },
445
+ {
446
+ "39": {
447
+ "title": "Statistical approximating distributions under differential privacy.",
448
+ "author": "Yue Wang, Daniel Kifer, Jaewoo Lee, and Vishesh Karwa.",
449
+ "venue": "J. Priv. Confidentiality, 8(1), 2018.",
450
+ "url": null
451
+ }
452
+ },
453
+ {
454
+ "40": {
455
+ "title": "Concrete Compressive Strength.",
456
+ "author": "I-Cheng Yeh.",
457
+ "venue": "UCI Machine Learning Repository, 2007.",
458
+ "url": null
459
+ }
460
+ },
461
+ {
462
+ "41": {
463
+ "title": "Scalable privacy-preserving data sharing methodology for genome-wide\nassociation studies.",
464
+ "author": "Fei Yu, Stephen E. Fienberg, Aleksandra B. Slavkovic, and Caroline Uhler.",
465
+ "venue": "J. Biomed. Informatics, 50:133\u2013141, 2014.",
466
+ "url": null
467
+ }
468
+ },
469
+ {
470
+ "42": {
471
+ "title": "Kernel-based conditional independence test and application in causal\ndiscovery.",
472
+ "author": "Kun Zhang, Jonas Peters, Dominik Janzing, and Bernhard Sch\u00f6lkopf.",
473
+ "venue": "In Proceedings of the Conference on Uncertainty in Artificial\nIntelligence (UAI), pages 804\u2013813, 2011.",
474
+ "url": null
475
+ }
476
+ }
477
+ ],
478
+ "url": "http://arxiv.org/html/2306.06721v3"
479
+ }
20240322/2306.13185v2.json ADDED
@@ -0,0 +1,372 @@
1
+ {
2
+ "title": "An Agnostic View on the Cost of Overfitting in (Kernel) Ridge Regression",
3
+ "abstract": "We study the cost of overfitting in noisy kernel ridge regression (KRR), which we define as the ratio between the test error of the interpolating ridgeless model and the test error of the optimally-tuned model. We take an \u201cagnostic\u201d view in the following sense: we consider the cost as a function of sample size for any target function, even if the sample size is not large enough for consistency or the target is outside the RKHS. We analyze the cost of overfitting under a Gaussian universality ansatz using recently derived (non-rigorous) risk estimates in terms of the task eigenstructure. Our analysis provides a more refined characterization of benign, tempered and catastrophic overfitting (cf. Mallinar et al., 2022).",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The ability of large neural networks to generalize, even when they overfit to noisy training data (Neyshabur et al., 2015 ###reference_b21###; Zhang et al., 2017 ###reference_b32###; Belkin et al., 2019 ###reference_b3###), has significantly challenged our understanding of the effect of overfitting. A starting point for understanding overfitting in deep learning is to understand the issue in kernel methods, possibly viewing deep learning through their kernel approximation (Jacot et al., 2020 ###reference_b11###).\nIndeed, there is much progress in understanding the effect of overfitting in kernel ridge regression and ridge regression with Gaussian data. It has been shown that the test error of the minimal norm interpolant can approach Bayes optimality and so overfitting is \u201cbenign\u201d (Bartlett et al., 2020 ###reference_b2###; Muthukumar et al., 2020 ###reference_b20###; Koehler et al., 2021 ###reference_b12###; Wang et al., 2021 ###reference_b29###; Donhauser et al., 2022 ###reference_b6###). In other situations such as Laplace kernels and ReLU neural tangent kernels, the interpolating solution is not consistent but also not \u201ccatastrophically\u201d bad, which falls into an intermediate regime called \u201ctempered\u201d overfitting (Mallinar et al., 2022 ###reference_b14###).\nHowever, the perspective taken in this line of work differs from the agnostic view of statistical learning. These results typically focus on asymptotic behavior and consistency of a well-specified model, asking how the limiting behavior of interpolating learning rules compares to the Bayes error (the smallest risk attainable by any measurable function of the feature ). In contrast, the agnostic PAC model (Vapnik & Chervonenkis, 1971 ###reference_b28###; Haussler, 1992 ###reference_b10###; Shalev-Shwartz & Ben-David, 2014 ###reference_b24###) does not require any assumption on the conditional distribution of the label . In particular, the conditional expectation is not necessarily a member of the hypothesis class and it does not need to have small Hilbert norm in the Reproducing Kernel Hilbert Space (RKHS). Instead, the learning rule is asked to find a model whose test risk can compete with the smallest risk within the hypothesis class, which can be quite high if\nno predictor in the hypothesis class can\nattain the Bayes error. In these situations, the agnostic PAC model can still provide a meaningful learning guarantee.\nFurthermore, we would like to isolate the effect of overfitting (i.e. underregularizing, and choosing to use a predictor that fits the noise, instead of compromising on empirical fit and choosing a predictor that balances empirical fit with complexity or norm) from the difficulty of the learning problem and appropriateness of the model irrespective of overfitting (i.e. even if we were to not overfit and instead optimally balance empirical fit and norm, as in ridge regression). A view which considers only the risk of the overfitting rule (e.g. Mallinar et al., 2022 ###reference_b14###) confounds these two issues. Instead, we would like to study the direct effect of overfitting: how much does it hurt to overfit and use ridgeless regression compared to optimally tuned ridge regression.\nIn this paper, we take an agnostic view to the direct effect of overfitting in (kernel) ridge regression. 
Rather than comparing the asymptotic risk of the interpolating ridgeless model to the Bayes error, we compare it to the best ridge model in terms of population error as a function of sample size, and we measure the cost of overfitting as a ratio. We show that the cost of overfitting can be bounded by using only the sample size and the effective ranks of the covariance, even when the risk of the optimally-tuned model is high relative to the Bayes error. Our analysis applies to any target function (including ones with unbounded RKHS norm)\nand\nrecovers the matching upper and lower bounds from Bartlett et al. (2020 ###reference_b2###), which allows us to have a more refined understanding of the benign overfitting. In addition to benign overfitting, we show that the amount of \u201ctempered\u201d overfitting can also be understood using the cost of interpolation, and we derive the necessary and sufficient condition for \u201ccatastrophic\u201d overfitting (Mallinar et al., 2022 ###reference_b14###). Combining these results leads to a refined notion of benign, tempered, and catastrophic overfitting (focusing on the difference versus the optimally tuned predictor), and a characterization as a function of sample size based on computing the effective rank at some index . We further apply our results to the setting of inner product kernels in the polynomial regime (Ghorbani et al., 2021 ###reference_b8###; Mei et al., 2022 ###reference_b15###; Misiakiewicz, 2022 ###reference_b19###) and recover the multiple descent curve."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Problem Formulation",
15
+ "text": "Let be an abstract input space and a positive semi-definite kernel111i.e.: (i) , and (ii) , it holds that .."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Bi-criterion Optimization in KRR",
21
+ "text": "Given a data set consisting of sampled from some unknown joint distribution , in order to find a predictor with good test error , we solve the bi-criterion optimization:\nwhere is the Hilbert norm in the RKHS and the test error and training error (in square loss) of a predictor is given by\nThe Pareto-frontier of the bi-criterion problem (1 ###reference_###) corresponds to the regularization path given by the sequence of problems:\nBy the representation theorem, has the explicit closed form:\nwhere are given by , and . The interpolating \u201cridgeless\u201d solution (minimal norm interpolant) is the extreme Pareto point and obtained by taking :\nEven though has the minimal norm among all interpolants, the norm of will still be very large because it needs to memorize all the noisy training labels. In this paper, we are particularly interested in the generalization performance of the ridgeless solution , which minimizes the training error in the bi-criterion problem (1 ###reference_###) too much."
22
+ },
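To make the closed-form expressions above concrete, here is a minimal numerical sketch of the ridge and ridgeless KRR solutions; the placement of the sample size in the regularization is our normalization assumption, since the paper's display equations are not reproduced here:

```python
import numpy as np

def krr_coefficients(K, y, delta):
    """Ridge coefficients: solve (K + n*delta*I) alpha = y, so f(x) = sum_i alpha_i k(x_i, x)."""
    n = len(y)
    return np.linalg.solve(K + n * delta * np.eye(n), y)

def ridgeless_coefficients(K, y):
    """Minimal-norm interpolant: the delta -> 0 endpoint of the regularization path."""
    return np.linalg.pinv(K) @ y
```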
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Mercer\u2019s Decomposition",
27
+ "text": "Though the setting for KRR is very generic, we can understand it as (linear) ridge regression.\nBy Mercer\u2019s theorem (Mercer, 1909 ###reference_b17###), the kernel admits the decomposition\nwhere form a complete orthonormal basis satisfying if and 0 otherwise, and the expectation is taken with respect to the marginal distribution of given by . For example, if has finite cardinality and is uniformly distributed over , then (3 ###reference_###) can be found by the spectral decomposition of the matrix given by When is uniformly distributed over the sphere in or the boolean hypercube , then can be taken to be the spherical harmonics or the Fourier-Walsh (parity) basis. In the case that is the Gaussian kernel or polynomial kernel, the eigenvalues has closed-form expression in terms of the modified Bessel function or the Gamma function (Minh et al., 2006 ###reference_b18###).\nTherefore, instead of viewing the feature as an element of , we can consider the potentially infinite-dimensional real-valued vector and denote the design matrix . Then we can write and understand the predictor in (2 ###reference_###) as\nwhere is simply the ridge regression estimate with respect to the data set . For a predictor of the form , its Hilbert norm is given by .\nThe Bayes-optimal target function is .\nWe may expand this function in the kernel eigenbasis as , where are eigencoefficients.\nLet the noise level be ."
28
+ },
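For the finite, uniformly-distributed input space mentioned above, the Mercer eigenvalues and eigenfunctions come from diagonalizing the kernel matrix over the whole domain. A sketch with an illustrative Laplace kernel on a grid (domain, kernel, and normalization are our assumptions):

```python
import numpy as np

Xs = np.linspace(-1, 1, 200)[:, None]      # the whole (finite) input space
K = np.exp(-np.abs(Xs - Xs.T))             # e.g., a Laplace kernel
lams, Psi = np.linalg.eigh(K / len(Xs))    # eigendecompose K_ij / |X|
lams, Psi = lams[::-1], Psi[:, ::-1] * np.sqrt(len(Xs))  # descending order; E[psi^2] = 1
```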
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Closed-form Risk Estimate for (Kernel) Ridge Regression",
33
+ "text": "A great number of recent theoretical works have converged on a powerful set of closed-form equations which estimate the test risk of KRR in terms of task eigenstructure (Hastie et al., 2019 ###reference_b9###; Wu & Xu, 2020 ###reference_b31###; Jacot et al., 2020 ###reference_b11###; Canatar et al., 2021 ###reference_b4###; Loureiro et al., 2021 ###reference_b13###; Mel & Ganguli, 2021 ###reference_b16###; Richards et al., 2021 ###reference_b23###).\nWe shall use the risk estimate from these works as our starting point.\nThese equations rely on (some variant of) the following Gaussian design ansatz:\nWhen sampling , the eigenfunctions are either Gaussian in the sense that ,\nor else we have Gaussian universality in the sense that the expected test risk is unchanged if we replace\n with , where is Gaussian in this manner.\nRemarkably, 1 ###reference_umption1### appears to hold even for many real datasets: predictions computed for Gaussian design agree excellently with kernel regression experiments with real data (Canatar et al., 2021 ###reference_b4###; Simon et al., 2021 ###reference_b25###; Wei et al., 2022 ###reference_b30###).\nWe will take 1 ###reference_umption1### henceforth.\nWe now state the \u201comniscient risk estimate\u201d presented by this collection of works.222\nWe adopt the notation of Simon et al. (2021 ###reference_b25###), but the risk estimates of all mentioned works are equivalent.\nWe take the term \u201comniscient risk estimate\u201d from Wei et al. (2022 ###reference_b30###).\n\nFirst, let the effective regularization constant be the unique nonnegative solution to\nUsing , we can define\nwhere we refer to as the learnability of mode and as the overfitting coefficient.\nThe expected test risk over datasets is then given approximately by\nThe \u201c\u201d in (6 ###reference_###) can be given several meanings.\nFirstly, it becomes an equivalence in an appropriate asymptotic limit in which and the number of eigenmodes in a given eigenvalue range both grow proportionally large (Hastie et al., 2019 ###reference_b9###; Bach, 2023 ###reference_b1###).\nSecondly, with fixed task eigenstructure, the error incurred can be bounded by a decaying function of (Cheng & Montanari, 2022 ###reference_b5###).\nThirdly, numerical experiments attest that the error is small even at quite modest (Canatar et al., 2021 ###reference_b4###; Simon et al., 2021 ###reference_b25###).\nFor the rest of this paper, we will simply treat it as an equivalence, formally proving facts about the omniscient risk estimate .\nThus, our results follow by analyzing the expression from (6 ###reference_###)."
34
+ },
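Since the display equations are not reproduced above, the following sketch implements the omniscient risk estimate as stated in the eigenlearning framework of Simon et al. (2021), whose notation the text adopts; treat the exact conventions below as our reading of that framework, with all numerical choices illustrative:

```python
import numpy as np
from scipy.optimize import brentq

def effective_kappa(lams, n, delta=0.0):
    """Solve n = sum_i lam_i/(lam_i + kappa) + delta/kappa for kappa > 0.
    Assumes more (nonzero) modes than samples when delta = 0."""
    f = lambda k: np.sum(lams / (lams + k)) + delta / k - n
    return brentq(f, 1e-12, 1e12)

def omniscient_risk(lams, v2, noise2, n, delta=0.0):
    """Expected test risk E0 * (sum_i (1 - L_i)^2 v_i^2 + noise2), where
    L_i = lam_i/(lam_i + kappa) are learnabilities and E0 = n/(n - sum_i L_i^2)."""
    kappa = effective_kappa(lams, n, delta)
    L = lams / (lams + kappa)
    E0 = n / (n - np.sum(L ** 2))
    return E0 * (np.sum((1.0 - L) ** 2 * v2) + noise2)
```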
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Cost of Overfitting",
39
+ "text": "The sensible and traditional approach to learning using a complexity penalty, such as the Hilbert norm , is to use a Pareto point (point on the regularization path) of the bi-criteria problem (1 ###reference_###) that minimizes some balanced combination of the empirical risk and penalty (the \u201cstructural risk\u201d) so as to ensure small population risk. Assumptions about the problem can help us choose which Pareto optimal point, i.e. what value of the tradeoff parameter , to use. Simpler and safer is to choose this through validation: calculate the Pareto frontier (aka regularization path) on half the training data set, and choose among these Pareto points by minimizing the \u201cvalidation error\u201d on the held-out half of the training set. Here we do not get into these details, and simply compare to the best Pareto point:\nAlthough we cannot find exactly empirically, it is useful as an oracle, and studying the gap versus this ideal Pareto point provides an upper bound on the gap versus any possible Pareto point (i.e. with any amount of \u201cideal\u201d regularization). And in practice, as well as theoretically, a validation approach as described above will behave very similar to . We therefore define the cost of overfitting as the (multiplicative) gap between the interpolating predictor and the optimaly regularized :\nGiven any data distribution over and sample size , we define the cost of overfitting as\nIt is possible to directly analyze and (or their predictions based on (6 ###reference_###)) in order to study the cost of overfitting. However, any bound on or will necessarily depend on the target function. Instead, we show that there is a much simpler argument to bound the cost of overfitting.\nConsider defined in (5 ###reference_###) with , then it holds that\nObserve that\nwhere we use the fact that decreases as decreases, and decreases as decreases. The proof concludes by observing .\n\u220e\nIndeed, (4 ###reference_###) and (5 ###reference_###) used to define does not depend on the target coefficients. It is also straightforward to check that if , then and by choosing , and for any . This shows that (7 ###reference_###) is the tightest agnostic bound on the cost of overfitting:\nwhere on the left-hand-side depends only on the marginal , while depends on both the marginal and the conditional .\nMore generally, it is clear that we have the lower bound\ndue to the non-negativity of in (6 ###reference_###).\nThus, from the above and Theorem 1 ###reference_orem1###, we have .\nTherefore, if as , namely, the optimal-tuned ridge is consistent, then . That is, in this case precisely captures the cost of overfitting.\nIf the optimal-tuned ridge is not consistent, (7 ###reference_###) might be a loose upper bound on . However, under our assumption, even in this case still captures the qualitative noisy overfitting behavior in the following sense:\nIf , we have benign overfitting, i.e. , regardless of the target; If and , then we have catastrophic overfitting, i.e. , regardless of the target; If then overfitting is either benign or tempered.\nFinally, we note that\nthe argument in the proof of Theorem 1 ###reference_orem1### shows something more: for any , it holds that . Therefore, the quantity bounds the cost of overfitting not only for the interpolating solution, but also for any ridge model with a sufficiently small regularization parameter . 
Consequently, if is close to one, then the risk curve will become flat once all of the signal is fitted (for example, see Figure 1 of Zhou et al. (2021 ###reference_b34###)), exhibiting the double descent phenomenon instead of the classical U-shape curve (Belkin et al., 2019 ###reference_b3###). Similar results on the flatness of the generalization curve are proven in Tsigler & Bartlett (2020 ###reference_b27###) and Zhou et al. (2021 ###reference_b34###)."
40
+ },
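Using the omniscient_risk sketch above, the cost of overfitting defined in this section can be estimated numerically by comparing the ridgeless risk to the best risk over a grid of ridge parameters; the grid is a stand-in for the infimum over all nonnegative regularization:

```python
import numpy as np

def cost_of_overfitting(lams, v2, noise2, n):
    """Ratio of interpolating (delta = 0) risk to the best ridge risk; always >= 1."""
    ridgeless = omniscient_risk(lams, v2, noise2, n, delta=0.0)
    best = min(omniscient_risk(lams, v2, noise2, n, delta=d)
               for d in np.logspace(-6, 2, 50))
    return ridgeless / min(best, ridgeless)   # include delta = 0 in the infimum
```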
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Benign Overfitting",
45
+ "text": "In this section, we discuss when can be close to 1 and so overfitting is benign. Note that the target coefficients play no role at all in our analysis. To further upper bound the cost of overfitting, we will introduce the notion of effective rank (Bartlett et al., 2020 ###reference_b2###).\nThe effective ranks of a covariance matrix with eigenvalues in descending order are defined as\nThe two effective ranks are closely related to each other by the relationship and are equal if is the identity matrix (Bartlett et al., 2020 ###reference_b2###). Roughly speaking, the minimal norm interpolant can approximate the target in the span of top eigenfunctions and use the remaining components of to memorize the residual. A large effective rank ensures that the small eigenvalues of are roughly equal to each other and so it is possible to evenly spread out the cost of overfitting into many different directions. More precisely, we show the following finite-sample bound on , which decreases to 1 as increases if and :\nFor any , it holds that\nThe conditions that and are two key conditions for benign overfitting in linear regression (Bartlett et al., 2020 ###reference_b2###). They require an additional assumption that for consistency, which is sufficient for the consistency of the optimally tuned model when the target is well-specified. Our Theorem 2 ###reference_orem2### provides a more refined understanding of benign overfitting: at a finite sample , if we can choose a small such that is large relative to , then the interpolating ridgeless solution is nearly as good as the optimally tuned model, regardless of whether the optimally tuned model can learn the target. Furthermore, we also recover a version of the matching lower bound of Theorem 4 in Bartlett et al. (2020 ###reference_b2###), though our proof technique is completely different and simpler since we have a closed-form expression. Since , it suffices to lower bound .\nFix any . If there exists such that , then let be the first such integer. Otherwise, pick . It holds that\nFor simplicity, we can take in the lower bound above. We see that cannot be close to 1 unless is small relative to . Even if is small, the first term in (9 ###reference_###) requires to be small. Conversely, if both and are small, then we can apply Theorem 2 ###reference_orem2### to show that is close to 1 and we have identify the necessary and sufficient condition for .\nFor any , let be the first integer such that . Then if and only if\nThough Corollary 1 ###reference_ollary1### is stated as an asymptotic result, the spectrum is allowed to change with the sample size and the target function plays no role in condition (10 ###reference_###). Next, we apply our results to some canonical examples where overfitting is benign.\nIn this case, we can estimate\nand so\nThen by choosing , we have and because .\nIn this case, it is routine to check by choosing . Letting and , Theorem 2 ###reference_orem2### shows that .\nFinally, we show our bound (8 ###reference_###) also applies to isotropic features in the proportional regime even though overfitting is not necessarily benign.\nIn this case, it is easy to check that and so and . The first condition in (10 ###reference_###) holds because . However, the second condition in (10 ###reference_###) does not hold because and . Plugging in to Theorem 2 ###reference_orem2###, we obtain\nThe above upper bound is tight when because it is well-known that in the proportional regime (for example, see Hastie et al. 
(2019 ###reference_b9###) and Zhou et al. (2021 ###reference_b34###)), it holds that"
46
+ },
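The two effective ranks defined above (following Bartlett et al., 2020) are simple functions of the tail of the spectrum and can be computed directly; the example spectrum is arbitrary:

```python
import numpy as np

def effective_ranks(lams, k):
    """r_k = (sum_{i>k} lam_i) / lam_{k+1} and R_k = (sum_{i>k} lam_i)^2 / sum_{i>k} lam_i^2,
    for eigenvalues lams sorted in descending order."""
    tail = lams[k:]
    return tail.sum() / tail[0], tail.sum() ** 2 / np.sum(tail ** 2)

lams = 1.0 / np.arange(1, 10_001) ** 2   # illustrative power-law spectrum
print(effective_ranks(lams, k=10))
```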
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Tempered Overfitting",
51
+ "text": "Theorem 2 ###reference_orem2### allows us to understand the cost of overfitting when it is benign. However, it is not informative when no satisfies . In Theorem 4 ###reference_orem4### below, we provide an estimate for the amount of \u201ctempered\u201d overfitting based on the ratio over a finite range of indices.\nFix any and consider given by\nThen it holds that\nTo interpret (11 ###reference_###), we first suppose that the spectrum does not change with and has infinitely many non-zero eigenvalues (which is the case in Example 1 ###reference_mple1###, 4 ###reference_mple4### and 5 ###reference_mple5### below). For any fixed , must increases as increases. If is large, then it is usually the case that or the ratio is bounded. Letting , we can understand (11 ###reference_###) as .\nIn particular, if , then is bounded and overfitting cannot be catastrophic. Conversely, we show that overfitting is catastrophic when in section 3.3 ###reference_### below. Therefore, the condition is both necessary and sufficient for catastrophic overfitting: . Furthermore, we argue that (11 ###reference_###) is also sufficient for benign overfitting in some settings: if , then we have for any , and thus .\nIn this case, we can estimate\nand so\nTherefore, we have and so , which agrees with Mallinar et al. (2022 ###reference_b14###).\nWe remark that the Laplace kernel and ReLU NTK restricted to the hypersphere have power law decay (Geifman et al., 2020 ###reference_b7###)."
52
+ },
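Combining the sketches above gives a quick numerical check of the tempered regime discussed in this section: for a power-law spectrum, the estimated cost of overfitting levels off at a constant greater than one as the sample size grows. The spectrum, target, and noise level below are arbitrary illustrative choices:

```python
import numpy as np

lams = 1.0 / np.arange(1, 200_001) ** 2          # power-law spectrum, lambda_i = i^{-2}
v2 = np.zeros_like(lams)
v2[0] = 1.0                                      # a single-mode target, for illustration
for n in [100, 400, 1600]:
    print(n, cost_of_overfitting(lams, v2, noise2=1.0, n=n))
```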
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Catastrophic Overfitting",
57
+ "text": "We first state a generic non-asymptotic lower bound on and then discuss the implication for catastrophic overfitting as increases.\nFor any , it holds that\nFor any , if and we consider , then it is straightforward from (12 ###reference_###) that . Since the choice of is arbitrary, we have and so .\nIn this case, we can estimate\nand and . Theorem 5 ###reference_orem5### implies that overfitting is catastrophic, as expected from Mallinar et al. (2022 ###reference_b14###).\nSince Theorem 3 ###reference_orem3###, 4 ###reference_orem4### and 5 ###reference_orem5### are agnostic and non-asymptotic, we can use them to obtain an elegant characterization of whether overfitting is benign, tempered, or catastrophic, resolving an open problem333See footnote 11 in their paper. The settings they consider (e.g., clause (a) of Theorem 3.1 with ) always satisfy and so . raised by Mallinar et al. (2022 ###reference_b14###):\nSuppose that the spectrum is fixed as increases and contains infinitely many non-zero eigenvalues.\nIf \u2009 , then overfitting is benign: .\nIf \u2009 , then overfitting is tempered: .\nIf \u2009 , then overfitting is catastrophic: ."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Application: Inner-Product Kernels in the Polynomial Regime",
63
+ "text": "In this section, we consider KRR with inner-product kernels in the polynomial regime (Ghorbani et al., 2021 ###reference_b8###; Mei et al., 2022 ###reference_b15###; Misiakiewicz, 2022 ###reference_b19###). Let\u2019s take the distribution of to be uniformly distributed over the hypersphere in or the boolean hypercube. Denote to be the subspace of all polynomials of degree and to be the dimension of the subspace of degree- polynomials orthogonal to . Moreover, denote to be the projection onto and to be the projection onto its complement. Let be the polynomial basis with respect to (e.g. spherical harmonics or parity functions)."
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion",
69
+ "text": "Understanding the effect of overfitting is a fundamental problem in statistical learning theory. Contrary to the traditional intuition, prior works have shown that predictors that interpolate noisy training labels can achieve nearly optimal test error when the data distribution is well-specified. In this paper, we extend these results to the agnostic case and we use them to develop a more refined understanding of benign, tempered, and catastrophic overfitting. To the best of our knowledge, our work is the first to connect the complex closed-form risk predictions and the effective rank introduced by Bartlett et al. (2020 ###reference_b2###) to establish a simple and interpretable learning guarantee for KRR. As we can see in Corollary 1 ###reference_ollary1### and Theorem 6 ###reference_orem6###, the effective ranks play a crucial role in the analysis and tightly characterize the cost of overfitting in many settings.\nAn interesting future direction may be asking whether our results extend to other settings, such as kernel SVM, since our theory is agnostic to the target. We hope that the theory of KRR and ridge regression with Gaussian features can lead us toward a better understanding of generalization in neural networks."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix 1",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix A Supplemental Proofs",
77
+ "text": "In the appendix, we give proofs of all results from the main text. Our proofs are very self-contained and only use elementary results such as the Cauchy-Schwarz inequality.\nThe main challenge for analyzing from equation (5 ###reference_###) is that the effective regularization is defined by the non-linear equation (4 ###reference_###), which does not have a simple closed-form solution. However, the following lemma can provide an estimate for in terms of the effective rank.\nFor any , it holds that\nMoreover, for any , it holds that\nFrom the Cauchy-Schwarz inequality, we show that\nRearranging in terms of proves the first inequality. Moreover, it holds that\nwhich can be rearranged to the second lower bound. Finally, observe that\nand rearranging concludes the proof of the last inequality.\n\u220e\nIn particular, when there exists such that and , then . Using lemma 1 ###reference_ma1###, we can show Theorem 2 ###reference_orem2###.\nSee 2 ###reference_orem2###\nFor any , by the definition (4 ###reference_###), we have\nRearranging, we get\nAt the same time, we can use the definition (4 ###reference_###) again and (15 ###reference_###) to show that\nPlugging in and Lemma 1 ###reference_ma1###, we have\nprovided that .\n\u220e\nUsing the second part of equation (14 ###reference_###), we can show a similar bound that depends , which is smaller than , but has a better dependence on .\nFor any , it holds that\nFor , it holds that and so by Lemma 1 ###reference_ma1###, we have\nFinally, by equation (4 ###reference_###), we have\nTaking the inverse on both hand side concludes the proof.\n\u220e\nFinally, we prove Theorem 4 ###reference_orem4###. The proof goes through a different argument to avoid the dependence on because we might need to choose when overfitting is tempered.\nSee 4 ###reference_orem4###\nIf , then it is clear that satisfies . It is also clear that choosing satisfies because . Then both and are well-defined. To show that both are finite, we observe that by definition and because it is defined as the minimum .\nNext, let be the smallest integer such that . We will show that is also well defined and . Note that for any , we can apply Lemma 1 ###reference_ma1### to show\nTherefore, by our definition of and , it holds that . Since the eigenvalues are sorted, it must hold that . On the other hand, for any , we also apply Lemma 1 ###reference_ma1### to show\nBy our definition of and , it holds that and so . Finally, since we have for all and , we can check that\nRecall that and so by definition of , we have . Therefore, it holds that\nwhere in the last step we use\nThe rest follows from the fact that .\n\u220e\nWe will now prove two lower bound for .\nSee 3 ###reference_orem3###\nFirst, suppose that there exists such that and let be the first such integer. Then we can rearrange into\nand since for , we apply the above and equation (14 ###reference_###) of Lemma 1 ###reference_ma1### to show that\nMoreover, by the definition of , we must have which can be rearranged to\nby equation (14 ###reference_###) of Lemma 1 ###reference_ma1### again. Then for any , we have and so . Therefore, we have\nFinally, if there is no such , then the first inequality is trivial. Moreover, we have which rearranges to . Therefore, by all , we have and the rest of the proof is the same.\n\u220e\nSee 5 ###reference_orem5###\nBy the Cauchy-Schwarz inequality, we have\nBy Lemma 1 ###reference_ma1###, we have . 
Combine with above, we obtain\nRearranging gives us\nwhich implies that\n\u220e\nSee 6 ###reference_orem6###\nWe will show each clause separately.\nFor any , we can pick in Theorem 2 ###reference_orem2### and obtain the following:\nSince we have\nwe can send and . Therefore, it holds that\nSince the choice of can be made arbitrarily small, we have the desired conclusion by taking .\nIf converges to a non-zero constant, then the sequence must be bounded. In particular, there exists such that for all . If we let in Theorem 3 ###reference_orem3###, then for all , it holds that\nThen we need to choose in Theorem 3 ###reference_orem3### and\nand so\nSimilarly, there also exists such that for all . Then by choosing and Theorem 8 ###reference_orem8###, we have\nWe will apply Theorem 5 ###reference_orem5###. For any , choose , we get\nTherefore, if , then\nHowever, since the choice of is arbitrary, then we can send . The desired conclusion follows by .\n\u220e\nAs mentioned in the main text, it is also possible to use Theorem 4 ###reference_orem4### to show the upper bounds in the proof of Theorem 6 ###reference_orem6### above. For simplicity, we use a different argument here by applying Theorem 2 ###reference_orem2### and 8 ###reference_orem8###."
78
+ },
79
+ {
80
+ "section_id": "Appendix 2",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix B Uniform Convergence",
83
+ "text": "In this appendix, we show that the predictions from Simon et al. (2021 ###reference_b25###) can establish a type of uniform convergence guarantee known as \u201coptimistic rate\u201d (Panchenko, 2002 ###reference_b22###; Srebro et al., 2010 ###reference_b26###) along the ridge path, which maybe of independent interest. We briefly mention the uniform convergence result in section 4 ###reference_### of the main text.\nIn particular, the tight result from Zhou et al. (2021 ###reference_b34###) avoids any hidden multiplicative constant and logarithmic factor present in previous works and can be used to establish benign overfitting. However, their proof techniques depend on the Gaussian Minimax Theorem (GMT) and are limited to the setting of Gaussian features. We recover their result in Theorem 7 ###reference_orem7### here with a (non-rigorous) calculation that extends beyond the Gaussian case.\nWe first provide closed-form expression for the training error and Hilbert norm of . By the predictions from Simon et al. (2021 ###reference_b25###), we know that\nand we can use section 4.1 of Simon et al. (2021 ###reference_b25###) to compute the expected Hilbert norm:\nTherefore, we will just use the expression:\nSee 7 ###reference_orem7###\nApplying equation (6 ###reference_###) and (4 ###reference_###), we can write the difference\nBy the Cauchy-Schwarz inequality, for any , we have\nBy the expression (17 ###reference_###), we have\nthen using , we show that\nRearranging concludes the proof.\n\u220e\nFor any and such that , it holds that\nWhen , it holds that\nby applying (5 ###reference_###) and (4 ###reference_###). Therefore, the second term in (17 ###reference_###) can be simplified as\nby the definition in (6 ###reference_###). Plugging in, we arrive at\nTo handle situations where is not in the RKHS, observe that for any , we have\nand so\nThe proof concludes by plugging in Lemma 1 ###reference_ma1###.\n\u220e\nFinally, we can plug in the norm bound of Theorem 9 ###reference_orem9### into Theorem 7 ###reference_orem7### to establish benign overfitting, as in Koehler et al. (2021 ###reference_b12###); Zhou et al. (2022 ###reference_b35###).\nFor any and such that and . Let , then it holds that"
84
+ }
85
+ ],
86
+ "tables": {},
87
+ "image_paths": {},
88
+ "validation": true,
89
+ "references": [
90
+ {
91
+ "1": {
92
+ "title": "High-dimensional analysis of double descent for linear regression\nwith random projections.",
93
+ "author": "Francis Bach.",
94
+ "venue": "arXiv preprint arXiv:2303.01372, 2023.",
95
+ "url": null
96
+ }
97
+ },
98
+ {
99
+ "2": {
100
+ "title": "Benign overfitting in linear regression.",
101
+ "author": "Peter L. Bartlett, Philip M. Long, G\u00e1bor Lugosi, and Alexander Tsigler.",
102
+ "venue": "Proceedings of the National Academy of Sciences, 117(48):30063\u201330070, 2020.",
103
+ "url": null
104
+ }
105
+ },
106
+ {
107
+ "3": {
108
+ "title": "Reconciling modern machine learning practice and the bias-variance\ntrade-off.",
109
+ "author": "Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal.",
110
+ "venue": "Proceedings of the National Academy of Sciences, 116(32):15849\u201315854, 2019.",
111
+ "url": null
112
+ }
113
+ },
114
+ {
115
+ "4": {
116
+ "title": "Spectral bias and task-model alignment explain generalization in\nkernel regression and infinitely wide neural networks.",
117
+ "author": "Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan.",
118
+ "venue": "Nature Communications, 12(1):1\u201312, 2021.",
119
+ "url": null
120
+ }
121
+ },
122
+ {
123
+ "5": {
124
+ "title": "Dimension free ridge regression.",
125
+ "author": "Chen Cheng and Andrea Montanari.",
126
+ "venue": "arXiv preprint arXiv:2210.08571, 2022.",
127
+ "url": null
128
+ }
129
+ },
130
+ {
131
+ "6": {
132
+ "title": "Fast rates for noisy interpolation require rethinking the effects of\ninductive bias.",
133
+ "author": "Konstantin Donhauser, Nicolo Ruggeri, Stefan Stojanovic, and Fanny Yang.",
134
+ "venue": "2022.",
135
+ "url": null
136
+ }
137
+ },
138
+ {
139
+ "7": {
140
+ "title": "On the similarity between the laplace and neural tangent kernels.",
141
+ "author": "Amnon Geifman, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, and Basri\nRonen.",
142
+ "venue": "Advances in Neural Information Processing Systems,\n33:1451\u20131461, 2020.",
143
+ "url": null
144
+ }
145
+ },
146
+ {
147
+ "8": {
148
+ "title": "Linearized two-layers neural networks in high dimension.",
149
+ "author": "Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari.",
150
+ "venue": "The Annals of Statistics, 49(2):1029\u20131054, 2021.",
151
+ "url": null
152
+ }
153
+ },
154
+ {
155
+ "9": {
156
+ "title": "Surprises in high-dimensional ridgeless least squares interpolation.",
157
+ "author": "Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani.",
158
+ "venue": "Annals of Statistics, 2019.",
159
+ "url": null
160
+ }
161
+ },
162
+ {
163
+ "10": {
164
+ "title": "Decision theoretic generalizations of the pac model for neural net\nand other learning applications.",
165
+ "author": "David Haussler.",
166
+ "venue": "Information and computation, 100(1):78\u2013150, 1992.",
167
+ "url": null
168
+ }
169
+ },
170
+ {
171
+ "11": {
172
+ "title": "Kernel alignment risk estimator: Risk prediction from training data.",
173
+ "author": "Arthur Jacot, Berfin Simsek, Francesco Spadaro, Cl\u00e9ment Hongler, and Franck\nGabriel.",
174
+ "venue": "In Advances in Neural Information Processing Systems, 2020.",
175
+ "url": null
176
+ }
177
+ },
178
+ {
179
+ "12": {
180
+ "title": "Uniform convergence of interpolators: Gaussian width, norm bounds\nand benign overfitting.",
181
+ "author": "Frederic Koehler, Lijia Zhou, Danica J. Sutherland, and Nathan Srebro.",
182
+ "venue": "In Advances in Neural Information Processing Systems, 2021.",
183
+ "url": null
184
+ }
185
+ },
186
+ {
187
+ "13": {
188
+ "title": "Learning curves of generic features maps for realistic datasets with\na teacher-student model.",
189
+ "author": "Bruno Loureiro, Cedric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala,\nMarc Mezard, and Lenka Zdeborov\u00e1.",
190
+ "venue": "Advances in Neural Information Processing Systems,\n34:18137\u201318151, 2021.",
191
+ "url": null
192
+ }
193
+ },
194
+ {
195
+ "14": {
196
+ "title": "Benign, tempered, or catastrophic: A taxonomy of overfitting.",
197
+ "author": "Neil Rohit Mallinar, James B Simon, Amirhesam Abedsoltan, Parthe Pandit,\nMikhail Belkin, and Preetum Nakkiran.",
198
+ "venue": "In Advances in Neural Information Processing Systems, 2022.",
199
+ "url": null
200
+ }
201
+ },
202
+ {
203
+ "15": {
204
+ "title": "Generalization error of random feature and kernel methods:\nHypercontractivity and kernel matrix concentration.",
205
+ "author": "Song Mei, Theodor Misiakiewicz, and Andrea Montanari.",
206
+ "venue": "Applied and Computational Harmonic Analysis, 59:3\u201384, 2022.",
207
+ "url": null
208
+ }
209
+ },
210
+ {
211
+ "16": {
212
+ "title": "A theory of high dimensional regression with arbitrary correlations\nbetween input features and target functions: Sample complexity, multiple\ndescent curves and a hierarchy of phase transitions.",
213
+ "author": "Gabriel C. Mel and Surya Ganguli.",
214
+ "venue": "In International Conference on Machine Learning, volume 139,\npp. 7578\u20137587, 2021.",
215
+ "url": null
216
+ }
217
+ },
218
+ {
219
+ "17": {
220
+ "title": "Functions of positive and negative type, and their connection the\ntheory of integral equations.",
221
+ "author": "James Mercer.",
222
+ "venue": "Philosophical Transactions of the Royal Society of London,\n209:4\u2013415, 1909.",
223
+ "url": null
224
+ }
225
+ },
226
+ {
227
+ "18": {
228
+ "title": "Mercer\u2019s theorem, feature maps, and smoothing.",
229
+ "author": "Ha Quang Minh, Partha Niyogi, and Yuan Yao.",
230
+ "venue": "In International Conference on Computational Learning Theory,\n2006.",
231
+ "url": null
232
+ }
233
+ },
234
+ {
235
+ "19": {
236
+ "title": "Spectrum of inner-product kernel matrices in the polynomial regime\nand multiple descent phenomenon in kernel ridge regression.",
237
+ "author": "Theodor Misiakiewicz.",
238
+ "venue": "arXiv:2204.10425, 2022.",
239
+ "url": null
240
+ }
241
+ },
242
+ {
243
+ "20": {
244
+ "title": "Harmless interpolation of noisy data in regression.",
245
+ "author": "Vidya Muthukumar, Kailas Vodrahalli, Vignesh Subramanian, and Anant Sahai.",
246
+ "venue": "IEEE Journal on Selected Areas in Information Theory, 2020.",
247
+ "url": null
248
+ }
249
+ },
250
+ {
251
+ "21": {
252
+ "title": "In search of the real inductive bias: On the role of implicit\nregularization in deep learning.",
253
+ "author": "Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro.",
254
+ "venue": "In International Conference on Learning Representations \u2013\nWorkshop, 2015.",
255
+ "url": null
256
+ }
257
+ },
258
+ {
259
+ "22": {
260
+ "title": "Some extensions of an inequality of vapnik and chervonenkis.",
261
+ "author": "Dmitry Panchenko.",
262
+ "venue": "Electronic Communications in Probability, 7:55\u201365,\n2002.",
263
+ "url": null
264
+ }
265
+ },
266
+ {
267
+ "23": {
268
+ "title": "Asymptotics of ridge(less) regression under general source condition.",
269
+ "author": "Dominic Richards, Jaouad Mourtada, and Lorenzo Rosasco.",
270
+ "venue": "In International Conference on Artificial Intelligence and\nStatistics, volume 130, pp. 3889\u20133897, 2021.",
271
+ "url": null
272
+ }
273
+ },
274
+ {
275
+ "24": {
276
+ "title": "Understanding machine learning: From theory to algorithms.",
277
+ "author": "Shai Shalev-Shwartz and Shai Ben-David.",
278
+ "venue": "Cambridge University Press, 2014.",
279
+ "url": null
280
+ }
281
+ },
282
+ {
283
+ "25": {
284
+ "title": "The eigenlearning framework: A conservation law perspective on kernel\nregression and wide neural networks.",
285
+ "author": "James B Simon, Madeline Dickens, Dhruva Karkada, and Michael R. DeWeese.",
286
+ "venue": "arXiv:2110.03922, 2021.",
287
+ "url": null
288
+ }
289
+ },
290
+ {
291
+ "26": {
292
+ "title": "Optimistic rates for learning with a smooth loss, 2010.",
293
+ "author": "Nathan Srebro, Karthik Sridharan, and Ambuj Tewari.",
294
+ "venue": null,
295
+ "url": null
296
+ }
297
+ },
298
+ {
299
+ "27": {
300
+ "title": "Benign overfitting in ridge regression.",
301
+ "author": "Alexander Tsigler and Peter L. Bartlett.",
302
+ "venue": "2020.",
303
+ "url": null
304
+ }
305
+ },
306
+ {
307
+ "28": {
308
+ "title": "On the uniform convergence of relative frequencies of events to their\nprobabilities.",
309
+ "author": "Vladimir Vapnik and Alexey Chervonenkis.",
310
+ "venue": "Theory of Probability and its applications, XVI(2):264\u2013280, 1971.",
311
+ "url": null
312
+ }
313
+ },
314
+ {
315
+ "29": {
316
+ "title": "Tight bounds for minimum l1-norm interpolation of noisy data.",
317
+ "author": "Guillaume Wang, Konstantin Donhauser, and Fanny Yang.",
318
+ "venue": "2021.",
319
+ "url": null
320
+ }
321
+ },
322
+ {
323
+ "30": {
324
+ "title": "More than a toy: Random matrix models predict how real-world neural\nrepresentations generalize.",
325
+ "author": "Alexander Wei, Wei Hu, and Jacob Steinhardt.",
326
+ "venue": "In International Conference on Machine Learning, Proceedings\nof Machine Learning Research, 2022.",
327
+ "url": null
328
+ }
329
+ },
330
+ {
331
+ "31": {
332
+ "title": "On the optimal weighted regularization in overparameterized\nlinear regression.",
333
+ "author": "Denny Wu and Ji Xu.",
334
+ "venue": "In Advances in Neural Information Processing Systems, 2020.",
335
+ "url": null
336
+ }
337
+ },
338
+ {
339
+ "32": {
340
+ "title": "Understanding deep learning requires rethinking generalization.",
341
+ "author": "Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals.",
342
+ "venue": "In International Conference on Learning Representations, 2017.",
343
+ "url": null
344
+ }
345
+ },
346
+ {
347
+ "33": {
348
+ "title": "On uniform convergence and low-norm interpolation learning.",
349
+ "author": "Lijia Zhou, Danica J. Sutherland, and Nathan Srebro.",
350
+ "venue": "In Advances in Neural Information Processing Systems, 2020.",
351
+ "url": null
352
+ }
353
+ },
354
+ {
355
+ "34": {
356
+ "title": "Optimistic rates: A unifying theory for interpolation learning and\nregularization in linear regression.",
357
+ "author": "Lijia Zhou, Frederic Koehler, Danica J. Sutherland, and Nathan Srebro.",
358
+ "venue": "In ACM / IMS Journal of Data Science, 2021.",
359
+ "url": null
360
+ }
361
+ },
362
+ {
363
+ "35": {
364
+ "title": "A non-asymptotic moreau envelope theory for high-dimensional\ngeneralized linear models.",
365
+ "author": "Lijia Zhou, Frederic Koehler, Pragya Sur, Danica J. Sutherland, and Nathan\nSrebro.",
366
+ "venue": "In Advances in Neural Information Processing Systems, 2022.",
367
+ "url": null
368
+ }
369
+ }
370
+ ],
371
+ "url": "http://arxiv.org/html/2306.13185v2"
372
+ }
20240322/2306.16973v2.json ADDED
@@ -0,0 +1,344 @@
1
+ {
2
+ "title": "Robust Direct Data-Driven Control for Probabilistic Systems",
3
+ "abstract": "We propose a data-driven control method for systems with aleatoric uncertainty, for example, robot fleets with variations between agents. Our method leverages shared trajectory data to increase the robustness of the designed controller and thus facilitate transfer to new variations without the need for prior parameter and uncertainty estimations. In contrast to existing work on experience transfer for performance, our approach focuses on robustness and uses data collected from multiple realizations to guarantee generalization to unseen ones. Our method is based on scenario optimization combined with recent formulations for direct data-driven control. We derive lower bounds on the amount of data required to achieve quadratic stability for probabilistic systems with aleatoric uncertainty and demonstrate the benefits of our data-driven method through a numerical example. We find that the learned controllers generalize well to high variations in the dynamics even when based on only a few short open-loop trajectories. Robust experience transfer enables the design of safe and robust controllers that work \u201cout of the box\u201d without any additional learning during deployment.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Data-driven control uses data collected from the system for the control design. The predominant assumption for most methods is that the data is collected on a single system and this system is the one being controlled later on. While under laboratory conditions this is often true, in practice the same controller is may be deployed on many realizations of similar systems. For example, in robotic fleets for forest fire fighting Haksar and Schwager (2018 ###reference_b1###) or agriculture Emmi et al. (2014 ###reference_b2###), and in power electronics for wind farms Markovsky et al. (2023 ###reference_b3###).\nHowever, the different instantiations of such systems are rarely perfectly homogeneous and each system is subject to variations, for example due to production and assembly procedures or different setups and payloads.\nIn this case, designing specific controllers for all possible configurations can quickly become impractical.\nInstead, it is desirable to design a controller that is robust to this variability and can be used out of the box.\nThis is a classical use case of robust control methods.\nHowever, these methods assume that a nominal system model and an uncertainty description are known during control design.\nIn contrast, data-based control (partly) replaces model knowledge with data and can be designed to be robust to uncertainty after learning on finite and noisy data Berkenkamp and Schoellig (2015 ###reference_b4###); Umlauft et al. (2018 ###reference_b5###); von Rohr et al. (2021 ###reference_b6###); Jiao et al. (2022 ###reference_b7###).\nYet, these method are developed for learning on a single system and are not necessarily robust to variations between multiple systems and have no guarantees for new variations that were not encountered during learning.\nWe propose a data-based control synthesis that shares data between multiple instantiations and yields controllers that provably generalize to unseen variations.\nIn summary, our proposed method yields robustness guarantees for data-driven control of probabilistic systems without the need to learn on all instantiations.\n###figure_1### ###figure_2### ###figure_3### ###figure_4### We envision our method is especially useful for learning-based control methods that benefit from a preliminary stabilizing control for learning and data collection. The proposed method guarantees generalization and therefore data can be collected once on a subset of systems and be re-used on systems that were not available during the learning phase. Learning on these new systems then can start with an already stabilizing controller.\nIn this letter, we consider linear dynamics and propose a direct data-driven control111A \u2018direct\u2019 data-driven method forgoes the modeling step, where the dynamics of the open-loop system are identified (cf. De Persis and Tesi (2020 ###reference_b8###)). Instead, the collected data (state-input trajectories) is directly used in the control design. design for probabilistic systems.\nWe aim to obtain a single state-feedback controller for all possible variations, including ones that are unseen during the data-collection phase (cf. Fig. 1 ###reference_###).\nFormally, we consider variations as a realization of a random variable influencing the system\u2019s dynamics, and we consider robustness as a probabilistic property of the closed-loop system; hence, we aim to design a controller that is stable with high probability with respect to (w.r.t.) 
the randomness in the variations.\nFor the probabilistic control design, we employ the scenario approach Campi et al. (2009 ###reference_b9###).\nOur proposed combination of the scenario approach with direct data-driven control treats an observed trajectory as an uncertain scenario.\nWe make the following contributions:\nWe introduce the concept of probabilistic data informativity for stabilization as an extension to the informativity framework of van Waarde et al. (2020 ###reference_b10###).\nWe propose a method to compute a stabilizing controller based on informative data and derive lower bounds on the required number of trajectory samples based on the scenario approach.\nWe further demonstrate the effectiveness and benefits of the proposed synthesis on a numerical benchmark example."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Related Work",
15
+ "text": "This section discusses the related literature on cross-system experience transfer, learning-based control designs that consider probabilistic robustness, and recent advances in direct data-driven control.\nExperience Transfer for Learning-Based Control.\nIn this letter, we present an approach for probabilistic systems where we use trajectories of sampled systems as data. This can be seen as a type of data sharing or experience transfer between the different realizations.\nA related body of work on experience transfer uses data collected from variations of source systems to derive a controller on a target system to improve control performance on this target.\nIn the framework of iterative learning control (ILC) it has been shown that experience transfer can lead to performance increases, but only in limited cases Schoellig et al. (2010 ###reference_b11###, 2012 ###reference_b12###).\nSorocky et al. (2020 ###reference_b13###) investigate the conditions for an increased performance after experience transfer for the case of linear single-input single-output systems using deep neural networks as transfer functions.\nIn contrast to these works on transfer learning to increase performance, we use the diversity of the data for the complementary goal of robustness.\nOur approach is also related to domain and dynamics randomization developed in the machine learning community to enable deep reinforcement learning (RL) to generalize to real environments from simulated ones (see Muratore et al. (2022 ###reference_b14###) for a recent survey).\nBasically, domain randomization generates artificial environmental or dynamics variations during training of an RL policy and to robustify the trained policy.\nInstead of artificial variations of a simulator, we use trajectories collected from different realizations, and we provide controllers that are guaranteed to stabilize variations sampled from the underlying distribution with high probability.\nProbabilistic Robust Control.\nGenerally, considering uncertainty about the underlying dynamics is not new and the topic of robust optimal control Zhou et al. (1996 ###reference_b15###).\nRobust control methods require an accurate description of the nominal system and its uncertainties to guarantee robustness and performance.\nLearning-based control often combines robust control methods with machine learning to get models and their uncertainties from data.\nThe uncertainty descriptions result from statistical learning theory Koller et al. (2018 ###reference_b16###); Umlauft et al. (2018 ###reference_b5###); Helwa et al. (2019 ###reference_b17###); Fiedler et al. (2021 ###reference_b18###) or probabilistic modeling Berkenkamp and Schoellig (2015 ###reference_b4###); Umenberger and Sch\u00f6n (2018 ###reference_b19###); von Rohr et al. (2021 ###reference_b6###); Jiao et al. (2022 ###reference_b7###).\nThese methods account for the uncertainty inherent in learning from finite and noisy data, the epistemic uncertainty.\nThe goal in our problem formulation is to learn from data from multiple realizations of the probabilistic uncertainty and account for the additional uncertainty from the variations, the aleatoric uncertainty.\nRobust control methods are often conservative.\nThe scenario approach provides a solution by relaxing a given constraint set to a randomly sampled subset and providing probabilistic guarantees Campi et al. (2009 ###reference_b9###). However, designing the distribution over the sampled constraints is still difficult a priori. 
In the case of a single deterministic system, one can use probabilistic machine learning to infer a distribution over systems to generate scenarios Umenberger and Sch\u00f6n (2018 ###reference_b19###); von Rohr et al. (2021 ###reference_b6###). Direct data-based control treats a trajectory as a single data point without identifying a model, resulting in our scenario formulation that directly incorporates data from the systems. Our proposed approach considers the uncertainty present in a probabilistic system without prior knowledge on distribution or the uncertainty set.\nDirect Data-Driven Control. Direct data-driven control is an emerging approach to learning-based control based on Willems\u2019 Fundamental Lemma Willems et al. (2005 ###reference_b20###), which characterizes the behavior of a linear system as linear combinations of columns from a data matrix based on trajectory data.\nIn prior work on direct data-driven control, robustness is defined w.r.t. uncertainties in the data due to unknown disturbances.\nOne can achieve robustness through regularization De Persis and Tesi (2020 ###reference_b8###, 2021 ###reference_b21###); D\u00f6rfler et al. (2022 ###reference_b22###, 2023a ###reference_b23###, 2023b ###reference_b24###) or by upper-bounding the process noise Berberich et al. (2020 ###reference_b25###, 2022 ###reference_b26###); Bisoffi et al. (2021 ###reference_b27###).\nHerein, we focus on the latter, specifically on a formulation proposed by van Waarde et al. (2022 ###reference_b28###).\nHowever, our proposed approach is compatible with both robustification approaches and most direct data-driven control formulations we are aware of.\nOur contribution adds to this body of work by considering data collected from different realizations of a probabilistic system and a controller that generalizes well, meaning there is no need to collect data from every possible variation.\nWhile existing formulations are robust w.r.t. uncertain data, they do not provide any explicit robustness w.r.t. uncertain systems.\nThe data uncertainty formulations typically provide less robust controllers w.r.t. variations as more data is collected. This is desired when learning for a single system because with more data the uncertainty shrinks and less robustness is needed.\nIn contrast, our formulation becomes more robust with data from different systems.\nAn approach close to our contribution considers stochastic systems in a distributionally robust framework Coulson et al. (2022 ###reference_b29###).\nThe authors present a chance constraint model predictive control formulation, where multiple trajectories are sampled to estimate a distribution over disturbances.\nTheir resulting controller is then distributionally robust w.r.t. the disturbances.\nIn contrast, we consider the systems themselves as stochastic realizations, and instead of an MPC design, we propose a probabilistically-robust state-feedback controller."
16
+ },
17
+ {
18
+ "section_id": "2",
19
+ "parent_section_id": null,
20
+ "section_name": "Problem Formulation",
21
+ "text": "We consider a probabilistic linear, time-invariant and discrete-time system\nwhere is the state, is the input and is the process noise of the system.\nIn our problem formulation the system matrices and are random variables with an unknown distribution.\nWe make the standard assumptions for direct data-driven control: noise-free measurements of the state , and i.i.d. process noise Berberich et al. (2020 ###reference_b25###); De Persis and Tesi (2020 ###reference_b8###); van Waarde et al. (2020 ###reference_b10###).\nA sample from the probabilistic system is defined by the parameter tuple .\nA probabilistic system is a random variable with domain on a probability space .\nAll variations are i.i.d. samples from .\nThis definition is a formalization of system variations as realizations of a random variable. It allows us to define the control synthesis problem in a probabilistic framework. Note here, that the random variable is not a function of the time step , i.e., a realization of the probabilistic system is fixed in the closed-loop.\nWe aim to solve such problems without any knowledge about the parameters of the variations , the random variable and its distribution, nor its domain .\nInstead, we have incomplete access to the behavior of the probabilistic system through data in the form of a finite number of state-input trajectories.\nFor the control synthesis problem to be feasible, we need to assume the distribution of the random variable is not \u2018too wide\u2019 so that there exists a single state-feedback controller that can stabilize the probabilistic system with high probability.\nA state feedback controller is -probabilistic robust w.r.t. if there exists a such that\nWe assume the probabilistic system is -probabilistically stabilizable, i.e., there exists a state feedback controller that is -probabilistic robust w.r.t. .\nThe data-driven control design problem is now as follows: After seeing trajectories from different variations of length , denoted as\nfrom systems that are sampled i.i.d. from , we want to find an -probabilistic robust controller."
22
+ },
23
+ {
24
+ "section_id": "3",
25
+ "parent_section_id": null,
26
+ "section_name": "Preliminaries",
27
+ "text": "We define the following matrices for the trajectories"
28
+ },
29
+ {
30
+ "section_id": "3.1",
31
+ "parent_section_id": "3",
32
+ "section_name": "Data Informativity in Direct Data-Driven Control",
33
+ "text": "A fundamental question when designing controllers from data is: What data is needed in order to guarantee a desired outcome?\nTo answer this question, the informativity framework van Waarde et al. (2020 ###reference_b10###) formalizes assumptions on the data and model class to analyze systems and design controllers from data.\nGiven a model class , e.g., all linear systems of a certain state-input dimension\nand a trajectory , we define a set as the set of all systems that are consistent with the data, i.e., all systems that could have produced the observed trajectory.\nSince we are interested in stability, we define the set as the set of all systems stabilized by the static feedback gain .\nA data set is informative about stabilization if there exists a controller such that : the set of systems stabilized by contain the whole uncertainty set .\nClearly, if there are no assumptions on the disturbances , then data cannot be informative and .\nTherefore, the need arises to formulate an assumption, which often is taken as a bound on the disturbances.\nWe follow van Waarde et al. (2022 ###reference_b28###) and formulate such bound in the form of a set membership for the disturbance trajectory.\nThe matrices are an element of , where\nfor some known , , and .\nIf, for example, each noise realization is bounded by some known constant , then a valid noise model is (13 ###reference_###) with , , and van Waarde et al. (2022 ###reference_b28###).\nEquipped with this assumption the uncertainty set containing all systems from the model class that are consistent with the data is (cf. (van Waarde et al., 2022 ###reference_b28###, Lemma 4))\nBy Assumption 2 ###reference_umption2###, we have for the true noise realization that . Therefore, the system is in the uncertainty set, i.e., .\nThis assumption is sufficient to design robust state feedback controllers with noisy data as we will show in the next section."
34
+ },
35
+ {
36
+ "section_id": "3.2",
37
+ "parent_section_id": "3",
38
+ "section_name": "Robust State Feedback",
39
+ "text": "After the definition of the uncertainty set , we present here the Linear Matrix Inequality (LMI)-based controller synthesis from van Waarde et al. (2022 ###reference_b28###).\nFirst, we state a condition on the data, and second, we use this data to parameterize a state-feedback controller that stabilizes all systems in the uncertainty set for deterministic system.\nThe generalized Slater condition is fulfilled if there exists some matrix such that\nThe condition (15 ###reference_###) is used a prerequisite that the recorded trajectory data can be used in to check the informativity condition and can be used in the LMI for controller synthesis. It can easily be checked and in our empirical evaluation we found that under some mild excitation it is almost always fulfilled. In practice, if a trajectory does not fulfill the condition, it can be discarded and re-recorded.\nAssume the generalized Slater condition (15 ###reference_###) holds. Then the data is informative for quadratic stabilization if and only if there exists with , and scalars , satisfying\nIf and satisfy (17 ###reference_###), then is a stabilizing feedback controller for all .\nLemma 1 ###reference_ma1### shows that a solution to the LMI (17 ###reference_###) yields a controller that is guaranteed to stabilize the closed-loop system from which the trajectory is collected.\nTherefore, the controller synthesis problem can be solved by finding a solution to the above LMI."
40
+ },
41
+ {
42
+ "section_id": "4",
43
+ "parent_section_id": null,
44
+ "section_name": "Stabilization for Finite Sets of Systems",
45
+ "text": "We now extend Lemma 1 ###reference_ma1### to the case where the controller stabilizes not only one but a finite set of systems .\nFor that purpose, we will use a single trajectory collected from each system.\nAgain, we do not explicitly identify the systems, but we derive an LMI formulation that depends solely on the collected trajectories.\nWe start by first defining informativity for quadratic stabilization of a finite set of systems.\nAssume all trajectories in the set satisfy Assumption 2 ###reference_umption2###.\nWe say a set of trajectories are informative for quadratic stabilization of the set of systems if there exists a state feedback gain and a matrix such that\nfor all .\nNext we show how we can use the LMI (17 ###reference_###) to find such a controller.\nAssume that the generalized Slater condition (15 ###reference_###) is fulfilled for all .\nIf there exists with , and scalars and such that (17 ###reference_###) holds for all , then the data is informative for quadratic stability of all systems .\nFurthermore, is a stabilizing controller for all .\nIf a solution to the LMI in (17 ###reference_###) is satisfied for a single trajectory, the system that generated that data is stable with feedback controller (Lemma 1 ###reference_ma1###).\nConsider the set of all and that satisfy (17 ###reference_###) for a single . Then the solution to the LMI described by (17 ###reference_###) for all is the intersection of all .\nTherefore, the must stabilize all sets .\n\u220e\nThe LMI in Theorem 1 ###reference_orem1### can be solved using standard solvers.\nFig. 2 ###reference_### depicts an example of a controller that simultaneously stabilize multiple system by stabilizing the union of the uncertain regions .\nTheorem 1 ###reference_orem1### considers informativity for quadratic stabilization, but readily extends to synthesis problems with additional performance criteria such as the and formulations in van Waarde et al. (2022 ###reference_b28###).\nFor this section we have only considered stability for the systems that generated trajectory data. If we want to stabilize a continuous set , this would require solving a semi-infinite optimization problem and collecting an infinite amount of data.\nNext, we will show how to find an -probabilistically robust controller for the whole distribution with finite data."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Stabilization of Probabilistic Systems",
51
+ "text": "In Sect. 4 ###reference_###, we introduced a sufficient condition for a controller that provably stabilizes a finite set of systems for which there is available data.\nTo make this result usable for probabilistic systems with possibly infinite number of variations, we show that the solution to the finite program generalizes to the whole distribution using scenario optimization Campi et al. (2009 ###reference_b9###).\nFirst, we define an informativity notion for -probabilistic stabilization (cf. Def. 2 ###reference_inition2###).\nAssume all satisfy Assumption 2 ###reference_umption2###.\nAssume further all trajectories are created from i.i.d. samples of the probabilistic system (Def. 1 ###reference_inition1###).\nWe say a set of trajectories is informative for -probabilistic quadratic stabilization of the probabilistic system if there exists a state feedback gain and a matrix such that\nNext, we show how to determine a lower bound on the number of trajectories required to design an -robust controller with high probability.\nAssume that the generalized Slater condition (15 ###reference_###) is fulfilled for all .\nSelect a confidence parameter and a violation parameter .\nIf with and there exist , and scalars and such that (17 ###reference_###) holds for all , then the data is informative for -probabilistic quadratic stabilization with probability , i.e.,\nThe LMI (17 ###reference_###) for all is a convex scenario program with decision variables. The result follows from (Campi et al., 2009 ###reference_b9###, Theorem 1).\n\u220e\nNote the nested probabilities in Theorem 2 ###reference_orem2###.\nThe outer probability depends on the sampled data and quantifies the possibility that the samples are not representative of the underlying distribution in which case the controller might fail to generalize.\nHowever, since the required number of trajectories only scales logarithmically in this probability can be chosen relatively low without increasing the required samples too much.\nAgain, this result can be readily extended to the and setting in van Waarde et al. (2022 ###reference_b28###), in which case we can not only guarantee stability but also a minimum performance level for all systems in the fleet with high probability."
52
+ },
53
+ {
54
+ "section_id": "6",
55
+ "parent_section_id": null,
56
+ "section_name": "Numerical Example",
57
+ "text": "In this section, we apply the results of the scenario optimization for probabilistically robust direct data-driven control on a linear system benchmark problem.\nThe theorems developed above guarantee probabilistic stability only if a solution to the LMI is found.\nA natural question to ask is: do such LMIs have a feasible solution? In this section we give an example for a probabilistic system where they have and show that the proposed method compares favorably to prior work based in probabilistic system identification Berkenkamp and Schoellig (2015 ###reference_b4###); von Rohr et al. (2021 ###reference_b6###).\nFurther, we investigate the generalization of our method beyond the lower bound given in Theorem 2 ###reference_orem2### and the influence of the trajectory length on a specific example.\nIn summary, we find that on the chosen benchmark problem we can deal with relatively high levels of uncertainty and that our method generalizes well even on small data sets.\n222Source code to reproduce results is available at https://github.com/Data-Science-in-Mechanical-Engineering/rddc ###reference_cal-Engineering/rddc###."
58
+ },
59
+ {
60
+ "section_id": "6.1",
61
+ "parent_section_id": "6",
62
+ "section_name": "Probabilistic Linear System",
63
+ "text": "The synthetic benchmark problem investigated here is adapted from our previous work von Rohr et al. (2021 ###reference_b6###) and is based on a popular example system in the data-based control literature first proposed by Dean et al. (2020 ###reference_b30###).\nThe fleet distribution is\nwhere is the truncated normal distribution with mean and variance .\nWe truncate such that the mean is centered and a sample from the non-truncated normal is inside the interval with a probability of .\nThe mean is chosen as an unstable graph Laplacian system with with\nFor the variance over systems, we choose , which has a single parameter to control size of the domain and also allows for correlations between parameters.\nThe bound of Theorem 2 ###reference_orem2### imposes for and .\nIn our experiments, we first sample systems from the distribution and use each system to generate a trajectory of length with random initial conditions.\nTo verify that the controller generalizes to the whole distribution we sample new systems from and test their stability. The percentage of unstable systems is used as an estimate for .\nWe repeat the experiments for all combinations times and average the probability of closed-loop stability."
64
+ },
65
+ {
66
+ "section_id": "6.2",
67
+ "parent_section_id": "6",
68
+ "section_name": "Generalization and Feasibility of the Synthesis",
69
+ "text": "In the first experiment we validate the lower bound derived in Theorem 2 ###reference_orem2###.\nThe trajectory length is set to .\nThe results are shown in Fig. 3 ###reference_###.\nAs the number of observed systems is increased, the proposed method either yields a controller stabilizing the overwhelming majority of the fleet until the aleatoric uncertainty is too large and no -probabilistic robust control can be found.\nAs expected, for the theoretical lower bound we achieve the desired result that the synthesis returns an -probabilistic robust controller.\nWhile using the theoretical lower bound of the synthesis is feasible for distributions uncertainties up to and yields no controller for higher uncertainties.\nThe methods based on probabilistic system identification were able to find controllers for uncertainties up to von Rohr et al. (2021 ###reference_b6###) and Berkenkamp and Schoellig (2015 ###reference_b4###) (see the empirical results of von Rohr et al. (2021 ###reference_b6###)).\nAt least for the benchmark problem the proposed method can deal with higher uncertainties, despite working with uncertain scenarios.\nFurthermore, the controller generalizes even when only trajectory samples are available.\nIn Fig. 4 ###reference_### we can also observe that the relaxed problem with enables synthesis for probabilistic systems with even higher variance and yields controllers for , albeit without the guarantees of Theorem 2 ###reference_orem2###.\nThese findings are consistent with Umenberger and Sch\u00f6n (2018 ###reference_b19###) that reported learning stabilizing controllers after a few rollouts using probabilistic system identification."
70
+ },
71
+ {
72
+ "section_id": "6.3",
73
+ "parent_section_id": "6",
74
+ "section_name": "Effect of the Trajectory Length",
75
+ "text": "Our theoretical results we focus on the number of trajectories required to achieve a stabilization controller with high probability. However, the theory does not reveal the effect of the trajectory length. As long as the trajectory satisfies the generalized Slater condition, the result holds. However, since the available data influences the uncertainty sets our second experiment explores the influence of the trajectory length on the robustness and feasibility of the synthesis problem.\nFor these experiment we use a fixed distribution with .\nFig. 4 ###reference_### shows that the trajectory length has no effect on the stability. It does however influence the feasibility of the resulting LMI. In our experiment longer trajectories make the problem feasible more often. However, the effect is relatively small and even very short trajectories can be sufficient to achieve stability with high probability.\nTo investigate the effect of the trajectory length further we show the uncertainty sets for different values of in Fig. 5 ###reference_###.\nFor this, we choose the system and use it to generate trajectories of varying lengths. The process noise bound is . For each trajectory length, we produce different trajectories with different noise realizations. Then, we calculate the data-consistent set based on the trajectory and noise assumption.\nThis figure illustrates the following relationship: Longer trajectories do not decrease the size of the uncertainty sets, but they reduce their variance. The fact that more data does not reduce the uncertainty might first seem counter-intuitive. Usually, when determining an uncertain quantity, performing more measurements reduces the uncertainty, but only down to the measurement tool\u2019s tolerance. For determining the data-consistent set, the process noise plays the role of \u201cmeasurement tolerance\u201d. When we reach this tolerance, each new sample will not only provide new information but also obfuscate it with the newly introduced noise."
76
+ },
77
+ {
78
+ "section_id": "7",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "In this letter, we propose a new approach for direct-data driven control for probabilistic linear system based on the novel concept of probabilistic informativity.\nOur method utilizes a convex scenario program to design controllers that provably generalize to the whole distribution based on trajectory data from realizations of the probabilistic system.\nTo this extend we provide lower bounds for the necessary numbers of trajectories and demonstrate empirically that the resulting controller synthesis remains feasible even when the variance of the probabilistic system is large.\nFurther, the method can be used even when the trajectory length is rather short.\nThis makes the method suitable to devise preliminary stabilizing controllers for learning-based control even for unstable systems where sampling long open-loop trajectories is difficult.\nAn avenue for future work is developing data-based methods for unstable systems in an iterative fashion, much like reinforcement learning algorithms, with additional performance criterion where the controller is sequentially improved with more samples."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {},
86
+ "image_paths": {
87
+ "1(a)": {
88
+ "figure_path": "2306.16973v2_figure_1(a).png",
89
+ "caption": "Figure 1: Sketch of the proposed method: A controller is based on a set of collected trajectory data from N\ud835\udc41Nitalic_N systems with variations (here symbolized as quadcopters). The resulting controller is guaranteed to work on any unseen system drawn from the same probability distribution with high probability.",
90
+ "url": "http://arxiv.org/html/2306.16973v2/extracted/5488551/figures/drone_sketch.png"
91
+ },
92
+ "1(b)": {
93
+ "figure_path": "2306.16973v2_figure_1(b).png",
94
+ "caption": "Figure 1: Sketch of the proposed method: A controller is based on a set of collected trajectory data from N\ud835\udc41Nitalic_N systems with variations (here symbolized as quadcopters). The resulting controller is guaranteed to work on any unseen system drawn from the same probability distribution with high probability.",
95
+ "url": "http://arxiv.org/html/2306.16973v2/extracted/5488551/figures/drone_sketch.png"
96
+ },
97
+ "1(c)": {
98
+ "figure_path": "2306.16973v2_figure_1(c).png",
99
+ "caption": "Figure 1: Sketch of the proposed method: A controller is based on a set of collected trajectory data from N\ud835\udc41Nitalic_N systems with variations (here symbolized as quadcopters). The resulting controller is guaranteed to work on any unseen system drawn from the same probability distribution with high probability.",
100
+ "url": "http://arxiv.org/html/2306.16973v2/extracted/5488551/figures/drone_sketch.png"
101
+ },
102
+ "1(d)": {
103
+ "figure_path": "2306.16973v2_figure_1(d).png",
104
+ "caption": "Figure 1: Sketch of the proposed method: A controller is based on a set of collected trajectory data from N\ud835\udc41Nitalic_N systems with variations (here symbolized as quadcopters). The resulting controller is guaranteed to work on any unseen system drawn from the same probability distribution with high probability.",
105
+ "url": "http://arxiv.org/html/2306.16973v2/extracted/5488551/figures/drone_sketch.png"
106
+ }
107
+ },
108
+ "validation": true,
109
+ "references": [
110
+ {
111
+ "1": {
112
+ "title": "Distributed deep reinforcement learning for fighting forest fires with a network of aerial robots,",
113
+ "author": "R. N. Haksar, M. Schwager,",
114
+ "venue": "in: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018, pp. 1067\u20131074. doi:10.1109/IROS.2018.8593539.",
115
+ "url": null
116
+ }
117
+ },
118
+ {
119
+ "2": {
120
+ "title": "New trends in robotics for agriculture: Integration and assessment of a real fleet of robots,",
121
+ "author": "L. Emmi, M. Gonzalez-de Soto, G. Pajares, P. Gonzalez-de Santos,",
122
+ "venue": "Scientific World Journal 2014 (2014). doi:10.1155/2014/404059.",
123
+ "url": null
124
+ }
125
+ },
126
+ {
127
+ "3": {
128
+ "title": "Data-driven control based on the behavioral approach: From theory to applications in power systems,",
129
+ "author": "I. Markovsky, L. Huang, F. D\u00f6rfler,",
130
+ "venue": "IEEE Control Systems Magazine 43 (2023) 28\u201368. doi:10.1109/MCS.2023.3291638.",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "4": {
136
+ "title": "Safe and robust learning control with gaussian processes,",
137
+ "author": "F. Berkenkamp, A. P. Schoellig,",
138
+ "venue": "in: European Control Conference, 2015, pp. 2496\u20132501. doi:10.1109/ECC.2015.7330913.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "5": {
144
+ "title": "An uncertainty-based control lyapunov approach for control-affine systems modeled by gaussian process,",
145
+ "author": "J. Umlauft, L. P\u00f6hler, S. Hirche,",
146
+ "venue": "IEEE Control Systems Letters 2 (2018) 483\u2013488. doi:10.1109/LCSYS.2018.2841961.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "6": {
152
+ "title": "Probabilistic robust linear quadratic regulators with gaussian processes,",
153
+ "author": "A. von Rohr, M. Neumann-Brosig, S. Trimpe,",
154
+ "venue": "in: Proc. of the 3rd Conference on Learning for Dynamics and Control, volume 144 of Proceedings of Machine Learning Research, 2021, pp. 324\u2013335.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "7": {
160
+ "title": "Backstepping tracking control using gaussian processes with event-triggered online learning,",
161
+ "author": "J. Jiao, A. Capone, S. Hirche,",
162
+ "venue": "IEEE Control Systems Letters 6 (2022) 3176\u20133181. doi:10.1109/LCSYS.2022.3183530.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "8": {
168
+ "title": "Formulas for data-driven control: Stabilization, optimality, and robustness,",
169
+ "author": "C. De Persis, P. Tesi,",
170
+ "venue": "IEEE Transactions on Automatic Control 65 (2020) 909\u2013924. doi:10.1109/TAC.2019.2959924.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "9": {
176
+ "title": "The scenario approach for systems and control design,",
177
+ "author": "M. C. Campi, S. Garatti, M. Prandini,",
178
+ "venue": "Annual Reviews in Control 33 (2009) 149 \u2013 157. doi:https://doi.org/10.1016/j.arcontrol.2009.07.001.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "10": {
184
+ "title": "Data informativity: A new perspective on data-driven analysis and control,",
185
+ "author": "H. J. van Waarde, J. Eising, H. L. Trentelman, M. K. Camlibel,",
186
+ "venue": "IEEE Transactions on Automatic Control 65 (2020) 4753\u20134768. doi:10.1109/TAC.2020.2966717.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "11": {
192
+ "title": "Independent vs. joint estimation in multi-agent iterative learning control,",
193
+ "author": "A. Schoellig, J. Alonso-Mora, R. D\u2019Andrea,",
194
+ "venue": "in: IEEE Conference on Decision and Control, 2010, pp. 6949\u20136954. doi:10.1109/CDC.2010.5717888.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "12": {
200
+ "title": "Limited benefit of joint estimation in multi-agent iterative learning,",
201
+ "author": "A. P. Schoellig, J. Alonso-Mora, R. D\u2019Andrea,",
202
+ "venue": "Asian Journal of Control 14 (2012) 613\u2013623. doi:10.1002/asjc.398.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "13": {
208
+ "title": "To share or not to share? performance guarantees and the asymmetric nature of cross-robot experience transfer,",
209
+ "author": "M. J. Sorocky, S. Zhou, A. P. Schoellig,",
210
+ "venue": "IEEE Control Systems Letters 5 (2020) 923\u2013928. doi:10.1109/LCSYS.2020.3005886.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "14": {
216
+ "title": "Robot Learning From Randomized Simulations: A Review,",
217
+ "author": "F. Muratore, F. Ramos, G. Turk, W. Yu, M. Gienger, J. Peters,",
218
+ "venue": "Frontiers in Robotics and AI 9 (2022). doi:10.3389/frobt.2022.799893.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "15": {
224
+ "title": "Learning-based model predictive control for safe exploration,",
225
+ "author": "T. Koller, F. Berkenkamp, M. Turchetta, A. Krause,",
226
+ "venue": "in: IEEE Conference on Decision and Control, 2018, pp. 6059\u20136066. doi:10.1109/CDC.2018.8619572.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "16": {
232
+ "title": "Provably robust learning-based approach for high-accuracy tracking control of lagrangian systems,",
233
+ "author": "M. K. Helwa, A. Heins, A. P. Schoellig,",
234
+ "venue": "IEEE Robotics and Automation Letters 4 (2019) 1587\u20131594. doi:10.1109/LRA.2019.2896728.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "17": {
240
+ "title": "Learning-enhanced robust controller synthesis with rigorous statistical and control-theoretic guarantees,",
241
+ "author": "C. Fiedler, C. W. Scherer, S. Trimpe,",
242
+ "venue": "in: IEEE Conference on Decision and Control, 2021, pp. 5122\u20135129. doi:10.1109/CDC45484.2021.9682836.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "18": {
248
+ "title": "Learning convex bounds for linear quadratic control policy synthesis,",
249
+ "author": "J. Umenberger, T. B. Sch\u00f6n,",
250
+ "venue": "in: Advances in Neural Information Processing Systems, 2018, pp. 9561\u20139572.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "19": {
256
+ "title": "A note on persistency of excitation,",
257
+ "author": "J. C. Willems, P. Rapisarda, I. Markovsky, B. L. De Moor,",
258
+ "venue": "Systems & Control Letters 54 (2005) 325\u2013329. doi:10.1016/j.sysconle.2004.09.003.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "20": {
264
+ "title": "Low-complexity learning of linear quadratic regulators from noisy data,",
265
+ "author": "C. De Persis, P. Tesi,",
266
+ "venue": "Automatica 128 (2021) 109548. doi:10.1016/j.automatica.2021.109548.",
267
+ "url": null
268
+ }
269
+ },
270
+ {
271
+ "21": {
272
+ "title": "On the role of regularization in direct data-driven LQR control,",
273
+ "author": "F. D\u00f6rfler, P. Tesi, C. De Persis,",
274
+ "venue": "in: IEEE Conference on Decision and Control, 2022, pp. 1091\u20131098. doi:10.1109/CDC51059.2022.9992770.",
275
+ "url": null
276
+ }
277
+ },
278
+ {
279
+ "22": {
280
+ "title": "Bridging direct and indirect data-driven control formulations via regularizations and relaxations,",
281
+ "author": "F. D\u00f6rfler, J. Coulson, I. Markovsky,",
282
+ "venue": "IEEE Transactions on Automatic Control 68 (2023a) 883\u2013897. doi:10.1109/TAC.2022.3148374.",
283
+ "url": null
284
+ }
285
+ },
286
+ {
287
+ "23": {
288
+ "title": "On the certainty-equivalence approach to direct data-driven LQR design,",
289
+ "author": "F. D\u00f6rfler, P. Tesi, C. De Persis,",
290
+ "venue": "IEEE Transactions on Automatic Control (2023b) 1\u20138. doi:10.1109/TAC.2023.3253787.",
291
+ "url": null
292
+ }
293
+ },
294
+ {
295
+ "24": {
296
+ "title": "Robust data-driven state-feedback design,",
297
+ "author": "J. Berberich, A. Romer, C. W. Scherer, F. Allg\u00f6wer,",
298
+ "venue": "in: American Control Conference, 2020, pp. 1532\u20131538. doi:10.23919/ACC45564.2020.9147320.",
299
+ "url": null
300
+ }
301
+ },
302
+ {
303
+ "25": {
304
+ "title": "Combining prior knowledge and data for robust controller design,",
305
+ "author": "J. Berberich, C. W. Scherer, F. Allg\u00f6wer,",
306
+ "venue": "IEEE Transactions on Automatic Control (2022) 1\u201316. doi:10.1109/TAC.2022.3209342.",
307
+ "url": null
308
+ }
309
+ },
310
+ {
311
+ "26": {
312
+ "title": "Trade-offs in learning controllers from noisy data,",
313
+ "author": "A. Bisoffi, C. De Persis, P. Tesi,",
314
+ "venue": "Systems & Control Letters 154 (2021) 104985. doi:10.1016/j.sysconle.2021.104985.",
315
+ "url": null
316
+ }
317
+ },
318
+ {
319
+ "27": {
320
+ "title": "From noisy data to feedback controllers: Nonconservative design via a matrix s-lemma,",
321
+ "author": "H. J. van Waarde, M. K. Camlibel, M. Mesbahi,",
322
+ "venue": "IEEE Transactions on Automatic Control 67 (2022) 162\u2013175. doi:10.1109/TAC.2020.3047577.",
323
+ "url": null
324
+ }
325
+ },
326
+ {
327
+ "28": {
328
+ "title": "Distributionally robust chance constrained data-enabled predictive control,",
329
+ "author": "J. Coulson, J. Lygeros, F. D\u00f6rfler,",
330
+ "venue": "IEEE Transactions on Automatic Control 67 (2022) 3289\u20133304. doi:10.1109/TAC.2021.3097706.",
331
+ "url": null
332
+ }
333
+ },
334
+ {
335
+ "29": {
336
+ "title": "On the sample complexity of the linear quadratic regulator,",
337
+ "author": "S. Dean, H. Mania, N. Matni, B. Recht, S. Tu,",
338
+ "venue": "Foundations of Computational Mathematics 20 (2020) 633\u2013679. doi:10.1007/s10208-019-09426-y.",
339
+ "url": null
340
+ }
341
+ }
342
+ ],
343
+ "url": "http://arxiv.org/html/2306.16973v2"
344
+ }
20240322/2307.05279v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2307.08080v2.json ADDED
@@ -0,0 +1,272 @@
1
+ {
2
+ "title": "Sampling Proper Colorings on Line Graphs Using (1+\ud835\udc5c\u2062(1))\u2062\u0394 Colors",
3
+ "abstract": "We prove that the single-site Glauber dynamics for sampling proper -colorings mixes in time on line graphs with vertices and maximum degree when . The main tool in our proof is the matrix trickle-down theorem developed by Abdolazimi, Liu and Oveis Gharan in [ALG21].",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "A proper (vertex) -coloring of a graph is an assignment of each vertex to one of colors so that the colors of adjacent vertices are distinct. Let be the maximum degree of . It has been widely conjectured that the single-site Glauber dynamics for uniformly sampling proper colorings on mixes rapidly as long as the number of colors is at least . Since the seminal work of Jerrum [Jer95 ###reference_bx14###] and Salas and Sokal [SS97 ###reference_bx18###] where Glauber dynamics was shown to be rapidly mixing when , a number of works devoted to resolving the conjecture and the current best bound requires for some [CDM19 ###reference_bx4###], which is still far from desired.\nAnother line of work to approach the conjecture is by considering special graph families, with the most successful cases being those graphs with large girths (e.g., [DF03 ###reference_bx8###, Mol04 ###reference_bx16###, DFHV13 ###reference_bx9###, CG\u0160V21 ###reference_bx5###, FGYZ20 ###reference_bx11###]). Notably in a very recent work of Chen, Liu, Mani and Moitra [CLMM23 ###reference_bx6###], it was proven that for any , there exists a such that for any graph with the maximum degree at most and the girth at least , the single-site Glauber dynamics for sampling proper -colorings on mixes rapidly as long as . This result almost solves the aforementioned conjecture regarding the case of large girth graphs.\nA closely related problem is sampling proper edge colorings, where one assigns colors to the edges instead of vertices so that adjacent edges have distinct colors. It is clear that a proper edge coloring of naturally corresponds to a vertex coloring of the line graph of in which the vertices are the edges of and two vertices are adjacent in the line graph if and only if the corresponding edges are incident in . As a result, studying sampling proper colorings on line graphs is of particular interest. Unlike graphs with a large girth which are \u201clocally sparse\u201d, line graphs are \u201clocally dense\u201d. Specifically, a line graph is formed by gluing together several cliques with each vertex belonging to exactly two cliques. Previous methods such as coupling can hardly take advantage of this special structure and therefore the rapid mixing condition for line graphs remains the same as that for general graphs for a long time.\nIn a recent breakthrough, Abdolazimi, Liu and Oveis Gharan [ALG21 ###reference_bx2###] developed a new technique to establish rapid mixing results for Glauber dynamics, namely the matrix trickle-down theorem, which can well-utilize the structural information of the underlying graphs. As a result, it was shown in [ALG21 ###reference_bx2###] the Glauber dynamics for sampling colorings on line graphs mixes rapidly as long as . The technique, incorporating with recent revolutionary development of the \u201clocal-to-global\u201d scheme for high-dimensional expanders [AL20 ###reference_bx1###, CLV21 ###reference_bx7###], yields optimal mixing time for certain Markov chains. Specifically, the matrix trickle-down theorem is a generalization of the trickle-down theorem in [Opp18 ###reference_bx17###] which established connections between the spectral gaps of the local walks on links of the underlying simplicial complexes. 
Instead of solely looking at the spectral gap, the matrix trickle-down theorem takes into account the local walks themselves, and establishes more general connections between (the transition matrix of) local walks on links.\nTo apply the matrix trickle-down, one needs to design appropriate matrix upper bounds for local walks while keeping their spectrum bounded. Moreover, these matrices must satisfy certain inductive inequality constraints. The construction of these matrix upper bounds is the main technical challenge since one has to have precise control over the magnitudes of their eigenvalues. In this work, we systematically study the constraints arising in the matrix trickle-down theorem and provide an almost optimal construction of matrix upper bounds for sampling proper colorings on line graphs. Our construction has several advantages compared to the one in [ALG21 ###reference_bx2###]:\nOur construction of the matrix upper bounds for each local walk is explicit.\nWe directly relate the spectrum of each matrix upper bound to that of the adjacent matrix of the line graph, which is a clique locally and is therefore well-understood. Therefore, we can obtain desired spectrum bound in an improved regime.\nWe reduce the existence of matrix upper bounds to the feasibility of a system of inequalities, for which we give a complete characterization.\nAs a result, we obtain an 111The notation means that the hidden constant before might depend on . mixing time for bounded degree line graph colorings with extra colors.\nLet be a line graph with vertices and maximum degree . If , then the Glauber dynamics on\nthe -colorings of has modified log-Sobolev constant , and thus mixes in time .\nMoreover, we obtain an mixing time on the (degree unbounded) family of line graphs with extra colors.\nLet be a line graph with vertices and maximum degree . If , then the Glauber dynamics on\nthe -colorings of has spectral gap larger than , and thus\nmixes in time .\nIn fact, our proofs hold for more general list coloring instances with extra colors. See Theorem 24 ###reference_orem24### for the most general statement.\nOur results can also be stated in terms of sampling edge colorings. Then the rapid mixing condition becomes to where is the maximum degree of the graph (note that the maximum degree of corresponding line graph can be as large as ). We remark that this is close to the ergodicity threshold for the single-site Glaubder dynamics for sampling edge colorings since it is known that the chain is reducible when [HJNP19 ###reference_bx13###]. On the other hand, Vizing\u2019s theorem states that the graph is edge colorable whenever and rapid mixing under the same condition is known for trees [DHP20 ###reference_bx10###].\nWe will introduce necessary notations and some preliminary results in Section 2 ###reference_###. In particular, we review the matrix trickle-down theorem in Section 2.5 ###reference_###. Then we describe our construction and prove the main results in Section 3 ###reference_###. A main ingredient of our proof, the analysis of the inequalities arising in the matrix trickle-down theorem, is presented in Section 4 ###reference_###. This analysis might be of independent interest."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Simplicial Complexes",
21
+ "text": "Let be a universe. A simplicial complex is a collection of subsets of that is closed under taking subsets. That is, if and , then . Every is called a face, and a face that is not a proper subset of any other face is called a maximal face or a facet. The dimension of a face is , namely the number of elements in . For every , we use to denote the set of faces of dimension . Specifically, . The dimension of is the maximum dimension of faces in . We say is a pure -dimensional simplicial complex if all maximal faces in are of dimension . In the following, we assume is a pure -dimensional simplicial complex. For every face , we define its co-dimension .\nLet be a distribution over the maximal faces . We use the pair to denote a weighted simplicial complex where for each , the distribution induces a distribution over . Formally, for every and every ,\nOne can easily verify that is a distribution on . Combined with itself, for each , the distribution over is defined. Sometimes, we omit the subscript when , i.e., we write for .\nFor a face of dimension , we define its link as\nClearly, is a pure -dimensional simplicial complex. Similarly, for every , we use to denote the set of faces in of dimension . We also use to denote the marginal distribution on . Formally, for every ,\nWe drop the subscript when , i.e., we write for .\nNote that the marginal distributions are the same as the distributions over induced from the weighted simplicial complex , so there is no ambiguity about this notation.\nLet , be two pure weighted simplicial complexes of dimension and respectively. We can define another pure weighted simplicial complex of dimension whose maximal faces are the disjoint union of maximal faces of and . Moreover, is the product measure . is also called the product of and . This definition can be naturally generalized to the products of more than two weighted simplicial complexes.\nThen we define notations for matrices related to . Define as supported on . For convenience, define the pseudo inverse of as for and otherwise. Similarly, the pseudo inverse square root is defined as for and otherwise."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Vertex Coloring",
27
+ "text": "Fix a color set where . Let be an undirected graph and be a collection of color lists associated with each vertex in . For every , we use to denote the size of . The pair is an instance of list-coloring. If there exists an integer such that for any where is the degree of , we call a -extra list-coloring instance.\nWe say is a proper coloring if for any and for any . We also regard as a set of pairs of vertex and color, namely . Let denote the set of all proper colorings and be the uniform distribution on . Let and . We say is a proper partial coloring on if it is a proper coloring on where is the subgraph of induced by and . We also define on as .\nFor a subset and a partial coloring on , define .\nAssume . The list-coloring instance can be naturally represented as a weighted simplicial complex where consists of all proper partial colorings and .\nThe following identity is useful throughout the paper:\nLet . For every partial coloring on , every and partial coloring , it holds that"
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Markov Chains and Mixing Time",
33
+ "text": "Let be a finite discrete state space. Let be the transition matrix of a Markov chain with the stationary distribution . We say is reversible with respect to the stationary distribution if it satisfies the detailed balance condition, i.e., for every , it holds that\nOnly reversible chains are considered in this paper.\nFor a weighted simplicial complex , the single-site Glauber dynamics on is a Markov chain on with the transition matrix\nFrom an operational view, each transition of the Glauber dynamics, with the current state being , consists of two steps:\nUniformly select a random .\nSelect a following the distribution and transfer to the state .\nOne can easily verify that the Glauber dynamics on is reversible, with as the stationary distribution.\nWe are concerned with the convergence rate of Markov chains, which is described by the mixing time.\nThe mixing time is defined as the duration required\nfor the total variation distance between \nand the stationary distribution to become smaller than , starting from any initial distribution . Formally,\nwhere is the total variation distance.\nFor a reversible Markov chain on a discrete space , since is self-adjoint with respect to the inner product induced by , all eigenvalues of are real.\nSo we can define the spectral gap of as , where denotes the second largest eigenvalue of .\nAnd the absolute spectral gap of as .\nThe following lemma arises to bound the mixing time of a reversible Markov chain by its absolute spectral gap.\nFor an irreducible reversible Markov chain on a discrete space,\nwhere .\nNotice that when all eigenvalues of are non-negative, the absolute spectral gap in the above lemma equals the spectral gap. This is the case of Glauber dynamics, as in Proposition 5 ###reference_orem5###."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "Local-to-Global Scheme",
39
+ "text": "The local random walk on is defined as\nfor all . An operational view of the local chain is as follows: when the current state is at , move to with probability proportional to . Note that we will treat as a matrix in such that the undefined entries are . It is obvious that is reversible with respect to . Specifically, we denote the local random walk on by , i.e.,\nis also reversible with respect to .\nWe say a weighted simplicial complex is -local spectral expander if for any and , .\nWe focus on the spectral gaps of local walks in this paper. As studied earlier by [AL20 ###reference_bx1###], the local spectral expansion implies bounds for Glauber dynamics.\nLet be a weighted simplicial complex where is a uniform distribution over proper list-colorings over a graph with and maximum degree . If is a -local spectral expander, then\nall eigenvalues of the Glauber dynamics are real and non-negative;\nthe second largest eigenvalue of the Glauber dynamics is at most .\nThe mixing time in terms of the local spectral expansion then follows from the proposition and Lemma 4 ###reference_orem4###. To obtain a tighter mixing time bound, we employ the following proposition concerning the local spectral expansion and the modified log-Sobolev constant.\nLet be a weighted simplicial complex where is a uniform distribution over proper list-colorings of a graph with and maximum degree . If is a -local spectral expander with for all , then the modified log-Sobolev constant is at least , and the mixing time of the Glauber dynamics is at most ."
40
+ },
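As a rough, assumption-laden illustration of the local-expander condition, the sketch below row-normalizes a weighted link graph into a local-walk transition matrix and reports its second-largest eigenvalue; the weight matrix W is a made-up example, not a link of the coloring complex studied here.

```python
import numpy as np

def local_walk_second_eigenvalue(W):
    """Second-largest eigenvalue of the walk that moves from x to y with
    probability proportional to the pair weight W[x, y] (W symmetric, >= 0)."""
    degrees = W.sum(axis=1)
    # D^{-1/2} W D^{-1/2} is symmetric and similar to the transition matrix
    # D^{-1} W, so it has the same (real) spectrum and is stable to diagonalize.
    d_inv_sqrt = 1.0 / np.sqrt(degrees)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(S))[-2]

# Made-up link weights on four vertices; a gamma-local spectral expander
# would require this value to be at most gamma for every link.
W = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])
print(local_walk_second_eigenvalue(W))  # 0.0 for this example
```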
41
+ {
42
+ "section_id": "2.5",
43
+ "parent_section_id": "2",
44
+ "section_name": "Trickle-Down Theorems",
45
+ "text": "The trickle-down theorem of Oppenheim states that the spectral gaps of local walks in a certain dimension imply spectral gaps of local walks in larger dimensions.\nGiven a weighted simplicial complex , suppose the following holds:\n, i.e., the local walk is irreducible;\nThere exists some such that for all .\nThen the local walk satisfies the spectral bound .\nA more general version of the trickle-down theorem was established in [ALG21 ###reference_bx2###]. Instead of bounding the second largest eigenvalues of local walks, it bounds (the transition matrix of) local walks directly.\nA symmetric matrix is positive semi-definite, written as if and only if all its eigenvalues are nonnegative. For two symmetric matrices and of the same dimension, we write , or equivalently if and only if . As a result, the binary relation defines an order between matrices called Loewner Order. For brevity, we write if for any in the following statement.\nGiven a -dimensional weighted simplicial complex , suppose the following conditions hold:\nwhere is the local walk on ;\nFor a family of matrices and a constant ,\nwhere is the stationary distribution of .\nThen for any matrix satisfying and , it holds that\nIn particular, .\nWe include a proof of Proposition 8 ###reference_orem8### in Appendix A ###reference_### for completeness.\nThe following proposition is the main tool we will use to prove the main theorems. It was obtained in [ALG21 ###reference_bx2###] by applying Proposition 8 ###reference_orem8### to simplicial complexes inductively.\nGiven a pure -dimensional weighted simplicial complex , if there exists a family of matrices satisfying\nFor every ,\nFor every face with , one of the following two conditions hold:\nis the product of pure weighted simplicial complexes of dimension respectively and\nwhere for an arbitrary .\nThen for every face with , it holds that\nIn particular, ."
46
+ },
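The statements above compare matrices in the Loewner order, which operationally is a positive semi-definiteness check; a small numpy sketch of that check, on made-up matrices, follows.

```python
import numpy as np

def loewner_leq(A, B, tol=1e-12):
    """A <= B in the Loewner order iff B - A is positive semi-definite."""
    return bool(np.all(np.linalg.eigvalsh(B - A) >= -tol))

# Example: any symmetric A with spectral norm c satisfies -c*I <= A <= c*I,
# the kind of two-sided bound the matrix trickle-down theorem propagates.
A = np.array([[0.5, 0.2],
              [0.2, -0.3]])
c = np.linalg.norm(A, 2)   # spectral norm
I = np.eye(2)
print(loewner_leq(-c * I, A), loewner_leq(A, c * I))  # True True
```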
47
+ {
48
+ "section_id": "2.6",
49
+ "parent_section_id": "2",
50
+ "section_name": "Properties of Loewner Order",
51
+ "text": "We collect some useful results on the properties of the Loewner order below.\nLet be two matrices in . For any constant , we have\n. Moreover, and .\nObviously\nThe second and third inequalities then follow from the first one by expanding the respective LHS.\n\u220e\nWe use to denote the support of a matrix , namely the collection of coordinates with nonzero value.\nLet be a finite set and be a collection of subsets of . Let be a matrix. Assume where each is a matrix satisfying . For every , let . For every , let . Then\nLet denote the row in indexed by . For any ,\nWe write . Then\nBy Cauchy-Schwarz inequality,\nTherefore,\n\u220e\nLet be a collection of symmetric matrices and . Then\nIt follows from Lemma 11 ###reference_orem11### directly by substituting by and noting that .\n\u220e\nLet be a matrix and be two symmetric matrices such that . Then\nFor any , we have\n\u220e\nLet be matrices where and are diagonal. Assume . Then\nSince are diagonal, . This lemma is then implied by Lemma 13 ###reference_orem13###.\n\u220e"
52
+ },
53
+ {
54
+ "section_id": "3",
55
+ "parent_section_id": null,
56
+ "section_name": "Vertex Coloring on Line Graphs",
57
+ "text": "In this section, we fix a graph which is the line graph of with maximum degree . As a result, and the maximum degree of is at most . Let be a -extra list-coloring instance with . After fixing notations in Section 3.1 ###reference_###, we will construct matrices fulfilling requirements of Proposition 9 ###reference_orem9### in Section 3.2 ###reference_### and Section 3.3 ###reference_###. Then we prove the main theorems in Section 3.4 ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "3.1",
61
+ "parent_section_id": "3",
62
+ "section_name": "Notations",
63
+ "text": "We fix some notations that will be used throughout the construction. Some of them might have been introduced in Section 2 ###reference_###. Nevertheless, we summarize here for easier reference.\nLet and be a partial coloring of . We define\nas the set of colors used by ;\nas the set of vertices not colored by ;\nas the set of vertices colored by .\nFor any , we use to denote the partial coloring obtained from by restricting on . Let be the subgraph induced by , and be the degree of on .\nWe define the color list after pinning as and for every . Similarly, for every two distinct vertices , we define and . Then is the list-coloring instance after pinning where .\nFor every , we use to denote the set of edges in incident to . By the definition of a line graph, is a clique in . Let denote the vertices of not in , i.e., .\nFix a color . We write and . We also use to denote the size of minus one. Note that is still a clique after pinning . Define as for every distinct and all other entries , which is obtained by restricting the adjacency matrix of to the vertices that appear in . Let be the identity matrix restricted on , and similarly be the identity matrix restricted on ."
64
+ },
65
+ {
66
+ "section_id": "3.2",
67
+ "parent_section_id": "3",
68
+ "section_name": "Base Case",
69
+ "text": "The base case of Proposition 9 ###reference_orem9### is that\nfor any with . Our construction of is similar to the one in [ALG21 ###reference_bx2###].\nWe can explicitly write down when . Let be the instance of list-coloring after pinning in . If is disconnected, then . In this case, we let be the all zero matrix. Otherwise, assume . For the sake of brevity, we drop the superscript of and in this section. For every and , we have\nLet be the vector with on positions indexed by for every and otherwise. Define similarly. Let be the vector with on positions indexed by or where and otherwise. We can write as\nFor every and , we have\nLet be the matrix where if and the other entries are .\nAs a result,\nWe can therefore express as\nwhere we used Lemma 10 ###reference_orem10###.\nWe now claim that\nIt follows from the claim that\nFor every , define the matrix as if and otherwise\nLet be the block-diagonal matrix with block on the diagonal for each .\nLet be matrix that if and the other entries are . Observing that the -th row summation of is at most , we have\nHence .\nIt remains to verify the claim. Note that since ,\nSince , the coefficient . Therefore, by Lemma 10 ###reference_orem10###, we have\nAnd hence,"
70
+ },
71
+ {
72
+ "section_id": "3.3",
73
+ "parent_section_id": "3",
74
+ "section_name": "Induction Step",
75
+ "text": "The induction step in Proposition 9 ###reference_orem9### is to show that for every with and connected ,\nFor every and , we will define a matrix and let be the block diagonal matrix with block for every .\nIt is not hard to see that we only require\nto hold for every and with connected , where is restricted on . We now describe our construction of for a fixed color . We write into the sum of a diagonal matrix and an off-diagonal matrix, i.e.,\nwhere is an off-diagonal matrix and is a diagonal matrix. For the off-diagonal matrix , we further decompose it into , where with .\nNote that we can extend the notations to those with disconnected . Following Proposition 9 ###reference_orem9###, for with disconnected , that is, when is the product of pure weighted simplicial complexes of dimension respectively, we define\nwhere for an arbitrary .\nWe can write above as the block-diagonal matrices with block for each and decompose as in the connected case. Plugging into (16 ###reference_###), we obtain\nWe can also decompose for with disconnected similarly.\nFrom now on, when is clear from the context, we will omit the superscript for matrices. For example, we will write , , , , , , , instead of , , , , , , , respectively. Also we write for . Plugging the above construction of into (14 ###reference_###) and remembering that the superscript has been omitted, we obtain\nIt follows from Corollary 12 ###reference_orem12### that\nSince each vertex only occurs in and assuming in , by Lemma 11 ###reference_orem11###,\nAs a result, in order for (13 ###reference_###) to hold, we only need to design for every and satisfying\nWe have\nThen\nAssume for any . We have\nThen\nAlso for any where , . From the analysis in the above two cases, by Lemma 23 ###reference_orem23###, the constraint (18 ###reference_###) is satisfied as long as\nand\nfor any with . We remark that the above constraints ( (31 ###reference_###) and (32 ###reference_###)) only exist for vertices with since and is connected. This is crucial since otherwise, the inequality system derived later has no solution. Assume . Since\nwe have as long as .\nWe strengthen this constraint to .\nFor brevity, we donote by in the following calculation. Therefore, our constraints for and are\nIt follows from Lemma 25 ###reference_orem25### that there exists a feasible solution of such that as long as . Since , the requirement can be strengthened to and can be upper bounded by\nTherefore, we can set , then by Lemma 26 ###reference_orem26###, there is a feasible solution of the Equation ###reference_01###\nif where .\nNote that when . Therefore, our final constraints for are\nTaking , we obtain the final requirement for :"
76
+ },
77
+ {
78
+ "section_id": "3.3.1",
79
+ "parent_section_id": "3.3",
80
+ "section_name": "3.3.1 Construction of",
81
+ "text": "A natural starting point (as did in [ALG21 ###reference_bx2###]) is to recursively define and this yields an explicit expression for . However, under this construction, the LHS of (18 ###reference_###) becomes , which is too large for (18 ###reference_###) to be feasible in the regime we are interested in. Nevertheless, it is still helpful to see what looks like under this recursive definition.\nFor the base case , if and is the common end vertex of in , let . Otherwise set . It is obvious that .\n\nWhen , we can expand the recursion down to faces of dimension :\nRecall that . In order to decrease the LHS of eq. 18 ###reference_###, we introduce a collection of positive coefficients decreasing in for whose value will be determined later. Especially . For connected , define\nWhen , is trivial. So in the following analysis, we assume .\nNote that the above relation holds for with disconnected .\nFor with disconnected , the identity (19 ###reference_###) holds.\nFix and . We assume the connected component of containing is indexed by . Then we have\nSince conditioning on does not affect the distribution of , we can further write above as\n\u220e\nAs a result, (19 ###reference_###) holds for all , and we can deduce the following relation between \u2019s whose co-dimensions differ by one.\nwhere .\nFor any ,\n\u220e\nWe remark that Lemma 15 ###reference_orem15### is essential in the above proof since pinning a single might result in disconnected .\nIt follows from the definition that is proportional to the expectation of the base cases when the boundary is drawn from . For some technical reasons, we would like to isolate those boundaries containing the color . This leads us to the following lemma.\nwhere is the matrix satisfying for any ,\nand all other entries .\nLet . We denote the set of proper partial colorings restricted on when is pinned by . Formally, . It follows from eq. 19 ###reference_### that for any ,\nNote that for every , we have\nPlugging this into eq. 20 ###reference_###, we have\nObserve that if for a , it holds that , then\nWe can further write as\nwhere the last line follows from the fact that implies .\n\u220e"
82
+ },
83
+ {
84
+ "section_id": "3.3.2",
85
+ "parent_section_id": "3.3",
86
+ "section_name": "3.3.2 Spectral analysis of",
87
+ "text": "In the following lemma, we show that each matrix can be written as the sum of two matrices which we call the main term and the remainder respectively. The main term only depends on the adjacency matrix and terms under various boundary conditions and is irrelevant to terms. All the effects of terms are collected in the remainder.\nFor every such that , define as the diagonal matrix such that for every :\nwhere .\nLet . To ease the notation, when and are clear from the context, we use and to denote the partial coloring and respectively. We also use to denote and define similarly.\nUsing our new notations, we have\nObserving that since , we have\nSo we can write as\nwhere\nBy definition, it holds that and . So we have . Therefore,\nwhere the last inequality follows from the fact that .\nTaking row summation of , we obtain that .\n\u220e\nIn order to bound LHS of Equation 18 ###reference_###, we introduce the following lemma.\nFor every , it holds\nThe proof of Lemma 19 ###reference_orem19### is included in Appendix B ###reference_###.\nWe assume where is a slowly increasing function. In the following two lemmas, we bound .\nLet be the set of partial colorings in where none of vertices in is colored .\n.\nIt follows from Lemma 17 ###reference_orem17### that\nwhere is due to Corollary 12 ###reference_orem12###.\n\u220e\nIn the following discussion, let .\n.\nBy Lemma 19 ###reference_orem19###, for any ,\nThen it follows from Lemma 18 ###reference_orem18### and that\n\u220e\n.\nLet . Recall that . As we did in Lemma 17 ###reference_orem17###, we use the notation to represent the set of proper partial colorings restricted on when is pinned. Therefore,\nwhere the second equation is obtained by taking the summation over the color of when others are fixed.\n\u220e\nWe are now ready to bound in the LHS of Equation 18 ###reference_###.\nThere exists a sequence of non-negative numbers such that\nApplying Lemma 17 ###reference_orem17### and Lemma 20 ###reference_orem20###, we obtain\nThen by Lemma 18 ###reference_orem18### and Lemma 21 ###reference_orem21###, we can bound above by\nwhere and .\nNaturally, we want to find a sequence of so that the spectral radius of the following matrices appearing in the non-remainder terms in Equation 24 ###reference_### is small:\nSince the spectrum of is , the spectrum of is\nWe want to be of order . This can be achieved by picking as a solution to the recurrence relation . The solution we choose is\nThen we have\nfor . In particular, when , , which is consistent with the above bound. So we have for .\nNote that , it then follows from Equation 25 ###reference_### and Lemma 22 ###reference_orem22### that\nA direct calculation yields that\nCombining Equation 26 ###reference_### and Equation 27 ###reference_### finishes the proof.\n\u220e"
88
+ },
89
+ {
90
+ "section_id": "3.3.3",
91
+ "parent_section_id": "3.3",
92
+ "section_name": "3.3.3 Construction of",
93
+ "text": "For of co-dimension such that is connected, we introduce coefficients and whose values will be determined later, and define as follows:\nfor any and all other entries are where means and are adjacent in .\nWhen , this is exactly the base case considered in Section 3.2 ###reference_###. According to (9 ###reference_###), we have . The definition of above for with connected extends to all by (17 ###reference_###).\nNotice that we only need to satisfy the inductive constraint (18 ###reference_###) when is connected. By Lemma 14 ###reference_orem14###, the constraint (18 ###reference_###) is equivalent to\nDenote the RHS of (29 ###reference_###) by , i.e.,\nThen we have\nwhere the last equality follows from the fact that .\nWe have\nThen\nAssume for any . We have\nThen\nAlso for any where , . From the analysis in the above two cases, by Lemma 23 ###reference_orem23### ###reference_orem23###, the constraint (18 ###reference_### ###reference_###) is satisfied as long as\nand\nfor any with . We remark that the above constraints ( (31 ###reference_### ###reference_###) and (32 ###reference_### ###reference_###)) only exist for vertices with since and is connected. This is crucial since otherwise, the inequality system derived later has no solution. Assume . Since\nwe have as long as .\nWe strengthen this constraint to .\nFor brevity, we donote by in the following calculation. Therefore, our constraints for and are\nIt follows from Lemma 25 ###reference_orem25### ###reference_orem25### that there exists a feasible solution of such that as long as . Since , the requirement can be strengthened to and can be upper bounded by\nTherefore, we can set , then by Lemma 26 ###reference_orem26### ###reference_orem26###, there is a feasible solution of the Equation ###reference_01### ###reference_01###\nif where .\nNote that when . Therefore, our final constraints for are\nTaking , we obtain the final requirement for :"
94
+ },
95
+ {
96
+ "section_id": "3.4",
97
+ "parent_section_id": "3",
98
+ "section_name": "Proof of the Main Theorems",
99
+ "text": "Let be a weighted simplicial complex where is a uniform distribution over proper -extra list-colorings over a line graph with and maximum degree .\nThen as long as\nwe have for any of co-dimension . Therefore\n, the mixing time of is ;\n, the mixing time of is ,\nwhere is the transition matrix of Glauber dynamics on .\nAs we did in Section 3.2 ###reference_### and Section 3.3 ###reference_###, we are able to construct a set of matrices 222 means all faces of dimension at most in . which are block diagonal with the block for each color . In the following discussion, we fix a color and drop the superscript . In our construction, . By (\u2023 3.3.3 ###reference_03###), as long as\nwe have . And by (33 ###reference_###) and (34 ###reference_###), we have . Therefore,\n\nand hence satisfy all the conditions in Proposition 9 ###reference_orem9###. So we immediately have by Proposition 9 ###reference_orem9###.\n\nCalculating the modified log-Sobolev constant by Proposition 6 ###reference_orem6###, we get\n, therefore the mixing time is , proving the first part of the theorem.\nCalculating the spectral gap, Proposition 5 ###reference_orem5### implies\nso .\nSince , by Lemma 4 ###reference_orem4###, the mixing time is , proving the second part of the theorem.\n\u220e\nHere we only need to unify the bound in Equation 34 ###reference_### to the form of .\nBy calculation, the maximum value of when is less than .\nSo the final bound of is .\n\u220e"
100
+ },
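For a feel of how the spectral-gap statement converts into a mixing time in the second part of the proof, here is a hedged numerical sketch of the standard relaxation-time bound invoked as Lemma 4; the gap, the number of sites, and the list size below are hypothetical placeholders.

```python
import math

def mixing_time_bound(abs_spectral_gap, pi_min, eps=0.25):
    """Textbook spectral bound: t_mix(eps) <= (1/gap) * log(1/(eps * pi_min))."""
    return (1.0 / abs_spectral_gap) * math.log(1.0 / (eps * pi_min))

# Hypothetical numbers: n sites with q colors each gives pi_min >= q**(-n) for
# the uniform distribution, so the log factor contributes roughly n*log(q).
n, q = 50, 10
gap = 1.0 / (2 * n)        # a hypothetical Omega(1/n) absolute spectral gap
print(mixing_time_bound(gap, q ** (-n)))
```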
101
+ {
102
+ "section_id": "4",
103
+ "parent_section_id": null,
104
+ "section_name": "Solving the Constraints",
105
+ "text": "Given positive constants , and ,\nconsider the inductive constraint\nThen eq. ###reference_09### is solvable when .\nMoreover, under this condition, a feasible solution satisfies .\nLet\nWe have so the first constraint is satisfied.\nPlugging into the second constraint, we have\nSince , , , we can strengthen this constraint to\nMultiplying both sides by and moving all terms to the same side, we have\nAs long as we can find an that satisfies Equation 35 ###reference_###, the constraint Equation ###reference_09### is satisfied by our . Since Equation 35 ###reference_### is nothing but a quadratic equation about , we can immediately calculate that the minimum solution of is\nNotice that exists as long as , i.e.,\nSo by using as the coefficient in , we get a feasible solution of Equation ###reference_09###, and for all ,\n\u220e\nGiven positive constants , and ,\nconsider the inductive constraint\nThen eq. ###reference_18### is solvable when ,\nwhere\n.\nIf = 1, then eq. ###reference_18### is equivalent to , it is satisfiable as long as .\nNext, we consider the case that .\nLet\nwhere is a constant to be determined.\nSince , the first constraint holds."
106
+ }
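Both lemmas in this section reduce feasibility to a quadratic inequality in one unknown, which is solvable exactly when its discriminant is nonnegative. The sketch below shows that computation pattern with placeholder coefficients; it is not the lemmas' actual expressions in the paper's parameters.

```python
import math

def smallest_feasible_root(a, b, c):
    """Smallest root of a*x**2 + b*x + c = 0 with a > 0, or None when the
    discriminant is negative and the constraint a*x**2 + b*x + c <= 0 has
    no real solution."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    return (-b - math.sqrt(disc)) / (2 * a)

# Placeholder coefficients standing in for the lemma's expressions;
# feasibility corresponds to a nonnegative discriminant.
print(smallest_feasible_root(1.0, -3.0, 2.0))  # 1.0
print(smallest_feasible_root(1.0, 1.0, 1.0))   # None (infeasible)
```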
107
+ ],
108
+ "appendix": [
109
+ {
110
+ "section_id": "Appendix 1",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix A The Matrix Trickle-Down Theorem",
113
+ "text": "Let be a weighted pure -dimensional simplicial complex.\nThe following identities hold.\n.\n.\n.\nFor any , it holds that\nThis can be simplified to\nFor any , if , . Otherwise, direct calculation gives\nThis can be simplified to\nOn the other hand, we have\nFor every , it holds that\nOn the other hand, note that for every , the row of indexed by is , we have\nThis can be written as\n\u220e\nTo prove Proposition 8 ###reference_orem8###, we introduce the following property of the Loewner order.\nLet be two symmetric matrices. If for a constant and , then .\nConsider the matrix function for a symmetric matrix and . If , then we have , which means that is a matrix function extended from a real bijective function from to . Therefore, from its inverse function , we give the inverse function of on :\nSince is monotone under Loewner order (see e.g. Theorem V.1.9 of [Bha97 ###reference_bx3###]), it can be obtained by simple calculation that is monotone under Loewner order.\n\u220e\nFor every , eq. 2 ###reference_### is equivalent to\nTaking expectation and applying Proposition 27 ###reference_orem27###, we have\nwhich is equivalent to\nTherefore, . Picking with , we immediately have\nby comparing the spectrum. As a result,\nSince , by the original trickle-down theorem (Proposition 7 ###reference_orem7###), we have . Combined with we obtain that . It follows from Lemma 28 ###reference_orem28### that .\n\u220e"
114
+ },
115
+ {
116
+ "section_id": "Appendix 2",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix B Marginal Probability Bounds in [GKM15]",
119
+ "text": "Let . For any proper partial coloring on (boundary) when is pinned, let where is the uniform distribution over all proper colorings of and .\n\nFor the upper bound of , it holds that\nTherefore,\nAs for the lower bound, assuming the free neighbors of after pinning are , recall the recursion on marginal probabilities:\nwhere denote the coloring instance after removing the vertex and color from the color lists of \u2019s neighbors for .\nFrom (41 ###reference_###) we have . Also . So we have\n\u220e"
120
+ }
121
+ ],
122
+ "tables": {},
123
+ "image_paths": {},
124
+ "validation": true,
125
+ "references": [
126
+ {
127
+ "1": {
128
+ "title": "Improved analysis of higher order random walks and applications.",
129
+ "author": "Vedat Levi Alev and Lap Chi Lau.",
130
+ "venue": "In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam\nKamath, and Julia Chuzhoy, editors, Proceedings of the 52nd Annual ACM\nSIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA,\nJune 22-26, 2020, pages 1198\u20131211. ACM, 2020.",
131
+ "url": null
132
+ }
133
+ },
134
+ {
135
+ "2": {
136
+ "title": "A matrix trickle-down theorem on simplicial complexes and\napplications to sampling colorings.",
137
+ "author": "Dorna Abdolazimi, Kuikui Liu, and Shayan Oveis Gharan.",
138
+ "venue": "In 62nd IEEE Annual Symposium on Foundations of Computer\nScience, FOCS 2021, Denver, CO, USA, February 7-10, 2022, pages 161\u2013172.\nIEEE, IEEE, 2021.",
139
+ "url": null
140
+ }
141
+ },
142
+ {
143
+ "3": {
144
+ "title": "Matrix analysis.",
145
+ "author": "Rajendra Bhatia.",
146
+ "venue": "Number 169 in Graduate texts in mathematics. Springer, New York,\n1997.",
147
+ "url": null
148
+ }
149
+ },
150
+ {
151
+ "4": {
152
+ "title": "Improved bounds for randomly sampling colorings via linear\nprogramming.",
153
+ "author": "Sitan Chen, Michelle Delcourt, Ankur Moitra, Guillem Perarnau, and Luke Postle.",
154
+ "venue": "In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on\nDiscrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9,\n2019, pages 2216\u20132234. SIAM, SIAM, 2019.",
155
+ "url": null
156
+ }
157
+ },
158
+ {
159
+ "5": {
160
+ "title": "Rapid mixing for colorings via spectral independence.",
161
+ "author": "Zongchen Chen, Andreas Galanis, Daniel \u0160tefankovi\u010d, and Eric\nVigoda.",
162
+ "venue": "In D\u00e1niel Marx, editor, Proceedings of the 2021 ACM-SIAM\nSymposium on Discrete Algorithms, SODA 2021, Virtual Conference, January 10\n- 13, 2021, pages 1548\u20131557. SIAM, 2021.",
163
+ "url": null
164
+ }
165
+ },
166
+ {
167
+ "6": {
168
+ "title": "Strong spatial mixing for colorings on trees and its algorithmic\napplications.",
169
+ "author": "Zongchen Chen, Kuikui Liu, Nitya Mani, and Ankur Moitra.",
170
+ "venue": "In 64th IEEE Annual Symposium on Foundations of Computer\nScience, FOCS 2023, Santa Cruz, CA, USA, November 6-9, 2023, pages\n810\u2013845. IEEE, 2023.",
171
+ "url": null
172
+ }
173
+ },
174
+ {
175
+ "7": {
176
+ "title": "Optimal mixing of glauber dynamics: entropy factorization via\nhigh-dimensional expansion.",
177
+ "author": "Zongchen Chen, Kuikui Liu, and Eric Vigoda.",
178
+ "venue": "In Samir Khuller and Virginia Vassilevska Williams, editors, STOC \u201921: 53rd Annual ACM SIGACT Symposium on Theory of Computing,\nVirtual Event, Italy, June 21-25, 2021, pages 1537\u20131550. ACM, 2021.",
179
+ "url": null
180
+ }
181
+ },
182
+ {
183
+ "8": {
184
+ "title": "Randomly coloring graphs with lower bounds on girth and maximum\ndegree.",
185
+ "author": "Martin E. Dyer and Alan M. Frieze.",
186
+ "venue": "Random Struct. Algorithms, 23(2):167\u2013179, 2003.",
187
+ "url": null
188
+ }
189
+ },
190
+ {
191
+ "9": {
192
+ "title": "Randomly coloring constant degree graphs.",
193
+ "author": "Martin E. Dyer, Alan M. Frieze, Thomas P. Hayes, and Eric Vigoda.",
194
+ "venue": "Random Struct. Algorithms, 43(2):181\u2013200, 2013.",
195
+ "url": null
196
+ }
197
+ },
198
+ {
199
+ "10": {
200
+ "title": "The glauber dynamics for edge-colorings of trees.",
201
+ "author": "Michelle Delcourt, Marc Heinrich, and Guillem Perarnau.",
202
+ "venue": "Random Struct. Algorithms, 57(4):1050\u20131076, 2020.",
203
+ "url": null
204
+ }
205
+ },
206
+ {
207
+ "11": {
208
+ "title": "Rapid mixing from spectral independence beyond the boolean domain.",
209
+ "author": "Weiming Feng, Heng Guo, Yitong Yin, and Chihao Zhang.",
210
+ "venue": "ACM Trans. Algorithms, 18(3), Oct 2020.",
211
+ "url": null
212
+ }
213
+ },
214
+ {
215
+ "12": {
216
+ "title": "Strong spatial mixing of list coloring of graphs.",
217
+ "author": "David Gamarnik, Dmitriy Katz, and Sidhant Misra.",
218
+ "venue": "Random Struct. Algorithms, 46(4):599\u2013613, 2015.",
219
+ "url": null
220
+ }
221
+ },
222
+ {
223
+ "13": {
224
+ "title": "Unpublished manuscript.",
225
+ "author": "Marc Heinrich, Alice Joffard, Jonathan Noel, and Aline Parreau.",
226
+ "venue": "https://hoanganhduc.github.io/events/CoRe2019/CoRe_2019_Open_Problems.pdf,\n2019.",
227
+ "url": null
228
+ }
229
+ },
230
+ {
231
+ "14": {
232
+ "title": "A very simple algorithm for estimating the number of k-colorings of a\nlow-degree graph.",
233
+ "author": "Mark Jerrum.",
234
+ "venue": "Random Struct. Algorithms, 7(2):157\u2013165, 1995.",
235
+ "url": null
236
+ }
237
+ },
238
+ {
239
+ "15": {
240
+ "title": "Markov chains and mixing times, volume 107.",
241
+ "author": "David A Levin, Yuval Peres, and Elizabeth L. Wilmer.",
242
+ "venue": "American Mathematical Soc., 2017.",
243
+ "url": null
244
+ }
245
+ },
246
+ {
247
+ "16": {
248
+ "title": "The glauber dynamics on colorings of a graph with high girth and\nmaximum degree.",
249
+ "author": "Michael Molloy.",
250
+ "venue": "SIAM J. Comput., 33(3):721\u2013737, 2004.",
251
+ "url": null
252
+ }
253
+ },
254
+ {
255
+ "17": {
256
+ "title": "Local spectral expansion approach to high dimensional expanders part\nI: descent of spectral gaps.",
257
+ "author": "Izhar Oppenheim.",
258
+ "venue": "Discret. Comput. Geom., 59(2):293\u2013330, Mar 2018.",
259
+ "url": null
260
+ }
261
+ },
262
+ {
263
+ "18": {
264
+ "title": "Absence of phase transition for antiferromagnetic potts models via\nthe dobrushin uniqueness theorem.",
265
+ "author": "Jes\u00fas Salas and Alan D Sokal.",
266
+ "venue": "Journal of Statistical Physics, 86:551\u2013579, 1997.",
267
+ "url": null
268
+ }
269
+ }
270
+ ],
271
+ "url": "http://arxiv.org/html/2307.08080v2"
272
+ }
20240322/2307.08309v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2308.04025v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2308.13712v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2309.07139v2.json ADDED
@@ -0,0 +1,126 @@
1
+ {
2
+ "title": "A Traffic Management Framework for On-Demand Urban Air Mobility Systems",
3
+ "abstract": "Urban Air Mobility (UAM) offers a solution to current traffic congestion by providing on-demand air mobility in urban areas. Effective traffic management is crucial for efficient operation of UAM systems, especially for high-demand scenarios. In this paper, we present a centralized traffic management framework for on-demand UAM systems. Specifically, we provide a scheduling policy, called VertiSync, which schedules the aircraft for either servicing trip requests or rebalancing in the system subject to aircraft safety margins and energy requirements. We characterize the system-level throughput of VertiSync, which determines the demand threshold at which passenger waiting times transition from being stabilized to being increasing over time. We show that the proposed policy is able to maximize throughput for sufficiently large fleet sizes. We demonstrate the performance of VertiSync through a case study for the city of Los Angeles, and show that it significantly reduces passenger waiting times compared to a first-come first-serve scheduling policy.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Traffic congestion is a significant issue in urban areas, leading to increased travel times, reduced productivity, and environmental concerns. A potential solution to this issue is Urban Air Mobility (UAM), which aims to use the urban airspace for on-demand mobility [1 ###reference_b1###]. A crucial aspect of UAM systems, especially in high-demand regimes, is traffic management [2 ###reference_b2###]. The objective of traffic management is to efficiently use the limited UAM resources, such as the airspace, takeoff and landing areas, and the aircraft, to meet the demand. The purpose of this paper is to systematically design and analyze a traffic management policy for on-demand UAM networks.\nThe UAM traffic management problem can be considered as a natural extension of the classic Air Traffic Flow Management (ATFM) problem [3 ###reference_b3###]. The objective of ATFM is to optimize the flow of commercial air traffic to ensure safe and efficient operations in the airspace system, considering factors such as airspace and airport capacity constraints, weather conditions, and operational constraints [3 ###reference_b3###, 4 ###reference_b4###]. The first departure point in the context of UAM is the unpredictable nature of demand. Unlike commercial air traffic where the demand is highly predictable weeks in advance, the UAM systems will be designed to provide on-demand services. This poses a significant planning challenge.\nTo address this problem, recent works such as [5 ###reference_b5###] have attempted to incorporate fairness considerations into the existing ATFM formulation to accommodate the on-demand nature of UAM. Other solutions include heuristic approaches such as first-come first-served scheduling [6 ###reference_b6###] and simulations [7 ###reference_b7###]. While previous works provide valuable insights into the operation of UAM systems, they do not explicitly address two critical aspects. First is the concept of rebalancing: the UAM aircraft will need to be constantly redistributed in the network when the demand for some destinations is higher than others. Efficient rebalancing ensures the effectiveness and sustainability of on-demand UAM systems. The concept of rebalancing has been explored extensively in the context of on-demand ground transportation [8 ###reference_b8###]. However, these studies predominantly use flow-level formulations which do not capture the safety and separation considerations associated with aircraft operations. The second aspect which has not been addressed in the UAM literature is a thorough characterization of the system-level throughput. Roughly, the throughput of a given traffic management policy determines the highest demand that the policy can handle [9 ###reference_b9###]. In the context of UAM, the throughput is tightly related to the notion of passenger waiting time. In particular, the throughput determines the demand threshold at which the expected passenger waiting time transitions from being stabilized to being increasing over time. Therefore, it is desirable to design a policy that achieves the maximum possible throughput.\nIn light of the aforementioned gaps in the literature, we present a centralized traffic management framework for on-demand UAM networks. We propose a scheduling policy, called VertiSync, which synchronously schedules the aircraft for either servicing trip requests or rebalancing in the network. 
The primary contributions of this paper are as follows:\nDeveloping a scheduling policy, called VertiSync, for on-demand UAM networks, subject to aircraft safety margins and energy requirements.\nIncorporating the aspect of rebalancing into the UAM scheduling framework.\nCharacterizing the system-level throughput of VertiSync, and demonstrating its effectiveness through a case study for the city of Los Angeles.\nThe rest of the paper is organized as follows: in Section II ###reference_###, we describe the problem formulation. We provide our traffic management policy and characterize its throughput in Section III ###reference_###. We provide the Los Angeles case study in Section IV ###reference_###, and conclude the paper in Section V ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Problem Formulation",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A UAM Network Structure",
21
+ "text": "We describe a UAM network by a directed graph . A node in the graph represents either a vertiport, i.e., take-off/landing area, or an intermediate point where two or more routes cross paths. A link in the graph represents a section of the routes that have the link in common. We let be the set of vertiports and be its cardinality. We let be the total number of vertipads, i.e., takeoff/landing pads, at vertiport . An Origin-Destination (O-D) pair is an ordered pair where and there is at least one route from to . We let be the set of O-D pairs and be its cardinality; see Figure 1 ###reference_###. To simplify the network representation and without loss of generality, we assume that each vertiport has exactly one outgoing link exclusively used for takeoffs from that vertiport, and a separate incoming link exclusively used for landings. For simplicity (and lack of existing routes), we also assume that there is at most one route between any two vertiports and that the UAM routes do not conflict with the current airspace.\n###figure_1### Given an O-D pair , the opposite pair may or may not be an O-D pair. However, to enable rebalancing, it is natural to assume that there always exists a collection of routes that connect any two vertiports.\nIn the next section, we will discuss the constraints associated with a UAM aircraft flight operation."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Operational Constraints",
27
+ "text": "In this section, we describe the constraints and assumptions related to UAM aircraft flight operations. Let be the set of aircraft in the system and be its cardinality. Each aircraft\u2019s flight operation consists of the following three phases:\ntakeoff: during this phase, the aircraft is positioned on a departure vertipad and passengers (if any) are boarded onto the aircraft before it is ready for takeoff. To position the aircraft on the departure vertipad, it is either transferred from a parking space or directly lands from a different vertiport. Let denote the takeoff separation, which represents the minimum time required between successive aircraft takeoffs from the same vertipad. In other words, the takeoff operations are completed in a -minute time window for every flight, which implies that the takeoff rate from each vertipad is at most one aircraft per minutes.\nairborne: to ensure safe operation, all UAM aircraft must maintain appropriate horizontal and vertical safety distance from each other while airborne. We assume that all UAM aircraft have the same characteristics so that these margins are the same for all the aircraft. Without loss of generality, we assume that different links of the graph are at a safe horizontal and vertical distance from each other, except in the vicinity of the nodes where they intersect. Let be the minimum time between two aircraft takeoffs with the same route from the same vertiport, ensuring that all the airborne safety margins are satisfied. Therefore, the takeoff rate from each vertiport is at most one aircraft per minutes. We assume that , i.e., the takeoff separation is more restrictive than the separation imposed by the safety margin, and is integer-valued.\nlanding: once the aircraft lands, passengers (if any) are disembarked, new passengers (if any) are embarked, and the aircraft undergoes servicing. Thereafter, the aircraft is either transferred to a parking space or, if it has boarded new passengers or needs to be rebalanced, takes off to another vertiport. Similar to takeoff operations, we assume that the landing operations are completed within a -minute time window for every flight. That is, once an aircraft lands, the next takeoff or landing can occur after minutes. Therefore, if two aircraft with the same route take off from the same vertipad at least minutes apart, then they will be able to land on the same vertipad at their destination. We assume that the parking capacity at each vertiport is at least so that an arriving aircraft always clears the vertipad after landing.\nThe above assumptions regarding the takeoff and landing separations, as well as the airborne safety margins, are practical given the current technological limitations [7 ###reference_b7###, 10 ###reference_b10###]. However, our results can be generalized and are not limited to these specific assumptions.\nIn addition to the above assumptions, we consider an ideal case where there is no external disturbance such as adverse weather conditions. As a result, if an aircraft\u2019s flight trajectory satisfies the safety margins and the separation requirements, then the aircraft follows it without deviating from the trajectory. On the other hand, if its trajectory does not satisfy either of the safety or separation requirements, we assume that a lower-level controller, e.g., a pilot or a remote operator, handles the safe operation of the aircraft. 
We do not specify this controller in the paper since we only consider policies that guarantee before takeoff that the aircraft\u2019s route is clear and a vertipad is available for landing."
28
+ },
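As a sanity check on the separation rules above, here is a small Python sketch that tests a candidate list of takeoffs against a vertipad separation and a same-route airborne separation; the tuple format and the pad/route labels are illustrative assumptions, not the paper's data structures.

```python
from itertools import combinations

def schedule_is_feasible(takeoffs, delta_T, delta_S):
    """takeoffs: list of (time, vertipad, route) tuples.

    delta_T: minimum time between takeoffs from the same vertipad.
    delta_S: minimum time between takeoffs on the same route, with
             delta_S <= delta_T as assumed in the text.
    """
    for (t1, pad1, rt1), (t2, pad2, rt2) in combinations(takeoffs, 2):
        gap = abs(t1 - t2)
        if pad1 == pad2 and gap < delta_T:
            return False
        if rt1 == rt2 and gap < delta_S:
            return False
    return True

# Two aircraft on the same route from different vertipads, 2 minutes apart:
# fine for a 2-minute airborne margin, and no shared-pad conflict.
takeoffs = [(0, "pad1", "A->B"), (2, "pad2", "A->B")]
print(schedule_is_feasible(takeoffs, delta_T=6, delta_S=2))  # True
```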
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Demand and Performance Metric",
33
+ "text": "In an on-demand UAM network, the demand is likely not known in advance. We use exogenous stochastic processes to capture the unpredictable nature of the demand. It will be convenient for performance analysis later on to adopt a discrete time setting. Let the duration of each time step be , which represents the the minimum time between two aircraft takeoffs with the same route from the same vertiport that guarantees the safety margins. The number of trip requests for an O-D pair is assumed to follow an i.i.d Bernoulli process with parameter independent of other O-D pairs. That is, at any given time step, the probability that a new trip is requested for the O-D pair is independent of everything else. Note that specifies the rate of new requests for the O-D pair in terms of the number of requests per minutes. Let be the vector of arrival rates.\nFor each O-D pair, the trip requests are queued up in an unlimited capacity queue until they are serviced, at which point they leave the queue. In order to be serviced, a request\nmust be assigned to an aircraft, and the aircraft must take\noff from the verriport. A scheduling policy is a rule that schedules the aircraft in the system for either servicing trip requests or rebalancing, i.e., taking off without passengers to service trip requests at other vertiports.\nThe objective of the paper is to design a policy that can handle the maximum possible demand under the operational constraints discussed in Section II-B ###reference_###. The key performance metric to evaluate a policy is the notion of throughput which we will now formalize. For , let be the number of trip requests in the queue for the O-D pair at time . Let be the vector of trip requests for all the O-D pairs at time . We define the under-saturation region of a policy as\nThis is the set of \u2019s for which the expected number of trip requests remain bounded for all the O-D pairs. The boundary of this set is called the throughput of the policy . We are interested in finding a policy such that for all policies , including those that have information about the demand . In other words, if the network remains under-saturated using some policy , then it also remains under-saturated using the policy . In that case, we say that policy maximizes the throughput for the UAM network. In the next section, we introduce one such policy."
34
+ },
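The under-saturation region can be illustrated with a one-queue caricature: Bernoulli arrivals per time step against a fixed per-step service capacity. The sketch below is purely illustrative; it ignores routing, rebalancing, aircraft availability, and safety constraints.

```python
import random

def simulate_queue(arrival_rate, service_rate, steps, seed=0):
    """Queue length trace for one O-D pair: Bernoulli(arrival_rate) arrivals
    per slot, at most service_rate departures per slot."""
    random.seed(seed)
    q, trace = 0, []
    for _ in range(steps):
        q += 1 if random.random() < arrival_rate else 0  # new trip request
        q -= min(q, service_rate)                        # serviced requests
        trace.append(q)
    return trace

# Arrivals below capacity: the queue stays bounded (under-saturated).
print(simulate_queue(0.3, 1, 10_000)[-1])
# No service at all: the backlog grows roughly like 0.3 * t (over-saturated).
print(simulate_queue(0.3, 0, 10_000)[-1])
```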
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "III Network-Wide Scheduling",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-A VertiSync Policy",
45
+ "text": "We now introduce our policy which is inspired by the queuing theory literature [11 ###reference_b11###] and the classical Traffic Flow Management Problem (TFMP) formulation [3 ###reference_b3###]. The policy works in cycles during which only the trips that were requested before the start of the cycle are serviced. At the start of a cycle, a central planner schedules the aircraft for either servicing trip requests or rebalancing in the network until all the trips that were requested before the start of the cycle are serviced. The aircraft schedule during a cycle is communicated to vertiport operators responsible for takeoff and landing operations at each vertiport. It is assumed that the central planner knows the location and energy state of each aircraft as well as the number of trip requests for each O-D pair. The scheduling for service or rebalancing is done synchronously during a cycle, and hence the name of the policy.\nTo conveniently track aircraft locations in discrete time, we introduce the notion of slot. For each O-D pair , a slot represents a specific position along the route of that pair, such that if two aircraft with the same route occupy adjacent slots, then they satisfy the airborne safety margins. In particular, consider an aircraft\u2019s flight trajectory that satisfies the safety margins and separation requirements. At the end of every minutes along the aircraft\u2019s route, a slot is associated with the aircraft\u2019s position, with the first slot located at the origin vertiport. If an aircraft occupies the first slot of an O-D pair , it means that it is positioned on a vertipad at the origin vertiport . Similarly, if it occupies the last slot of the O-D pair , it means that it has landed on a vertipad at the destination vertiport . Consider a configuration of slots for all the O-D pairs established at time . Without loss of generality, we assume that if a link is common to two or more routes, then the slots associated with those routes coincide with each other on that link. Additionally, if two aircraft with different routes occupy adjacent slots, then they will satisfy the airborne safety margins with respect to each other. We also let the first and last slots on each link coincide with the tail and head of that link, respectively. We assign a unique identifier to each slot, with overlapping slots having different identifiers. Let be the set of slots associated with the O-D pair , and let be its cardinality.\nLet be a fixed time, and . A key decision variable in the VertiSync policy is , where if aircraft has visited slot of the O-D pair , times in the interval . For brevity, the time is dropped from as it will be clear from the context. By definition, is non-decreasing with respect to . Moreover, if for some , then aircraft has occupied slot at some time in the interval . We use the notation to represent the number of times aircraft with route has taken off from vertiport in the interval . Similarly, indicates the number of times aircraft with route has landed on vertiport in the interval . For slot , denotes its following slot, i.e., the slot that comes after slot along the route of the O-D pair . Given two O-D pairs and slots and , we let if slot coincides with slot . Finally, we use the binary variable to denote whether aircraft can begin its takeoff phase of the flight operation from vertiport at time (if ) or not (if ).\n(VertiSync Policy)\nThe policy works in cycles of variable length, with the first cycle starting at time . 
At the beginning of the -th cycle at time , each vertiport communicates the number of trip requests originating from that vertiport to the central planner, i.e., the vector of trip requests is communicated to the central planner. During the -th cycle, only theses requests will be serviced.\nThe central planner solves the following optimization problem to determine the aircraft schedule while minimizing the total airborne time of all the aircraft. That is, the central planner aims to minimize\nwhere is such that is a conservative upper-bound on the -th cycle length, is the number of times aircraft with route has taken off from vertiport in the interval , and is the flight time from vertiport to when the aircraft satisfies the airborne safety margins and separation requirements. The following constraints must be satisfied:\nConstraint (2a ###reference_1###) ensures that all the trip requests are serviced by the end of the cycle. Constraint (2b ###reference_2###) enforces the decision variable to be non-decreasing in time. Constraint (2c ###reference_3###) ensures that each aircraft occupies at most one slot in the network at any time, and constraint (2d ###reference_4###) guarantees that if aircraft occupies slot at some time , then it will occupy slot at time . Constraint (2e ###reference_5###) ensures the airborne safety margins by allowing at most one aircraft occupying any overlapping or non-overlapping slot at any time. Similarly, constraints (2f ###reference_6###) and (2i ###reference_9###) ensure that the takeoff and landing separations are satisfied at every vertiport, respectively. Constraint (2g ###reference_7###) enforces that aircraft can take off from a vertiport at time only if it has landed at that vertiport at or before time , and constraints (2h ###reference_8###) and (2j ###reference_10###) update once aircraft takes off from vertiport and lands on vertiport , respectively.\nWhile traversing route , aircraft expends energy , which is calculated as the sum of the energy required for takeoff, cruise, and landing. Let be the remaining energy of aircraft at time , and let be a binary variable indicating whether aircraft is being re-charged at time at vertiport (if ) or not (if ). In addition to the constraints in (1 ###reference_###), we require\nwhere is the energy increment of an aircraft during one time step while being recharged, and is the maximum energy of an aircraft. Constraint (3a ###reference_1###) is the balance equation for the energy state of aircraft , and constraints (3b ###reference_2###) and (3c ###reference_3###) limit the minimum and maximum energy of aircraft . Constraint (3d ###reference_4###) ensures that aircraft can be recharged at a vertiport only if it is available at that vertiport, i.e., .\nThe initial values , , and are determined by the location and energy state of aircraft at the end of the previous cycle. For example, if aircraft has occupied slot of the O-D pair at the end of cycle , then for all , for slot and any other slot that precedes slot , i.e., comes before slot along the route of the O-D pair , and for any other and . The -th cycle ends once all the requests for that cycle have been serviced.\nNote that the VertiSync policy only requires real-time information about the number of trip requests, but does not require any information about the arrival rate. This makes VertiSync a suitable option for an actual UAM network where the arrival rate is unknown or could vary over time."
46
+ },
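The cycle mechanism of the policy can be caricatured for a single O-D pair: each cycle batches the requests present at its start and serves them at a fixed rate, while arrivals during the cycle wait for the next one. This toy sketch replaces the policy's integer program (1)-(3) with a single hypothetical service rate.

```python
import math
import random

def cycle_schedule(arrival_rate, service_rate, horizon, seed=0):
    """List of (cycle_start, batch_size, cycle_length) for a batched policy:
    only requests that arrived before a cycle starts are served in it."""
    random.seed(seed)
    t, backlog, cycles = 0, 0, []
    while t < horizon:
        batch = backlog
        length = max(1, math.ceil(batch / service_rate))
        # Bernoulli arrivals during the cycle form the next cycle's batch.
        backlog = sum(1 for _ in range(length) if random.random() < arrival_rate)
        cycles.append((t, batch, length))
        t += length
    return cycles

for start, batch, length in cycle_schedule(0.4, 1.0, 40)[:5]:
    print(f"cycle at t={start}: served {batch} requests in {length} steps")
```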
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-B VertiSync Throughput",
51
+ "text": "We next characterize the throughput of the VeriSync policy. To this end, we introduce a -dimensional service vector . If is activated, then aircraft can continuously takeoff from the origin vertiport at the rate of per minutes without violating the airborne safety margins and separation requirements. If , then the takeoff rate for the O-D pair is zero.\nLet be the set of all non-zero service vectors, and be its cardinality. We use , , to denote a particular vector in . Note that each is associated with at least one schedule that, upon availability of aircraft, guarantees continuous takeoffs for the O-D pair at the rate of aircraft per minutes without violating the safety margins and separation requirements. Recall the aircraft operational constraints from Section II-B ###reference_###, and note that is an integer multiple of and .\nConsider the network in Figure 1 ###reference_###. We number the O-D pairs , , , , , , , and as to , respectively. Suppose that each vertiport has only one vertipad. Let the takeoff separation be minutes, and minutes. Due to symmetry, if an aircraft for the O-D pair takes off at , then an aircraft for the O-D pair can take off at minute without violating the airborne safety margins. Therefore, is a service vector in with the takeoff schedules , and , for the O-D pairs and , respectively. Similarly, and are two other service vectors in .\nBy using the service vectors , a feasible solution to the optimization problem (1 ###reference_###)-(1 ###reference_###) can be constructed as follows: (i) activate at most one service vector at any time, (ii) while is active, schedule available aircraft to take off at the rate of per minutes for any O-D pair , (iii) switch to another service vector in provided that the safety margins and separation requirements are not violated after switching, and (iv) repeat (i)-(iii) until all the requests for the -th cycle are serviced.\nThe next theorem provides an inner-estimate of the throughput of the VertiSync policy when the number of aircraft is sufficiently large and the following \u201csymmetry\u201d assumption holds:\nFor any service vector , there exists a service vector such that for all with , and , where is the opposite O-D pair to the pair . In words, by using the service vector , aircraft can continuously take off at the same rate for the O-D pair and its opposite pair without violating the safety margins and separation requirements. Let be the total number of slots, with overlapping slots being considered a single slot.\nIf the UAM network satisfies the symmetry Assumption 1 ###reference_umption1###, and the number of aircraft satisfies\nthen the VertiSync policy can keep the network under-saturated for demands belonging to the set\nwhere the vector inequality is considered component-wise.\nSee Appendix A ###reference_###.\n\n\u220e"
52
+ },
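Membership in a throughput region of the kind characterized by Theorem 1 can be tested with a small feasibility LP: look for a time-sharing of the service vectors whose combined rate dominates the demand. The sketch below uses scipy and hypothetical service vectors for two O-D pairs.

```python
import numpy as np
from scipy.optimize import linprog

def in_throughput_region(lam, service_vectors):
    """True iff there are alpha_j >= 0 with sum(alpha_j) <= 1 and
    sum_j alpha_j * s_j >= lam component-wise."""
    S = np.array(service_vectors, dtype=float).T   # columns are vectors s_j
    m = S.shape[1]
    A_ub = np.vstack([-S, np.ones((1, m))])        # encodes S @ alpha >= lam
    b_ub = np.concatenate([-np.asarray(lam, dtype=float), [1.0]])
    res = linprog(np.zeros(m), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m)
    return res.success

# Hypothetical service vectors: serve both pairs at rate 1, or one pair at 2.
service_vectors = [[1, 1], [2, 0], [0, 2]]
print(in_throughput_region([0.8, 0.8], service_vectors))  # True
print(in_throughput_region([1.5, 0.9], service_vectors))  # False
```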
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-C Fundamental Limit on Throughput",
57
+ "text": "In this section, we provide an outer-estimate on the throughput of any safe-by-construction policy. A safe-by-construction policy is a policy that guarantees before takeoff that the aircraft\u2019s entire route will be clear and a veripad will be available for landing. Since the UAM aircraft have limited energy reserves, it is desirable to use safe-by-construction policies for traffic management purposes [12 ###reference_b12###].\nAny safe-by-construction policy uses the service vectors in , either explicitly or implicitly, to schedule the aircraft.\n\nAlthough it is possible for a safe-by-construction policy to activate multiple service vectors at any time, we may restrict ourselves to policies that activate at most one service vector from at any time. This restriction does not affect the generality of safe-by-construction policies being considered; by activating at most one service vector at any time and rapidly switching between service vectors in , it is possible to achieve an exact or arbitrarily close approximation of any safe schedule while ensuring the safety margins and separation requirements.\nThe next result provides a fundamental limit on the throughput of any safe-by-construction policy.\nIf a safe-by-construction policy keeps the network under-saturated, then the demand must belong to the set\nwhere the vector inequality is considered component-wise.\nSee Appendix B ###reference_###.\n\u220e"
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "IV Simulation Results",
63
+ "text": "###figure_2### ###figure_3### In this section, we demonstrate the performance of the VertiSync policy and compare it with a heuristic traffic management policy from the literature. As a case study, we select the city of Los Angeles, which is anticipated to be an early adopter market due to the severe road congestion, existing infrastructure, and mild weather [10 ###reference_b10###]. All the simulations were performed in MATLAB.\nWe consider four vertiports located in Redondo Beach (vertiport ), Long Beach (vertiport ), and the Downtown Los Angeles area (vertiports and ). The choice of vertiport locations is adopted from [10 ###reference_b10###]. Each vertiport is assumed to have vertipads. Figure 2 ###reference_### shows the network structure, where there are O-D pairs. We let the takeoff and landing separations be [min] and let [min]. We let the flight time for the O-D pairs and be [min], and for the rest of the O-D pairs be [min]. We simulate this network during the morning period from 6:00-AM to 11:00-AM, during which the majority of demand originates from vertiports and to vertiports and . We let the trip requests for each of the O-D pairs , , , and follow a Poisson process with a piece-wise constant rate . The demand for other O-D pairs is set to zero during the morning period. With a slight abuse of notation, we scale to represent the number of trip requests per minutes. From Theorem 2 ###reference_orem2###, given , the necessary condition for the network to remain under-saturated is that trip requests per minutes, i.e., . Figure 3 ###reference_### shows , where we have considered a heavy demand between 7:00-AM to 9:30-AM to model the morning rush hour, i.e., between 7:00-AM to 9:30-AM.\n###figure_4### ###figure_5### We first evaluate the travel time under our policy and the First-Come First-Serve (FCFS) policy [6 ###reference_b6###]. The FCFS policy is a heuristic policy which schedules the trip requests in the order of their arrival at the earliest time that does not violate the safety margins and separation requirements. We let the number of aircraft be , and assume that all of them are initially located at vertiport . We also assume that an aircraft is always available to service a trip request at its scheduled time under the FCFS policy. Finally, the optimization problem (1 ###reference_###)-(1 ###reference_###) in the VertiSync policy is solved analytically. This approach is made possible by the simple symmetrical network structure considered in the simulations.\nFor the above demand and a random simulation seed, trips are requested during the morning period from which the FCFS policy services before 11:00 AM while the VertiSync policy is able to service all of them. Figure 4 ###reference_### shows the (passenger) travel time, which is computed by averaging the travel time of all trips requested within each -minute time interval. The travel time of a trip is computed from the moment that trip is requested until it is completed, i.e., reached its destination. As expected, the VertiSync policy keeps the network under-saturated since for all . However, the FCFS policy fails to keep the network under-saturated due to its greedy use of the vertipads and UAM airspace which is inefficient.\nWe next evaluate the demand threshold at which the VertiSync policy becomes less efficient than ground transportation. Figure 5 ###reference_### shows the travel time under the VertiSync policy when the demand is increased to . 
By Theorem 2 ###reference_orem2###, the network is in the over-saturated regime from 6:30-AM to 10:00-AM since trip requests per minutes. However, as shown in Figure 5 ###reference_###, the travel time is still less than the ground travel time during the morning period. The ground travel times were collected using the Google Maps service from 6:00-AM to 11:00-AM on Thursday, May 19, 2023 from Long Beach to Downtown Los Angeles (the travel times from Redondo Beach to Downtown Los Angeles were similar)."
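A minimal Python sketch of the demand model described above, with trip requests drawn from a Poisson process with a piece-wise constant rate. The breakpoints and rates below are illustrative placeholders, since the paper's numerical values did not survive extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Piece-wise constant demand profile for the 6:00-AM to 11:00-AM period,
# in trip requests per minute; the middle segment models the rush hour.
# All numbers here are placeholders, not the paper's values.
breakpoints_min = [0, 60, 210, 300]   # minutes after 6:00-AM
rates_per_min = [0.5, 1.5, 0.5]       # lambda(t) on each segment

def sample_trip_requests(breakpoints, rates, rng):
    """Sample Poisson arrival times (in minutes) for one O-D pair."""
    times = []
    for (t0, t1), lam in zip(zip(breakpoints, breakpoints[1:]), rates):
        t = t0
        while True:
            t += rng.exponential(1.0 / lam)  # i.i.d. exponential gaps
            if t >= t1:
                break
            times.append(t)
    return np.array(times)

arrivals = sample_trip_requests(breakpoints_min, rates_per_min, rng)
print(f"{arrivals.size} trip requests over the 5-hour morning period")
```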
64
+ },
65
+ {
66
+ "section_id": "5",
67
+ "parent_section_id": null,
68
+ "section_name": "Conclusion",
69
+ "text": "In this paper, we provided a traffic management policy for on-demand UAM networks and analyzed its throughput. We conducted a case study for the city of Los Angeles and showed that our policy significantly improves travel time compared to a first-come first-serve policy. We plan to expand our case study to more complex networks with more origin and destination pairs and study the computational requirements of the optimization problem in our policy. We also plan to implement our policy in a high-fidelity air traffic simulator."
70
+ }
71
+ ],
72
+ "appendix": [
73
+ {
74
+ "section_id": "Appendix 1",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix A Proof of Theorem 1",
77
+ "text": "Consider the -th cycle. We first construct a feasible solution to the optimization problem (1 ###reference_###)-(1 ###reference_###) by using the service vectors in . Consider the Linear Program (LP)\nwhere the inequality is considered component-wise. Let , , be the solution to (4 ###reference_###). A feasible solution to the optimization problem (1 ###reference_###)-(1 ###reference_###) can be constructed as follows:\nChoose a service vector with . Without loss of generality, we may assume that is such that for all with , and , where is the opposite O-D pair to the pair . Before activating , distribute the aircraft in the system so that for any with , there is\naircraft at vertiport . The initial distribution of aircraft takes at most minutes, where .\nOnce the initial distribution is completed and the airspace is empty, activate for a duration of minutes. During this time, aircraft with route take off (upon availability) at the rate from vertiport . Hence, at most slots of route will be occupied by an aircraft during this time. In addition, an aircraft needs to have at least energy to traverse route . Since the recharging takes time steps, if aircraft with energy of at least are available at vertiport , then we can ensure continuous takeoffs at the rate . From step 1, the number of aircraft at vertiport is . Similarly, there is also enough aircraft at veriport , and from the assumption , they can simultaneously take off at the rate . Therefore, we can ensure continuous takeoffs at the rate for route , which implies that, at the end of this step, requests will be serviced for the O-D pair .\nOnce step 2 is completed and the airspace is empty, repeat steps 1 and 2 for another vector in . The amount of time it takes for the airspace to become empty at the end of step 2 is at most minutes. Once each service vector with have been activated, requests will be serviced for each O-D pair . From the constraint of the LP (4 ###reference_###), , i.e., all the requests for the -th cycle will be serviced and the cycle ends.\nBy combining the time each of the above steps takes, it follows that\nwhere .\nWithout loss of generality, we assume that the ordering by which \u2019s are chosen at each cycle are fixed and, when a cycle ends, the next cycle starts once the airspace becomes empty and the start time of is a multiple of . Finally, we assume that the initial distribution of aircraft before each is activated takes minutes. With these assumptions, we can cast the network as a discrete-time Markov chain with the state . Since the state is reachable from all other states, and , the chain is irreducible and aperiodic. Consider the function\nwhere is the set of -tuples of non-negative integers. Note that is a non-negative integer from our earlier assumption that the cycle start times are a multiple of . We let for brevity.\nWe start by showing that\nTo show (6 ###reference_###), let , and let be the cumulative number of trip requests for the O-D pair during the time interval . Note that , which implies from the strong law of large numbers that, with probability one,\nBy the assumption of the theorem, . Hence, with probability one, there exists such that for all we have . Since is an open set, for a given , there exists non-negative with such that . For , define . Then, \u2019s are a feasible solution to the LP (4 ###reference_###), and . 
Therefore, from (5 ###reference_###) and with probability one, it follows for all that\nwhich in turn implies, with probability one, that\nFinally, since the number of trip requests for each O-D pair is at most per minutes, the sequence is upper bounded by an integrable function. Hence, from (7 ###reference_###) and Fatou\u2019s Lemma, (6 ###reference_###) follows.\nWe will now use (6 ###reference_###) to show that the network is under-saturated. Note that (6 ###reference_###) implies that there exists and such that for all we have\nwhich in turn implies that\nFurthermore, for all , where the first inequality follows from the fact that for any O-D pair . Therefore, , where . Finally, if , then . Therefore,\nwhere we have used . Combining all the previous steps gives\nwhere (a finite set). From this and the well-known Foster-Lyapunov drift criterion [13 ###reference_b13###, Theorem 14.0.1], it follows that for all , i.e., the network is under-saturated."
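The proof builds a feasible schedule from the solution of the LP (4), whose exact statement was lost in extraction. Below is a schematic CVXPY sketch of an LP of this general form, choosing how long each safe service vector is activated so that all requests queued for the cycle are covered; the service-vector matrix, queue vector, and service rate are illustrative assumptions, not the paper's values.

```python
import cvxpy as cp
import numpy as np

# Rows of S are safe service vectors, columns are O-D pairs; Q holds the
# requests queued at the start of the cycle. All values are illustrative.
S = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 0, 0]], dtype=float)
Q = np.array([6.0, 4.0, 3.0, 5.0])   # queued requests per O-D pair
rate = 0.5                            # serviced requests per minute

T = cp.Variable(S.shape[0], nonneg=True)        # activation time per vector
problem = cp.Problem(cp.Minimize(cp.sum(T)),
                     [rate * (S.T @ T) >= Q])   # component-wise coverage
problem.solve()
print("durations [min]:", np.round(T.value, 2),
      "| total:", round(problem.value, 2))
```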
78
+ },
79
+ {
80
+ "section_id": "Appendix 2",
81
+ "parent_section_id": null,
82
+ "section_name": "Appendix B Proof of Theorem 2",
83
+ "text": "We use a proof by contradiction. Suppose that some safe-by-construction policy keeps the network under-saturated but . Then, for any non-negative with , there exists some O-D pair such that .\nWithout loss of generality, we may assume that whenever the service vector becomes active, it remains active for a time interval that is a multiple of . Given , let , and let be the proportion of time that the service vector has been active under the policy up to time . Then, for all and . Therefore, there exists such that . Note that when the service vector is active, the flight requests for the O-D pair are serviced at the rate of at most . Hence, the number flight requests for the O-D pair that have been serviced by up to time is at most up to time . Let be the cumulative number of flight requests for the O-D pair up to time . We have\nwhich implies\nBy letting , it follows from the strong law of large numbers that, with probability one,\nSince , then, with probability one, is bounded away from zero. Hence,\nCombining this with Fatou\u2019s Lemma imply that the expected number of flight requests for the O-D pair grows unbounded. This contradicts the network being under-saturated."
84
+ }
85
+ ],
86
+ "tables": {},
87
+ "image_paths": {
88
+ "1": {
89
+ "figure_path": "2309.07139v2_figure_1.png",
90
+ "caption": "Figure 1: A UAM network with V=4\ud835\udc494V=4italic_V = 4 vertiports (blue circles) and P=8\ud835\udc438P=8italic_P = 8 O-D pairs (1,3)13(1,3)( 1 , 3 ), (1,4)14(1,4)( 1 , 4 ), (2,3)23(2,3)( 2 , 3 ), (2,4)24(2,4)( 2 , 4 ), (3,1)31(3,1)( 3 , 1 ), (4,2)42(4,2)( 4 , 2 ), (1,2)12(1,2)( 1 , 2 ), and (2,1)21(2,1)( 2 , 1 ).",
91
+ "url": "http://arxiv.org/html/2309.07139v2/extracted/5486928/Figures/graph-example.png"
92
+ },
93
+ "2": {
94
+ "figure_path": "2309.07139v2_figure_2.png",
95
+ "caption": "Figure 2: The UAM network for the city of Los Angeles. The blue circles show the vertiports and the orange arrows show the links.",
96
+ "url": "http://arxiv.org/html/2309.07139v2/extracted/5486928/Figures/LA-case-study.png"
97
+ },
98
+ "3": {
99
+ "figure_path": "2309.07139v2_figure_3.png",
100
+ "caption": "Figure 3: The rate of trip requests per \u03c4\ud835\udf0f\\tauitalic_\u03c4 minutes (\u03bb\u2062(t)\ud835\udf06\ud835\udc61\\lambda(t)italic_\u03bb ( italic_t )).",
101
+ "url": "http://arxiv.org/html/2309.07139v2/extracted/5486928/Figures/arrival-rate.png"
102
+ },
103
+ "4": {
104
+ "figure_path": "2309.07139v2_figure_4.png",
105
+ "caption": "Figure 4: The travel time under the VertiSync and FCFS policies for the demand \u03bb\u2062(t)\ud835\udf06\ud835\udc61\\lambda(t)italic_\u03bb ( italic_t ).",
106
+ "url": "http://arxiv.org/html/2309.07139v2/extracted/5486928/Figures/travel-time-comparison.png"
107
+ },
108
+ "5": {
109
+ "figure_path": "2309.07139v2_figure_5.png",
110
+ "caption": "Figure 5: The travel time under the VertiSync policy when the demand is increased to 1.2\u2062\u03bb\u2062(t)1.2\ud835\udf06\ud835\udc611.2\\lambda(t)1.2 italic_\u03bb ( italic_t ) (over-saturated regime), and the ground transportation travel time.",
111
+ "url": "http://arxiv.org/html/2309.07139v2/extracted/5486928/Figures/travel-time-ground.png"
112
+ }
113
+ },
114
+ "validation": true,
115
+ "references": [
116
+ {
117
+ "1": {
118
+ "title": "Springer Science & Business Media, 2012.",
119
+ "author": "S. P. Meyn and R. L. Tweedie, Markov chains and stochastic stability.",
120
+ "venue": null,
121
+ "url": null
122
+ }
123
+ }
124
+ ],
125
+ "url": "http://arxiv.org/html/2309.07139v2"
126
+ }
20240322/2309.07289v3.json ADDED
@@ -0,0 +1,659 @@
1
+ {
2
+ "title": "User Training with Error Augmentation for sEMG-based Gesture Classification",
3
+ "abstract": "We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wristband configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration; modified feedback, in which we applied a hidden augmentation of error to these probabilities; and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that relative to the baseline, the modified feedback condition led to significantly improved accuracy. Class separation also improved, though this trend was not significant.\nThese findings suggest that real-time feedback in a gamified user interface with manipulation of feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications.\u2020\u2020Code and data are available on-line at: https://github.com/neu-spiral/emg-feedback-user-training",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Surface electromyography (sEMG) provides a convenient sensor modality for human-computer interaction (HCI) applications [1 ###reference_b1###]. In the past two decades, research efforts have sought to translate the electrical activity associated with muscle contraction into control commands for general use computing, prosthetic control, and motor rehabilitation [2 ###reference_b2###, 3 ###reference_b3###]. As the demand for more intuitive and responsive interfaces has grown, the focus on sEMG-based gesture recognition has intensified.\nTraditional approaches to sEMG-based gesture recognition assumed stationarity of the mapping between muscle activation and gestures, and did not consider the user\u2019s ability to adapt their behavior based on feedback about gesture classification performance. The emergence of co-adaptive learning algorithms in the past decade represented a marked shift, acknowledging both human and machine learning as parts of an integrated system [4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###].\nOne key finding from these approaches is that when the human receives continuous feedback about the mapping of muscle activation to gesture, they can increase classification performance through behavioral adaptations [10 ###reference_b10###, 11 ###reference_b11###]. These adaptations can result in increased class separability [12 ###reference_b12###] and increased movement repeatability [13 ###reference_b13###]. However, the relationship between feature space adaptations and classifier performance is complex. Increased real-time classifier performance has also been found even in the absence of EMG feature space changes in relative class distributions [14 ###reference_b14###]. Despite the complex relationship between feature space class distributions and classifier performance, the influence of human learning on myoelectric gesture classification remains a compelling target of investigation.\nHuman learning about myoelectric gesture classification can be considered as a form of motor skill learning. In the literature on motor learning, the canonical view is that humans use a combination of intrinsic feedback (sensory information) and augmented feedback (information that is not readily accessible through intrinsic feedback) [15 ###reference_b15###].\nAugmented feedback can be further categorized as providing \u2018knowledge of performance\u2019 (information about specific movements and muscle activations), or \u2018knowledge of results\u2019 (information about outcomes) [16 ###reference_b16###, 17 ###reference_b17###].\nIn the present study, we focus on myoelectric control, where providing knowledge of results corresponds to providing output from a classifier, while knowledge of performance corresponds to descriptions of the features extracted from the sEMG. The ability to shape human behavior in traditional motor skill learning settings through the use of augmented feedback is well established. Strategies such as error augmentation [18 ###reference_b18###, 19 ###reference_b19###, 20 ###reference_b20###] and reward manipulation [21 ###reference_b21###, 22 ###reference_b22###] have been shown to affect the rate and retention of learning as well as behavioral variability. 
Yet, to our knowledge, the use of error-augmented feedback has not been tested for co-adaptation approaches to sEMG-based gesture recognition.\nIn this study, we conducted an experiment to test whether modified feedback about class posterior probabilities affects performance in a myoelectric control task.\nWe provided subjects with a form of error-augmented knowledge of results; by altering class probabilities, we diminished the differences between classes, making it harder for the target gesture class to exceed a predefined decision threshold. In particular, we softened probabilities towards a uniform distribution. This form of feedback manipulation is closely related to previous uses of error augmentation, also referred to as error amplification [23 ###reference_b23###, 24 ###reference_b24###, 25 ###reference_b25###].\nAs mentioned, this form of feedback has been shown to hasten learning and improve the quality of self-evaluation [18 ###reference_b18###, 26 ###reference_b26###] and increase retention of learned skills [27 ###reference_b27###, 23 ###reference_b23###].\nWe therefore hypothesized that error amplification by softening probabilities would increase subsequent gesture classification performance by enhancing human skill learning.\nThe knowledge gained from this investigation has broad potential applications for use in myoelectric prosthetics, assistive devices, and human-computer interfaces where users perform only a brief 4-minute calibration, and human learning may be critical to the success of model performance."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Experimental Design",
15
+ "text": "All protocols were approved by the Northeastern University Institutional Review Board (IRB number 15-10-22) in conformance with the declaration of Helsinki."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Subjects",
21
+ "text": "Forty-four right-handed subjects (21 male / 23 female, mean age 1 standard deviation: years) participated after providing IRB-approved written informed consent. Subjects were free of orthopedic or neurological diseases that could interfere with the task and had normal or corrected-to-normal vision."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Experimental Setup",
27
+ "text": "###figure_1### Subjects viewed a computer display while seated at a table with their right arm positioned comfortably in an armrest trough.\nSurface electromyography (sEMG) (Trigno, Delsys Inc., sampling frequency: Hz) was collected from the muscles of the right forearm.\nEight sEMG electrodes were placed at equidistant positions around the circumference of the forearm, at a four finger-width distance from the ulnar styloid (the subject\u2019s left hand was wrapped around the right forearm at the ulnar styloid to determine the sEMG placement).\nThe first electrode was placed mid-line on the dorsal aspect of the forearm, and the other electrodes were then equally spaced (see Figure 1 ###reference_###)."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Data Acquisition",
33
+ "text": ""
34
+ },
35
+ {
36
+ "section_id": "2.3.1",
37
+ "parent_section_id": "2.3",
38
+ "section_name": "II-C1 Subject Group Assignment",
39
+ "text": "Subjects were randomly assigned to one of three groups and performed a series of tasks as described below.\nSubjects who were unable to complete all tasks were excluded from further analysis.\nEach subject group was assigned a different feedback condition: no feedback (\u201cControl\u201d, N=), veridical feedback (\u201cVeridical\u201d, N=), or modified feedback (\u201cModified\u201d, N=) (see Section II-C5 ###reference_.SSS5### for details).\nSubject group assignments were randomized before enrollment. In order to control for the possible confounding effect of biological variation in baseline performance across groups, we adopted a within-subject normalization strategy (see Section IV-A ###reference_###)."
40
+ },
41
+ {
42
+ "section_id": "2.3.2",
43
+ "parent_section_id": "2.3",
44
+ "section_name": "II-C2 Gesture Timing",
45
+ "text": "Subjects performed a series of tasks composed of one or more gesture trials to move an avatar dice (see details of user interface below).\nPrior to the start of a trial, the subject\u2019s forearm and wrist rested in a pronated position on the trough with the wrist neutral. In each trial, subjects were required to rest or to produce one of eight active gestures (label and action provided in brackets): index-thumb pinch [\u201cPinch\u201d, decrease number on avatar dice], index-thumb key press [\u201cThumb\u201d, increase the number on avatar dice], closed fist [\u201cFist\u201d, decrease size of avatar dice], full finger extension [\u201cOpen\u201d, increase size of avatar dice], wrist extension [\u201cUp\u201d, move up], wrist flexion [\u201cDown\u201d, move down], wrist radial deviation [\u201cLeft\u201d, move left], wrist ulnar deviation [\u201cRight\u201d, move right].\nEach trial began with a \u2018prompting\u2019 epoch ( sec) cued by a yellow bounding box the participant\u2019s display and a picture of the instructed gesture (Calibration and Instructed blocks only, see below), a \u2018gesture production\u2019 epoch ( sec) cued by a green bounding box, and a \u2018recovery\u2019 epoch ( sec) cued by a red bounding box.\nThe final milliseconds of the gesture production epoch were used for feature extraction and classification.\nFigure 2 ###reference_### shows the timing of an example gesture trial.\n###figure_2### This trial timing structure was chosen empirically to give enough time for subjects to prepare for each upcoming trial while keeping the total experiment duration short. Gesture trial timing was kept consistent to ensure that subject reaction times were not a source of variation in performance.\nEach experimental session was divided into four blocks.\nBlocks one, two, and four used the trial timing described above. By contrast, in block three (in which some subjects received model feedback) the gesture production epoch lasted seconds for each gesture.\nDuring this time period, continuous feedback was provided by applying a classifier model on a sliding window of data, with a step size of milliseconds (based on the frequency of data packets delivered by our sEMG sensors)."
46
+ },
47
+ {
48
+ "section_id": "2.3.3",
49
+ "parent_section_id": "2.3",
50
+ "section_name": "II-C3 Block One: Calibration",
51
+ "text": "Subjects from all groups were instructed to perform five consecutive repetitions of each active gesture and eight repetitions of a rest gesture in which they were asked to relax the hand. This consecutive structure was chosen to help keep the task simple while the participant initially learned the set of available gestures. A classification model was trained on this small dataset before continuing to the next experimental block."
52
+ },
53
+ {
54
+ "section_id": "2.3.4",
55
+ "parent_section_id": "2.3",
56
+ "section_name": "II-C4 Block Two: Instructed Games",
57
+ "text": "Subjects from all groups engaged in four practice mini-games. In each mini-game, subjects were instructed to perform a sequence of six gestures to bring an avatar that was shown on the computer screen from a starting position to a desired goal state (e.g. see Figure 3 ###reference_###).\nThe trial timing epochs (prompting, gesture production, and rest) were as shown in Figure 2 ###reference_###. In this block, the classifier model\u2019s predicted probabilities were displayed as post-hoc feedback to the user, but were not used to modify the avatar position or state; the avatar always moved one step closer to the goal after each trial, so that each game lasted exactly six moves.\nThese games were structured so that the total gestures ( games with moves each) were evenly distributed among the active gestures.\nAfter this block, the classification model was retrained from scratch using the labeled data from blocks one and two.\nThis training set comprised examples for each of the classes ( active gestures and \u201cRest\u201d).\n###figure_3###"
58
+ },
59
+ {
60
+ "section_id": "2.3.5",
61
+ "parent_section_id": "2.3",
62
+ "section_name": "II-C5 Block Three: Live Feedback",
63
+ "text": "Only subjects in the veridical feedback and modified feedback groups participated in this block. Subjects performed only one extended trial for each gesture while viewing real-time feedback; in these trials, the gesture production epoch lasted seconds. Subjects were asked to freely explore their hand posture in order to maximize the predicted probability of the current gesture class, shown on a real-time histogram of the trained model\u2019s output.\nFor the veridical feedback group, predicted class probabilities were displayed without modification. For the modified feedback group, probabilities were softened towards a uniform distribution as described in Section III-C ###reference_###. As discussed previously, the motivation behind this softening procedure was to encourage participants to compensate by performing more precise gestures.\nSubjects in the modified feedback group were not informed about this softening procedure."
64
+ },
65
+ {
66
+ "section_id": "2.3.6",
67
+ "parent_section_id": "2.3",
68
+ "section_name": "II-C6 Block Four: Free Games",
69
+ "text": "All subjects were instructed to perform a series of mini-games. The mini-games had the same structure as in block two, with each game requiring a minimum of six moves to bring the avatar from its starting position to a desired goal state. However, unlike the practice mini-games of block two, subjects were tasked with bringing the avatar to its goal state by planning and performing a gesture sequence of their choice. Critically, the avatar only changed its state when the classifier assigned one class a predicted probability above a decision threshold of .\nThe experimenter manually recorded each attempted gesture to serve as labels for subsequent analysis, and the participant\u2019s hand movements were also recorded on video to cross-check these labels."
70
+ },
71
+ {
72
+ "section_id": "3",
73
+ "parent_section_id": null,
74
+ "section_name": "III Signal Modeling",
75
+ "text": ""
76
+ },
77
+ {
78
+ "section_id": "3.1",
79
+ "parent_section_id": "3",
80
+ "section_name": "III-A Feature Extraction",
81
+ "text": "As described in Section II-C2 ###reference_.SSS2###, we extracted raw data for classification from the final ms of the active gesture production period of each gesture trial.\nFrom each of the sensor channels of raw sEMG, we computed the Root-Mean-Square (RMS) value and the median frequency of the Fourier spectrum, resulting in -dimensional features.\nGiven a data vector , RMS is defined as:\nThe Median Power Frequency is defined as the frequency value that divides the Power Spectral Density (PSD) into two regions with equal power [28 ###reference_b28###]:"
82
+ },
83
+ {
84
+ "section_id": "3.2",
85
+ "parent_section_id": "3",
86
+ "section_name": "III-B Classification Model",
87
+ "text": "Given extracted features, we used a two-stage classification pipeline to predict among possible gestures: Up, Thumb, Right, Pinch, Down, Fist, Left, Open, and Rest.\nThe classification model consisted of an encoder formed from Support Vector Machine (SVM) models that produced a latent representation, and a logistic regression classifier that produced predicted class probabilities.\nIn the encoder portion of the model, we trained a one-vs-one (OVO) SVM classifier [29 ###reference_b29###] for each of the pairs of gestures.\nEach of these OVO-SVM models produced a scalar output (representing the probability of assigning to the first of its two classes); these scalars were stacked into a latent vector and passed to the logistic regression model.\nGiven a supervised training dataset, we first fit the one-vs-one SVM models using linear programming with the CVXPY Python library [30 ###reference_b30###]. The linear programming objective we used was based on the semi-supervised SVM formulation of [31 ###reference_b31###], to allow future semi-supervised extensions. Specifically, the SVM parameters were trained according to the following optimization problem:\nwhere were the parameters to be optimized, were slack variables allowing misclassification of individual points, and is a fixed penalty parameter controlling the margin\u2019s strictness.\nWe implemented the logistic regression classifier with the PyTorch Python library [32 ###reference_b32###] using a single linear layer and a SoftMax function. After the SVM encoder portion of the model was trained, it was held fixed while the logistic regression classifier model was trained by stochastic gradient descent to minimize the cross-entropy loss. We trained the classifier model for epochs with a batch size of and AdamW [33 ###reference_b33###] optimizer. See Algorithm 1 ###reference_### for a summary of our classifier training procedure.\nAs noted, participants in the veridical feedback and modified feedback groups were shown real-time output from the model.\nDue to the high sampling frequency of the sEMG sensors used, and the relatively computationally simple prediction model, the system was capable of making very fast adjustments to the predicted output, which can result in unwanted jitter due to slight fluctuations in raw signal or hand positioning.\nTherefore, we used an exponential moving average (EMA) to smooth the model\u2019s predictions in time.\nAt time-step , the model produces a raw probability vector , which is then mixed with the previous probability vector using a momentum parameter to produce a smoothed vector :\nFor values of close to , this causes the probability vector to update more slowly and smoothly. We used a value of , which alleviated the issue of jitter in the model output, while still allowing model outputs to change quickly between different gestures."
88
+ },
89
+ {
90
+ "section_id": "3.3",
91
+ "parent_section_id": "3",
92
+ "section_name": "III-C Modified Feedback",
93
+ "text": "As mentioned above, subjects in the modified feedback group were shown modified real-time output from the trained classifier during block three of the experiment.\nSpecifically, the vector of smoothed predicted probabilities from the model was modified according to the following formula:\nwhere the modification exponent was set to , and represents the classes used.\nThe value of was chosen subjectively to make a noticeable effect while not being too extreme; since subjects must still be able to exceed a decision threshold of for a gesture to be correct.\nNote that this feedback can be viewed as a form of error augmentation. When asked to perform a certain target gesture, we can consider the error to be the distance (e.g. cross-entropy distance or L2 norm) between the model\u2019s predicted probability vector and an idealized probability vector in which all mass is concentrated on the target class. Subjects in both feedback groups were instructed to explore gestures and maximize the predicted probability of the target class; thus they were instructed to minimize this error. However, subjects in the modified feedback group viewed a flattened probability vector; this flattening causes the vector to appear to have greater error. See Figure 5 ###reference_### for an example."
94
+ },
95
+ {
96
+ "section_id": "3.4",
97
+ "parent_section_id": "3",
98
+ "section_name": "III-D User Interface and Software Design",
99
+ "text": "Figure 4 ###reference_### shows the user interface (UI) displayed to participants. All components of the UI were implemented using PyQt Python package [34 ###reference_b34###].\nData collection and real-time processing were performed using the LabGraph Python package [35 ###reference_b35###].\nOn the top left, the UI displayed an instructed gesture via image and text during blocks one and two (see Section II-C3 ###reference_.SSS3### and II-C4 ###reference_.SSS4###).\nOn the bottom left, the UI showed post-hoc predicted probabilities for each gesture as a radial plot.\nThe length of each line was scaled according to the value; the outer circle represented a value of , and the inner circle represented a value of (i.e. the model\u2019s decision threshold).\nThe opacity of gesture images around the radial plot was also scaled according to the value.\nThe outer edge of the UI was colored yellow, green, or red to indicate gesture timing epoch as described in Section II-C2 ###reference_.SSS2###.\nOn the right of the UI was the task window in which the mini-games were played during blocks two and four (see Section II-C4 ###reference_.SSS4### and II-C6 ###reference_.SSS6###).\nAs described previously, participants used one of active gestures to move their avatar (the blue die).\nThe goal of each mini-game in blocks two and four was to use these gestures to match the blue die to the gray target die.\n###figure_4### During block three (see Section II-C5 ###reference_.SSS5###), participants who received real-time feedback were presented with a different display, as shown in Figure 5 ###reference_###.\nHere, the probability of each class was displayed using a bar plot that was updated in real-time.\nThe participant\u2019s goal during this block of the experiment was to explore hand positions in order to maximize the predicted probability of the current gesture class. For participants in the modified feedback group, model outputs were flattened towards a uniform distribution using Equation 5 ###reference_###.\n###figure_5### ###figure_6###"
100
+ },
101
+ {
102
+ "section_id": "3.5",
103
+ "parent_section_id": "3",
104
+ "section_name": "III-E Classifier Metrics",
105
+ "text": "As mentioned in Section II-C6 ###reference_.SSS6###, the experimenter recorded each intended gesture made by the participant, so that model accuracy could be evaluated after-the-fact.\nAccuracy was defined as the fraction of correctly classified items.\nIn addition to the active gestures and the \u201crest\u201d class, the decision threshold of that was used resulted in another possible outcome for gesture trials when no gesture rose above the decision threshold, which we refer to as \u201cNoClass\u201d.\nGesture trials in which the subject was not prepared to make a gesture during the \u201cgesture production\u201d epoch were recorded as having a true label of \u201cRest\u201d."
106
+ },
107
+ {
108
+ "section_id": "3.6",
109
+ "parent_section_id": "3",
110
+ "section_name": "III-F Feature-Space Class Structure",
111
+ "text": "To evaluate how feedback affects human learning, we analyzed the feature-space distribution of trials from different gestures performed in block four of the experiment. This feature-space representation does not depend on the model, since these features are obtained using simple, deterministic transformations of the raw data (RMS and median frequency after Fourier transform). The differences in feature-space class structure across treatment groups can therefore give information about human learning.\nPrevious research has introduced a variety of feature space metrics for similar tasks, such as separability index and repeatability index [14 ###reference_b14###, 12 ###reference_b12###].\nSuch metrics are based on the Mahalanobis distance and require computing a class covariance matrix. Since our experiment is focused on short calibration times and we operated in a regime of limited data, we do not have enough samples to compute reasonable estimates of class covariance matrices, even with shrinkage techniques. We therefore used feature-space metrics based on pairwise comparisons between samples.\nWe base our analysis of feature-space structure on a Radial Basis Function (RBF) kernel similarity measure. The RBF kernel computes a similarity measure that corresponds to an implicit infinite-dimensional vector space. For two feature vectors belonging to a dataset and a length scale parameter , the RBF kernel similarity is computed as:\nThe length scale is an important hyperparameter that determines the rate at which similarities decay as two points are moved farther apart.\nWe follow the so-called \u201cmedian heuristic\u201d [36 ###reference_b36###], in which is set based on the median length scale of a dataset :\nWe set individually for each subject, based on all of their pooled gesture trials.\nNote that this approach is effectively a non-linear rescaling of pairwise Euclidean distances, and also handles the potential issue of outlier points having extremely large Euclidean distances.\nWe use this notion of kernel similarity to construct a class similarity matrix for each subject. For classes , we build a square, symmetric matrix such that the entry at position describes the average RBF kernel similarity between items in classes and :\nAfter computing the entries in a similarity matrix, we normalize the entries to the range so that these matrices may be easily compared across subjects and groups.\nClasses that are closer together in feature space will have a higher average similarity and therefore a larger entry in this similarity matrix.\nA subject whose gestures are easily classifiable may tend to have precise gestures that are also well-separated from each other.\nThis would result in having a high average similarity between trials in the same gesture class (diagonal entries of the class similarity matrix) and a low average similarity between trials of different classes (off-diagonal entries).\nSee Section IV-D ###reference_### for class similarity matrices from each experimental group, and see Figure 6 ###reference_### for didactic examples of similarity matrix .\nIn order to look for trends in the feature-space distribution over time and to identify global trends across groups, we also summarize these normalized class similarity matrices using a scalar class separation measure, , which we define as the average within-class similarity divided by the average between-class similarity. 
Given a normalized similarity matrix D as described above,\nd_sep = (average of the diagonal entries of D) / (average of the off-diagonal entries of D).\nAs indicated above, larger within-class similarities indicate that trials from the same gesture are precise and repeated with high-fidelity, while smaller between-class similarities indicate that trials from different gestures are easily distinguished.\nThus, a dataset with a larger value of d_sep may contain gestures that will be more easily classified.\nIn Figure 6 ###reference_###, we show examples of class similarity matrix D and scalar similarity measure d_sep. To produce an example that can be easily visualized, we select a subject from the \u201cModified\u201d condition that showed a large improvement in feature-space separation.\nFor this subject, we select three gestures (\u201cLeft\u201d, \u201cDown\u201d, and \u201cRight\u201d) and three features (RMS value from electrodes 1, 4, and 7). In the top row, we show metrics for this subject\u2019s data during the \u201cCalibration\u201d and \u201cInstructed\u201d blocks, and in the bottom row, we show metrics from the \u201cFree\u201d block; recall that the subject experiences live feedback training after the \u201cInstructed\u201d block.\nWe observe that the features of each class become more distinct after the user performs live feedback training; this is captured as an increase in the similarities on the diagonal of D and a decrease in similarities off-diagonal. These changes in D are also summarized in d_sep, which increases from to .\n###figure_7### ###figure_8### ###figure_9### ###figure_10###"
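A numpy/scipy sketch of the similarity analysis, assuming a standard RBF form with the median-heuristic length scale and min-max normalization of D; the exact constants in the paper's (elided) equations may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def class_similarity(X, y):
    """Normalized class similarity matrix D and scalar separation d_sep."""
    dists = pdist(X)                                  # pairwise distances
    sigma = np.median(dists)                          # median heuristic
    K = np.exp(-squareform(dists) ** 2 / (2.0 * sigma ** 2))
    classes = np.unique(y)
    D = np.array([[K[np.ix_(y == a, y == b)].mean() for b in classes]
                  for a in classes])
    D = (D - D.min()) / (D.max() - D.min())           # normalize to [0, 1]
    within = np.mean(np.diag(D))
    between = D[~np.eye(len(classes), dtype=bool)].mean()
    return D, within / between

X = np.random.randn(90, 16) + np.repeat(np.arange(9), 10)[:, None]
y = np.repeat(np.arange(9), 10)
D, d_sep = class_similarity(X, y)
print("d_sep:", round(d_sep, 3))
```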
112
+ },
113
+ {
114
+ "section_id": "3.7",
115
+ "parent_section_id": "3",
116
+ "section_name": "III-G Within-Subject Normalization",
117
+ "text": "The focus of this work is to measure the effect of the proposed veridical and modified feedback strategies on subject performance.\nWe note that overall subject performance may be influenced by a relatively large number of factors of variation, such as factors affecting dexterity and motor precision, subject motor learning speed, and subject-intrinsic factors affecting raw sEMG signal-to-noise ratio.\nThus, a prohibitively large sample size may be required to account for this variation without normalization.\nWe therefore adopt a within-subject normalization strategy, obtaining baseline statistics for each subject using only data measured before our interventions.\nFor each subject, we measure baseline accuracy by training a model from scratch using that subject\u2019s block one data (calibration, Section II-C3 ###reference_.SSS3###), and testing this model\u2019s classification accuracy on the subject\u2019s block two data (instructed games, Section II-C4 ###reference_.SSS4###).\nWe obtain baselines for class similarity matrices in the same manner. Within each subject, we collect all gesture trials from the first two experimental blocks, and compute a normalized class similarity matrix. This is subtracted from the matrix computed using data from block four (free games, Section II-C6 ###reference_.SSS6###) to visualize the difference in similarity for each class. Note that due to the short experimental design, we have relatively few samples per class with which to construct each matrix, and therefore this representation may be somewhat noisy.\nWe transform the normalized similarity matrix describing blocks one and two into the scalar class separation measure , and likewise transform the similarity matrix describing block four. This results in a baseline-subtracted class separation measure.\nOverall, we measure changes from baseline as follows:"
118
+ },
119
+ {
120
+ "section_id": "3.8",
121
+ "parent_section_id": "3",
122
+ "section_name": "III-H Statistical Analysis",
123
+ "text": "We performed several statistical analyses to determine the effect of feedback on classification accuracy and feature space class separation. Differences between feedback groups at baseline (, ) were analyzed using one-way ANOVAs. Likewise, the effect of the feedback group on change scores (, ) was analyzed with one-way ANOVAs (). Alpha level was set at 0.05. Significant findings were further analyzed using post-hoc paired comparisons with Bonferroni correction for multiple comparisons. One-sided one-sample t-tests with Bonferroni correction for multiple comparisons () were used on change scores to test whether each feedback group significantly increased accuracy and distance."
124
+ },
125
+ {
126
+ "section_id": "4",
127
+ "parent_section_id": null,
128
+ "section_name": "IV Results",
129
+ "text": "All participants were able to successfully complete the experiment, with no reported adverse events."
130
+ },
131
+ {
132
+ "section_id": "4.1",
133
+ "parent_section_id": "4",
134
+ "section_name": "IV-A Group Baselines",
135
+ "text": "In order to check whether random group assignment was a potential confounding factor in our comparisons between groups, we analyzed baseline metrics for each experimental group.\nOne-way ANOVA indicated no significant differences in baseline accuracy (, ) or class separation (, ) between experimental groups.\nFigure 7 ###reference_### shows a group-level summary of the baseline accuracy and class separation measure. Though no significant differences were found, mean baseline accuracy and class separation scores were greatest in the Control group and smallest in the Modified group.\n###figure_11###"
136
+ },
137
+ {
138
+ "section_id": "4.2",
139
+ "parent_section_id": "4",
140
+ "section_name": "IV-B Effects of Feedback",
141
+ "text": "Individual one-sided one-sample t-tests were used to test for significant improvement in Free block performance from baseline (Bonferroni corrected for 3 comparisons, ). For accuracy, only the Modified group showed significant improvement (, ). No group showed a significant improvement in class separation. One-way ANOVAs indicated no significant between-group differences in (, ) or (, ).\nFigure 8 ###reference_### shows the average change from baseline performance in each experimental group, as measured in the accuracy of gesture classification (left panel) and feature-space class separation measure (right panel).\nThese data demonstrate that, on average, the increase in performance over the course of the experiment was greatest for subjects in the modified feedback group.\nNote that the variation between subjects is relatively high, resulting in overlapping estimates of mean performance.\nWe observe that both groups that received real-time feedback exhibited larger variation; in particular, the interquartile range for these two groups ( and units for Veridical and Modified, respectively) is nearly twice the range of the control group ( units). This may indicate that some subjects are better at learning from this form of visual feedback than others, or that some subjects were adversely affected by feedback while others were positively affected.\n###figure_12###"
142
+ },
143
+ {
144
+ "section_id": "4.3",
145
+ "parent_section_id": "4",
146
+ "section_name": "IV-C Class Confusion",
147
+ "text": "###figure_13### ###figure_14### ###figure_15### Figure 9 ###reference_### shows the group average confusion matrices of gesture trials during block four (free games) for each group. Rows represent the classification of the attempted gesture, normalized to .\nThere are notable similarities across the groups, indicating several gestures that are intrinsically difficult and gesture pairs that are inherently close. In particular, the \u201cthumb\u201d, \u201cpinch\u201d, and \u201cfist\u201d gestures all have a large fraction (about ) of gestures that fall below the decision threshold. Similarly, there was an overall trend that these three gestures tended to be confused, resulting in non-zero entries for the off-diagonal entries (fist, thumb), (fist, pinch), (thumb, pinch), etc.\nThe similarity between groups is an indication that feedback did not grossly disrupt subject behavior for certain gesture classes or cause substantially different effects for different classes."
148
+ },
149
+ {
150
+ "section_id": "4.4",
151
+ "parent_section_id": "4",
152
+ "section_name": "IV-D Class Feature Space Similarity",
153
+ "text": "###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### Figure 10 ###reference_### shows the average normalized class similarity matrix of each group.\nBy examining the diagonal entries, we can understand the repeatability of gestures (i.e. the similarity between items of the same class); by examining the off-diagonal entries, we can understand the separability of gestures (i.e. the similarity across different classes).\nAs described previously, a \u201cdesirable\u201d pattern for easy downstream classification (in which the subject produced consistent and well-separated gestures) would consist of larger entries on the diagonal and smaller entries off-diagonal.\nEach group demonstrated a consistent pattern in which the diagonal entries were brighter than the off-diagonal entries, indicating that the gestures were generally repeatable and well-separated. There was also a consistent pattern of bright off-diagonal cells, indicating high similarity between three specific gestures: \u201cpinch\u201d, \u201cfist\u201d, and \u201cthumb\u201d.\nThese patterns match well with the patterns visible in the class confusion matrices shown in Figure 9 ###reference_###.\nThis correspondence between our similarity metrics and confusion matrices may indicate that our chosen similarity metric is well-suited to this setting and aligns well with model performance.\nWe did not observe any gross changes in the structure of class similarity between groups; note that such a change could have occurred if feedback affected gestures differently, and this effect may not have been visible by only inspecting the scalar metric."
154
+ },
155
+ {
156
+ "section_id": "5",
157
+ "parent_section_id": null,
158
+ "section_name": "Discussion and Future Work",
159
+ "text": "This study tested the potential of modified continuous feedback of model performance in a gamified user interface for rapid user training on a sEMG-based gesture recognition system for controlling actions on a computer display.\nWe hypothesized that we could use manipulation of feedback about the gesture class probabilities in a short (4-minute) online learning session to shape user behavior in a manner that would increase the separation between muscle activation patterns of different gestures and increase the accuracy of model performance on future attempts. Overall, our results demonstrate that a short user training session using modified feedback has the potential to increase post-calibration performance (accuracy and class separation relative) when compared to veridical feedback and a no-feedback control."
160
+ },
161
+ {
162
+ "section_id": "5.1",
163
+ "parent_section_id": "5",
164
+ "section_name": "User Calibration",
165
+ "text": "Despite the emergence of research into methods for co-adaptive learning for sEMG-based gesture recognition, there have been few investigations specifically testing the effect of user training as a means of rapid calibration. Numerous studies have shown that extended user training on an sEMG-based controller results in significant gains in performance [37 ###reference_b37###, 13 ###reference_b13###, 12 ###reference_b12###]. The majority of these studies have found that increased model performance was accompanied by changes in muscle activation patterns that are theoretically favorable to better classification (such as improvements in class separability, variability, or repeatability). However, feature space characteristics of class distributions are not necessarily predictive of classifier performance, and this relationship is likely strongly dependent on the classifier used and the relationship between training and test data. For example, a recent investigation showed that the relationship between performance and feature-space metrics can be complex; these authors found that the real-time performance of an LDA classifier was only weakly correlated with class separability, but was not correlated with variability or repeatability [14 ###reference_b14###]. Krasoulis et. al. first demonstrated that short-term adaptation through biofeedback user training could positively impact prosthetic finger control using sEMG-based decoding [10 ###reference_b10###]. Our results demonstrate that subjects who received modified live feedback experienced a significant increase in classification accuracy. We also found that both veridical and modified feedback provided a trend of improvement in our feature space metric , though this effect was not statistically significant."
166
+ },
167
+ {
168
+ "section_id": "5.2",
169
+ "parent_section_id": "5",
170
+ "section_name": "Influence of Feedback Manipulation on User Behavior.",
171
+ "text": "In our experiments, the Modified feedback group showed the largest change in classification accuracy and class separability. Flattening of the class probabilities as was done here can be considered a form of error augmentation, since subjects were led to believe that the separation between classes was smaller than it actually was. This approach is most closely related to techniques involving feedback with \u201cerror amplification,\u201d which has been studied extensively. Feedback of performance outcomes that are worse than actual performance (i.e. error amplification) has been found to expedite motor adaptations to novel task constraints compared to accurate feedback [38 ###reference_b38###, 39 ###reference_b39###]. Amplification of task errors has also shown promise as an approach to facilitate motor recovery in patients with neurological disorders [25 ###reference_b25###, 40 ###reference_b40###]. Faster or more complete learning with error amplification has been attributed to more brain processes associated with greater attention to execution of the motor task [41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###] and reduction of sensorimotor noise [20 ###reference_b20###]. We speculate that improvement in classification accuracy with Modified feedback in this study may be a product of similar mechanisms."
172
+ },
173
+ {
174
+ "section_id": "5.3",
175
+ "parent_section_id": "5",
176
+ "section_name": "Selected Gestures",
177
+ "text": "We selected gestures that mimicked the manipulation of commonplace items such as remote controls and cell phones. No subject commented that the gestures were unfamiliar or difficult to perform.\nDirectional gestures using wrist movements (\u201cUp\u201d, \u201cDown\u201d, \u201cLeft\u201d, \u201cRight\u201d) were generally more separable and yielded higher classification accuracy compared to gestures using grasping movements (\u201cPinch\u201d, \u201cThumb\u201d, \u201cOpen\u201d, \u201cFist\u201d).\nThe extrinsic hand muscle groups used by each of these grasping gestures are similar, which may explain why subjects had a difficult time performing them accurately while also creating separation in muscle activation patterns.\nThus the feature-space similarity that we observed for these grasping gestures is somewhat expected."
178
+ },
179
+ {
180
+ "section_id": "5.4",
181
+ "parent_section_id": "5",
182
+ "section_name": "Limitations",
183
+ "text": "There were several limitations of the current work that may have affected the results and interpretations. Only a single classification model was used. Several machine learning methods, including artificial neural networks, linear discriminant analysis, support vector machines (SVM), and Gaussian mixture models have been previously used for sEMG-based control. The choice to use a model based on SVM and logistic regression was due to its simplicity and the popularity of SVM for this application. It is possible that the choice of classifier model affects both calibration accuracy and the way that users explore the mapping of muscle activation to gestures. Nevertheless, the user training scheme employed here likely has general benefits for use and understanding of human co-adaptive behavior.\nThere are a number of possible changes in the signal processing pipeline that may yield improvements in overall model performance. The active window for feature extraction may be tuned, and additional features such as time-frequency domain or higher-dimensional feature vectors may be extracted. The selected features (RMS, and median frequency) were chosen based on their common use for sEMG-based gesture classification and initial pilot testing. Future work should evaluate how sEMG feature selection affects user training."
184
+ },
185
+ {
186
+ "section_id": "5.5",
187
+ "parent_section_id": "5",
188
+ "section_name": "Designing Improved Feedback",
189
+ "text": "Only a single type of feedback manipulation was tested. We used a feedback manipulation that flattened probabilities across classes, making it more difficult to achieve a correct classification. This approach was selected as it was expected that participants would respond by increasing the separation between muscle activation patterns for different gestures. While we observed a non-significant trend of improvement in class separation, the manipulation was not directly optimized for this purpose. Future research should explore the optimization of feedback manipulation for shaping user behavior during co-adaptive sEMG-gesture recognition. Adaptive feedback manipulation based on user and model performance characteristics to target specific class confusions is an attractive future direction. Further improvement may come from iterating between rounds of visual feedback to induce human learning, and rounds of model re-training using the subject\u2019s most recent data.\nThe approach we used was a form of modified knowledge of results; future work could explore using modified knowledge of performance by giving the user feedback about feature space characteristics such as distance between the current feature vector and a representative item from the target class, or aggregate feature metrics describing properties like separability and repeatability."
190
+ }
191
+ ],
192
+ "appendix": [],
193
+ "tables": {},
194
+ "image_paths": {
195
+ "1": {
196
+ "figure_path": "2309.07289v3_figure_1.png",
197
+ "caption": "Figure 1: Electrode Placement. sEMG data is collected using 8888 Delsys Trigno sEMG sensors uniformly spaced around the right forearm.",
198
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/current/emg-placement-new.png"
199
+ },
200
+ "2": {
201
+ "figure_path": "2309.07289v3_figure_2.png",
202
+ "caption": "Figure 2: Gesture Trial Timing. In the yellow \u2018prompting\u2019 epoch, the subject sees an instruction. In the green \u2018gesture production\u2019 epoch, the subject performs the gesture. In the red \u2018recovery\u2019 epoch, the subject returns to the rest position. Features for classification are extracted from the last 500500500500 ms of gesture production to help ensure that steady-state features are collected.",
203
+ "url": "http://arxiv.org/html/2309.07289v3/x1.png"
204
+ },
205
+ "3": {
206
+ "figure_path": "2309.07289v3_figure_3.png",
207
+ "caption": "Figure 3: Example mini game. The blue player avatar must be moved to match the gray target avatar. The minimal path includes moving right, down twice, decreasing the die number (using a pinch gesture), and reducing size (using a fist gesture).",
208
+ "url": "http://arxiv.org/html/2309.07289v3/"
209
+ },
210
+ "4": {
211
+ "figure_path": "2309.07289v3_figure_4.png",
212
+ "caption": "Figure 4: The participant User Interface. Top left: instructed gesture. Bottom left: predicted gesture probabilities. Right: Task window including subject\u2019s avatar and target. Outer edge: gesture epoch indicator.",
213
+ "url": "http://arxiv.org/html/2309.07289v3/x3.png"
214
+ },
215
+ "5(a)": {
216
+ "figure_path": "2309.07289v3_figure_5(a).png",
217
+ "caption": "Figure 5: Top: Real-time probability feedback window. The horizontal line at 0.50.50.50.5 shows the decision threshold. Bottom: Example of probability values without modification (\u201cVeridical\u201d) and with modification (\u201cModified\u201d) as described in Sec. III-C for several hypothetical values of m\ud835\udc5amitalic_m. m=0.75\ud835\udc5a0.75m=0.75italic_m = 0.75 used for real experiments. Arrows highlight an example case where modification causes the gesture to become sub-threshold; participant may compensate by improving gesture quality.",
218
+ "url": "http://arxiv.org/html/2309.07289v3/x4.png"
219
+ },
220
+ "5(b)": {
221
+ "figure_path": "2309.07289v3_figure_5(b).png",
222
+ "caption": "Figure 5: Top: Real-time probability feedback window. The horizontal line at 0.50.50.50.5 shows the decision threshold. Bottom: Example of probability values without modification (\u201cVeridical\u201d) and with modification (\u201cModified\u201d) as described in Sec. III-C for several hypothetical values of m\ud835\udc5amitalic_m. m=0.75\ud835\udc5a0.75m=0.75italic_m = 0.75 used for real experiments. Arrows highlight an example case where modification causes the gesture to become sub-threshold; participant may compensate by improving gesture quality.",
223
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/veridical_and_modified.png"
224
+ },
225
+ "6(a)": {
226
+ "figure_path": "2309.07289v3_figure_6(a).png",
227
+ "caption": "Figure 6: Didactic example for class similarity matrices D\ud835\udc37Ditalic_D and scalar class separation measure dsepsubscript\ud835\udc51sepd_{\\textsc{sep}}italic_d start_POSTSUBSCRIPT sep end_POSTSUBSCRIPT. For a chosen subject from the Modified condition, we analyze 3333 of the original 16161616 features (RMS value from electrodes 1, 4, and 7) and a subset of gestures (\u201cLeft\u201d, \u201cDown\u201d, and \u201cRight\u201d). Top row: features from calibration and instructed blocks. Bottom row: features from free games. Left: Scatter plot of 3333-dimensional features, and scalar class separation value. Right: The corresponding class separation matrix.",
228
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/toy-distances-scatter-before.png"
229
+ },
230
+ "6(b)": {
231
+ "figure_path": "2309.07289v3_figure_6(b).png",
232
+ "caption": "Figure 6: Didactic example for class similarity matrices D\ud835\udc37Ditalic_D and scalar class separation measure dsepsubscript\ud835\udc51sepd_{\\textsc{sep}}italic_d start_POSTSUBSCRIPT sep end_POSTSUBSCRIPT. For a chosen subject from the Modified condition, we analyze 3333 of the original 16161616 features (RMS value from electrodes 1, 4, and 7) and a subset of gestures (\u201cLeft\u201d, \u201cDown\u201d, and \u201cRight\u201d). Top row: features from calibration and instructed blocks. Bottom row: features from free games. Left: Scatter plot of 3333-dimensional features, and scalar class separation value. Right: The corresponding class separation matrix.",
233
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/toy-distances-heatmap-before.png"
234
+ },
235
+ "6(c)": {
236
+ "figure_path": "2309.07289v3_figure_6(c).png",
237
+ "caption": "Figure 6: Didactic example for class similarity matrices D\ud835\udc37Ditalic_D and scalar class separation measure dsepsubscript\ud835\udc51sepd_{\\textsc{sep}}italic_d start_POSTSUBSCRIPT sep end_POSTSUBSCRIPT. For a chosen subject from the Modified condition, we analyze 3333 of the original 16161616 features (RMS value from electrodes 1, 4, and 7) and a subset of gestures (\u201cLeft\u201d, \u201cDown\u201d, and \u201cRight\u201d). Top row: features from calibration and instructed blocks. Bottom row: features from free games. Left: Scatter plot of 3333-dimensional features, and scalar class separation value. Right: The corresponding class separation matrix.",
238
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/toy-distances-scatter-after.png"
239
+ },
240
+ "6(d)": {
241
+ "figure_path": "2309.07289v3_figure_6(d).png",
242
+ "caption": "Figure 6: Didactic example for class similarity matrices D\ud835\udc37Ditalic_D and scalar class separation measure dsepsubscript\ud835\udc51sepd_{\\textsc{sep}}italic_d start_POSTSUBSCRIPT sep end_POSTSUBSCRIPT. For a chosen subject from the Modified condition, we analyze 3333 of the original 16161616 features (RMS value from electrodes 1, 4, and 7) and a subset of gestures (\u201cLeft\u201d, \u201cDown\u201d, and \u201cRight\u201d). Top row: features from calibration and instructed blocks. Bottom row: features from free games. Left: Scatter plot of 3333-dimensional features, and scalar class separation value. Right: The corresponding class separation matrix.",
243
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/toy-distances-heatmap-after.png"
244
+ },
245
+ "7": {
246
+ "figure_path": "2309.07289v3_figure_7.png",
247
+ "caption": "Figure 7: Baseline Performance. Left: Accuracy. Right: Scalar class separation measure dsepsubscript\ud835\udc51sepd_{\\textsc{sep}}italic_d start_POSTSUBSCRIPT sep end_POSTSUBSCRIPT. Boxplots show the median and quartiles; dotted lines show the mean. Note the relative difference in subject baseline task performance, visible as a gap in baseline accuracy. This discrepancy (due to random group assignment and low subject number) indicates the need for within-subject normalization, as described in Section III-G. See Section IV-A for statistical analysis.",
248
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/group-baselines-bar.rbf.png"
249
+ },
250
+ "8": {
251
+ "figure_path": "2309.07289v3_figure_8.png",
252
+ "caption": "Figure 8: Overall Changes from Baseline Performance. Left: Change in accuracy. Right: Change in scalar class separation measure dsepsubscript\ud835\udc51sepd_{\\textsc{sep}}italic_d start_POSTSUBSCRIPT sep end_POSTSUBSCRIPT.\nBoxplots show median and quartiles; dotted lines show mean.\nFor each subject, we perform baseline subtraction as described in Section III-G.\nChange in accuracy for the modified group was significantly greater than zero using; see Section IV-B for statistical analysis.",
253
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/group-overall-change-bar.rbf.png"
254
+ },
255
+ "9(a)": {
256
+ "figure_path": "2309.07289v3_figure_9(a).png",
257
+ "caption": "Figure 9: Confusion Matrices averaged across subjects and normalized within each row. No within-subject correction is applied. Class confusion structure is largely similar across groups. Left: Control subject. Middle: Veridical feedback. Right: Modified Feedback.",
258
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/overall_conf_mat.Control.png"
259
+ },
260
+ "9(b)": {
261
+ "figure_path": "2309.07289v3_figure_9(b).png",
262
+ "caption": "Figure 9: Confusion Matrices averaged across subjects and normalized within each row. No within-subject correction is applied. Class confusion structure is largely similar across groups. Left: Control subject. Middle: Veridical feedback. Right: Modified Feedback.",
263
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/overall_conf_mat.Veridical.png"
264
+ },
265
+ "9(c)": {
266
+ "figure_path": "2309.07289v3_figure_9(c).png",
267
+ "caption": "Figure 9: Confusion Matrices averaged across subjects and normalized within each row. No within-subject correction is applied. Class confusion structure is largely similar across groups. Left: Control subject. Middle: Veridical feedback. Right: Modified Feedback.",
268
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/overall_conf_mat.Modified.png"
269
+ },
270
+ "10(a)": {
271
+ "figure_path": "2309.07289v3_figure_10(a).png",
272
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
273
+ "url": "http://arxiv.org/html/2309.07289v3/"
274
+ },
275
+ "10(b)": {
276
+ "figure_path": "2309.07289v3_figure_10(b).png",
277
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
278
+ "url": "http://arxiv.org/html/2309.07289v3/"
279
+ },
280
+ "10(c)": {
281
+ "figure_path": "2309.07289v3_figure_10(c).png",
282
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
283
+ "url": "http://arxiv.org/html/2309.07289v3/"
284
+ },
285
+ "10(d)": {
286
+ "figure_path": "2309.07289v3_figure_10(d).png",
287
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
288
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/overall_distances.rbf.legend.png"
289
+ },
290
+ "10(e)": {
291
+ "figure_path": "2309.07289v3_figure_10(e).png",
292
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
293
+ "url": "http://arxiv.org/html/2309.07289v3/"
294
+ },
295
+ "10(f)": {
296
+ "figure_path": "2309.07289v3_figure_10(f).png",
297
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
298
+ "url": "http://arxiv.org/html/2309.07289v3/"
299
+ },
300
+ "10(g)": {
301
+ "figure_path": "2309.07289v3_figure_10(g).png",
302
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
303
+ "url": "http://arxiv.org/html/2309.07289v3/"
304
+ },
305
+ "10(h)": {
306
+ "figure_path": "2309.07289v3_figure_10(h).png",
307
+ "caption": "Figure 10: Normalized Class Similarity Matrices. Top row: Raw similarities from block four (free games, see section II-C6). Class similarity matrix D\ud835\udc37Ditalic_D is computed for each subject, normalized to [0,1]01[0,1][ 0 , 1 ], and then averaged across subjects in a group. Large values on the diagonal indicate tight clusters for each class. Small values off-diagonal indicate well-separated clusters. Bottom row: Change in similarity matrix from baseline \u0394\u2062D\u0394\ud835\udc37\\Delta Droman_\u0394 italic_D, as described in Equation 10. Positive values indicate pairs that became closer in feature space, compared to baseline; subjects whose structure improved would show positive values on the diagonal and negative values off-diagonal. See Section III-F for further details. Left: Control group. Middle: Veridical feedback. Right: Modified feedback. Upper triangular parts are omitted due to symmetry.",
308
+ "url": "http://arxiv.org/html/2309.07289v3/extracted/5489928/figures/results/overall_distances.baseline_subtracted.rbf.legend.png"
309
+ }
310
+ },
311
+ "validation": true,
312
+ "references": [
313
+ {
314
+ "1": {
315
+ "title": "Dynamic gesture recognition using surface emg signals based on\nmulti-stream residual network.",
316
+ "author": "Zhiwen Yang, Ying Sun Du Jiang, Bo Tao, Xiliang Tong, Guozhang Jiang, Manman\nXu, Juntong Yun, Ying Liu, Baojia Chen, and Jianyi Kong.",
317
+ "venue": "Frontiers in Bioengineering and Biotechnology, 9, 2021.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "2": {
323
+ "title": "Deep learning for emg-based human-machine interaction: a review.",
324
+ "author": "Dezhen Xiong, Daohui Zhang, Xingang Zhao, and Yiwen Zhao.",
325
+ "venue": "IEEE/CAA Journal of Automatica Sinica, 8(3):512\u2013533, 2021.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "3": {
331
+ "title": "Intelligent human-computer interaction based on surface emg gesture\nrecognition.",
332
+ "author": "Jinxian Qi, Guozhang Jiang, Gongfa Li, Ying Sun, and Bo Tao.",
333
+ "venue": "IEEE Access, 7:61378\u201361387, 2019.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "4": {
339
+ "title": "A framework for optimizing co-adaptation in body-machine interfaces.",
340
+ "author": "Dalia De Santis.",
341
+ "venue": "Frontiers in Neurorobotics, 15:40, 2021.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "5": {
347
+ "title": "Concurrent adaptation of human and machine improves simultaneous and\nproportional myoelectric control.",
348
+ "author": "Janne M Hahne, Sven D\u00e4hne, Han-Jeong Hwang, Klaus-Robert M\u00fcller, and\nLucas C Parra.",
349
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, 23(4):618\u2013627, 2015.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "6": {
355
+ "title": "Model and experiments to optimize co-adaptation in a simplified\nmyoelectric control system.",
356
+ "author": "Mathilde Couraud, Daniel Cattaert, Florent Paclet, Pierre-Yves Oudeyer, and\nAymar De Rugy.",
357
+ "venue": "Journal of neural engineering, 15(2):026006, 2018.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "7": {
363
+ "title": "Directional forgetting for stable co-adaptation in myoelectric\ncontrol.",
364
+ "author": "Dennis Yeung, Dario Farina, and Ivan Vujaklija.",
365
+ "venue": "Sensors, 19(9):2203, 2019.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "8": {
371
+ "title": "Co-adaptive control of bionic limbs via unsupervised adaptation of\nmuscle synergies.",
372
+ "author": "Dennis Yeung, Irene Mendez Guerra, Ian Barner-Rasmussen, Emilia Siponen, Dario\nFarina, and Ivan Vujaklija.",
373
+ "venue": "IEEE Transactions on Biomedical Engineering, 2022.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "9": {
379
+ "title": "The influence of training with visual biofeedback on the\npredictability of myoelectric control usability.",
380
+ "author": "Jena L Nawfel, Kevin B Englehart, and Erik J Scheme.",
381
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, 30:878\u2013892, 2022.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "10": {
387
+ "title": "Effect of user practice on prosthetic finger control with an\nintuitive myoelectric decoder.",
388
+ "author": "Agamemnon Krasoulis, Sethu Vijayakumar, and Kianoush Nazarpour.",
389
+ "venue": "Frontiers in neuroscience, page 891, 2019.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "11": {
395
+ "title": "Guiding the training of users with a pattern similarity biofeedback\nto improve the performance of myoelectric pattern recognition.",
396
+ "author": "Etienne de Montalivet, Kevin Bailly, Am\u00e9lie Touillet, No\u00ebl Martinet,\nJean Paysant, and Nathanael Jarrasse.",
397
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, 28(8):1731\u20131741, 2020.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "12": {
403
+ "title": "Quantification of feature space changes with experience during\nelectromyogram pattern recognition control.",
404
+ "author": "Nathan E Bunderson and Todd A Kuiken.",
405
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, 20(3):239\u2013246, 2012.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "13": {
411
+ "title": "User training for pattern recognition-based myoelectric prostheses:\nImproving phantom limb movement consistency and distinguishability.",
412
+ "author": "Michael A Powell, Rahul R Kaliki, and Nitish V Thakor.",
413
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, 22(3):522\u2013532, 2013.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "14": {
419
+ "title": "Exploring the relationship between emg feature space characteristics\nand control performance in machine learning myoelectric control.",
420
+ "author": "Andreas W Franzke, Morten B Kristoffersen, Vinay Jayaram, Corry K van der\nSluis, Alessio Murgia, and Raoul M Bongers.",
421
+ "venue": "IEEE Transactions on Neural Systems and Rehabilitation\nEngineering, 29:21\u201330, 2020.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "15": {
427
+ "title": "Motor learning and control.",
428
+ "author": "Richard Magill and David I Anderson.",
429
+ "venue": "McGraw-Hill Publishing New York, 2010.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "16": {
435
+ "title": "Improving motor performance: Selected aspects of augmented feedback\nin exercise and health.",
436
+ "author": "Benedikt Lauber and Martin Keller.",
437
+ "venue": "European journal of sport science, 14(1):36\u201343, 2014.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "17": {
443
+ "title": "Effectiveness of knowledge of result and knowledge of performance in\nthe learning of a skilled motor activity by healthy young adults.",
444
+ "author": "Dhara A Sharma, Mohamed Faisal Chevidikunnan, Fayaz Rahman Khan, and\nRiziq Allah Gaowgzeh.",
445
+ "venue": "Journal of physical therapy science, 28(5):1482\u20131486, 2016.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "18": {
451
+ "title": "Visual error augmentation for enhancing motor learning and\nrehabilitative relearning.",
452
+ "author": "Yejun Wei, Preeti Bajaj, Robert Scheidt, and James Patton.",
453
+ "venue": "In 9th International Conference on Rehabilitation Robotics,\n2005. ICORR 2005., pages 505\u2013510. IEEE, 2005.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "19": {
459
+ "title": "Augmented feedback presented in a virtual environment accelerates\nlearning of a difficult motor task.",
460
+ "author": "Emanuel Todorov, Reza Shadmehr, and Emilio Bizzi.",
461
+ "venue": "Journal of motor behavior, 29(2):147\u2013158, 1997.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "20": {
467
+ "title": "Neuromotor noise is malleable by amplifying perceived errors.",
468
+ "author": "Christopher J Hasson, Zhaoran Zhang, Masaki O Abe, and Dagmar Sternad.",
469
+ "venue": "PLoS computational biology, 12(8):e1005044, 2016.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "21": {
475
+ "title": "Evidence for hyperbolic temporal discounting of reward in control of\nmovements.",
476
+ "author": "Adrian M Haith, Thomas R Reppert, and Reza Shadmehr.",
477
+ "venue": "Journal of neuroscience, 32(34):11727\u201311736, 2012.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "22": {
483
+ "title": "Persistence of reduced neuromotor noise in long-term motor skill\nlearning.",
484
+ "author": "Meghan E Huber, Nikita Kuznetsov, and Dagmar Sternad.",
485
+ "venue": "Journal of Neurophysiology, 116(6):2922\u20132935, 2016.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "23": {
491
+ "title": "Sensory-motor interactions and the manipulation of movement error.",
492
+ "author": "Pritesh N Parmar, Felix C Huang, and James L Patton.",
493
+ "venue": "In Neurorehabilitation Technology, pages 223\u2013246. Springer,\n2022.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "24": {
499
+ "title": "The role of augmented feedback on motor learning: A systematic\nreview.",
500
+ "author": "Arsalan Moinuddin, Ashish Goel, and Yashendra Sethi.",
501
+ "venue": "Cureus, 13(11), 2021.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "25": {
507
+ "title": "Error augmentation as a possible technique for improving upper\nextremity motor performance after a stroke\u2013a systematic review.",
508
+ "author": "Sharon Israely and Eli Carmeli.",
509
+ "venue": "Topics in stroke rehabilitation, 23(2):116\u2013125, 2016.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "26": {
515
+ "title": "Visuomotor learning enhanced by augmenting instantaneous trajectory\nerror feedback during reaching.",
516
+ "author": "James L Patton, Yejun John Wei, Preeti Bajaj, and Robert A Scheidt.",
517
+ "venue": "PloS one, 8(1):e46466, 2013.",
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "27": {
523
+ "title": "Improving the retention of motor skills after reward-based\nreinforcement by incorporating haptic guidance and error augmentation.",
524
+ "author": "Dylan P Losey, Laura H Blumenschein, and Marcia K O\u2019Malley.",
525
+ "venue": "In 2016 6th IEEE International Conference on Biomedical Robotics\nand Biomechatronics (BioRob), pages 857\u2013863. IEEE, 2016.",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "28": {
531
+ "title": "The median frequency of the surface emg power spectrum in relation to\nmotor unit firing and action potential properties.",
532
+ "author": "Hermanus J Hermens, TAMv Bruggen, Christian TM Baten, WLC Rutten, and HBK Boom.",
533
+ "venue": "Journal of Electromyography and Kinesiology, 2(1):15\u201325, 1992.",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "29": {
539
+ "title": "Pairwise classification and support vector machines.",
540
+ "author": "U Kre\u00dfel.",
541
+ "venue": "In B. Sch\u00f6lkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods \u2014 Support Vector Learning, pages 255\u2013268,\nCambridge, MA, 1999. MIT Press.",
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "30": {
547
+ "title": "CVXPY: A Python-embedded modeling language for convex\noptimization.",
548
+ "author": "Steven Diamond and Stephen Boyd.",
549
+ "venue": "Journal of Machine Learning Research, 17(83):1\u20135, 2016.",
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "31": {
555
+ "title": "Semi-supervised support vector machines.",
556
+ "author": "Kristin Bennett and Ayhan Demiriz.",
557
+ "venue": "Advances in Neural Information processing systems, 11, 1998.",
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "32": {
563
+ "title": "Pytorch: An imperative style, high-performance deep learning library.",
564
+ "author": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory\nChanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al.",
565
+ "venue": "Advances in neural information processing systems, 32, 2019.",
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "33": {
571
+ "title": "Decoupled weight decay regularization.",
572
+ "author": "Ilya Loshchilov and Frank Hutter.",
573
+ "venue": "arXiv preprint arXiv:1711.05101, 2017.",
574
+ "url": null
575
+ }
576
+ },
577
+ {
578
+ "34": {
579
+ "title": "PyQt.",
580
+ "author": "Riverbank Computing.",
581
+ "venue": "https://www.riverbankcomputing.com/software/pyqt/, 1998.",
582
+ "url": null
583
+ }
584
+ },
585
+ {
586
+ "35": {
587
+ "title": "Labgraph.",
588
+ "author": "Jimmy Feng, Pradeep Damodara, George Gensure, Ryan Catoen, and Allen Yin.",
589
+ "venue": "https://github.com/facebookresearch/labgraph, 2021.",
590
+ "url": null
591
+ }
592
+ },
593
+ {
594
+ "36": {
595
+ "title": "Large sample analysis of the median heuristic.",
596
+ "author": "Damien Garreau, Wittawat Jitkrittum, and Motonobu Kanagawa.",
597
+ "venue": "arXiv preprint arXiv:1707.07269, 2017.",
598
+ "url": null
599
+ }
600
+ },
601
+ {
602
+ "37": {
603
+ "title": "User adaptation in long-term, open-loop myoelectric training:\nimplications for emg pattern recognition in prosthesis control.",
604
+ "author": "Jiayuan He, Dingguo Zhang, Ning Jiang, Xinjun Sheng, Dario Farina, and\nXiangyang Zhu.",
605
+ "venue": "Journal of neural engineering, 12(4):046005, 2015.",
606
+ "url": null
607
+ }
608
+ },
609
+ {
610
+ "38": {
611
+ "title": "The effects of error augmentation on learning to walk on a narrow\nbalance beam.",
612
+ "author": "Antoinette Domingo and Daniel P Ferris.",
613
+ "venue": "Experimental brain research, 206(4):359\u2013370, 2010.",
614
+ "url": null
615
+ }
616
+ },
617
+ {
618
+ "39": {
619
+ "title": "Evaluation of robotic training forces that either enhance or reduce\nerror in chronic hemiparetic stroke survivors.",
620
+ "author": "James L Patton, Mary Ellen Stoykov, Mark Kovic, and Ferdinando A Mussa-Ivaldi.",
621
+ "venue": "Experimental brain research, 168(3):368\u2013383, 2006.",
622
+ "url": null
623
+ }
624
+ },
625
+ {
626
+ "40": {
627
+ "title": "Sensorimotor training in virtual reality: a review.",
628
+ "author": "Sergei V Adamovich, Gerard G Fluet, Eugene Tunik, and Alma S Merians.",
629
+ "venue": "NeuroRehabilitation, 25(1):29\u201344, 2009.",
630
+ "url": null
631
+ }
632
+ },
633
+ {
634
+ "41": {
635
+ "title": "The primate striatum: neuronal activity in relation to spatial\nattention versus motor preparation.",
636
+ "author": "Driss Boussaoud and Imane Kermadi.",
637
+ "venue": "European Journal of Neuroscience, 9(10):2152\u20132168, 1997.",
638
+ "url": null
639
+ }
640
+ },
641
+ {
642
+ "42": {
643
+ "title": "A review of differences between basal ganglia and cerebellar control\nof movements as revealed by functional imaging studies.",
644
+ "author": "Markus Jueptner and Cornelius Weiller.",
645
+ "venue": "Brain: a journal of neurology, 121(8):1437\u20131449, 1998.",
646
+ "url": null
647
+ }
648
+ },
649
+ {
650
+ "43": {
651
+ "title": "Error amplification to promote motor learning and motivation in\ntherapy robotics.",
652
+ "author": "Navid Shirzad and HF Machiel Van der Loos.",
653
+ "venue": "In 2012 Annual International Conference of the IEEE Engineering\nin Medicine and Biology Society, pages 3907\u20133910. IEEE, 2012.",
654
+ "url": null
655
+ }
656
+ }
657
+ ],
658
+ "url": "http://arxiv.org/html/2309.07289v3"
659
+ }
20240322/2309.09510v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2309.11639v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2309.13456v2.json ADDED
@@ -0,0 +1,298 @@
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "An Optimal Control Framework for Influencing Human Driving Behavior in Mixed-Autonomy Traffic",
3
+ "abstract": "As autonomous vehicles (AVs) become increasingly prevalent, their interaction with human drivers presents a critical challenge.\nCurrent AVs lack social awareness, causing behavior that is often awkward or unsafe.\nTo combat this, social AVs, which are proactive rather than reactive in their behavior, have been explored in recent years.\nWith knowledge of robot-human interaction dynamics, a social AV can influence a human driver to exhibit desired behaviors by strategically altering its own behaviors.\nIn this paper, we present a novel framework for achieving human influence.\nThe foundation of our framework lies in an innovative use of control barrier functions to formulate the desired objectives of influence as constraints in an optimal control problem.\nThe computed controls gradually push the system state toward satisfaction of the objectives, e.g. slowing the human down to some desired speed.\nWe demonstrate the proposed framework\u2019s feasibility in a variety of scenarios related to car-following and lane changes, including multi-robot and multi-human configurations.\nIn two case studies, we validate the framework\u2019s effectiveness when applied to the problems of traffic flow optimization and aggressive behavior mitigation.\nGiven these results, the main contribution of our framework is its versatility in a wide spectrum of influence objectives and mixed-autonomy configurations.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Autonomous vehicles (AVs) are increasingly popular on today\u2019s roadways, largely as a result of great strides in perception, planning, and control algorithms in recent years [1 ###reference_b1###].\nHowever, these systems remain far from perfect, particularly because current AVs struggle to deal with human drivers.\nMany existing works treat human drivers as dynamic obstacles rather than decision-making agents [2 ###reference_b2###], and this assumption often yields awkward or unsafe behavior.\nAddressing this problem is crucial, as AVs and human drivers will likely share the roads for many years to come [3 ###reference_b3###].\nRecently, the idea of a social AV has emerged as a potential solution to this problem, with human-AV social interactions being studied extensively [4 ###reference_b4###].\nThe social AV is proactive rather than reactive: it leverages models of human behavior to inform its own decision-making.\nIn particular, a social AV might strategically alter its own behavior so as to influence a human driver to exhibit some desired behavior.\nTo accomplish this, the AV must exploit models not only of the human\u2019s driving behavior but also of the change in the human\u2019s behavior with respect to its own behavior.\nThe concept of social influence has numerous applications in mixed-autonomy settings.\nIn congested traffic, AVs can orchestrate cooperative merging and lane-changing maneuvers, alleviating traffic bottlenecks.\nIn emergency situations, AVs can guide human drivers to take evasive actions that mitigate collision risks.\nHuman influence for general human-robot collaboration is studied in [5 ###reference_b5###], and applications to traffic flow optimization have been widely explored [6 ###reference_b6###, 7 ###reference_b7###].\nFurthermore, a single-agent control framework is provided in [8 ###reference_b8###], and [9 ###reference_b9###, 10 ###reference_b10###] provide frameworks for single-agent influence with simultaneous probing, i.e. 
learning human driving parameters by observing behavior.\nMore generally, the concept of a socially aware AV has been studied extensively.\nAwareness is a prerequisite for influence: a social AV must first understand human interactions before attempting to alter human behavior.\nWhile human influence is a very active procedure, awareness is more passive.\nThe socially aware AV is cognizant of its effects on human behaviors and it may consider these effects during planning, but it does not necessarily seek to apply this knowledge to altering these behaviors.\nSocial awareness functions as a significant first step toward full human-AV social interaction in mixed-autonomy settings, especially given the extensive work done in learning human driving behavior [11 ###reference_b11###, 12 ###reference_b12###].\nSome studies have achieved superior AV control by considering human car-following [13 ###reference_b13###] and lane-changing [14 ###reference_b14###] behaviors.\nThese ideas have also been employed in designing AV control frameworks [15 ###reference_b15###].\n###figure_1### Control barrier functions (CBFs) are a popular tool for safe AV control.\nIn particular, CBFs have been employed in problems of cooperative AV merging [16 ###reference_b16###, 17 ###reference_b17###] and lane-changing [18 ###reference_b18###].\nFor a thorough overview of CBFs and their applications, see [19 ###reference_b19###].\nIn comparison to existing methods of human influence that traditionally use game-theoretic formulations [20 ###reference_b20###, 21 ###reference_b21###], a CBF-based approach offers guarantees of safety and convergence, resulting in more robust and predictable policies.\nTo the best of the authors\u2019 knowledge, the current literature on human influence is restricted either to localized objectives (e.g. following, merging) or to single-agent configurations.\nIn this work, we aim to fill this gap.\nOur main contributions are: (a) an optimal control framework for influencing human driving behavior using CBFs, with applicability to (b) various objectives of influence and (c) multi-robot and multi-human scenarios.\nSection II ###reference_### formulates the problem of human influence.\nSection III ###reference_### presents our framework.\nSection IV ###reference_### discusses the results of principal experiments using our framework.\nSection V ###reference_### discusses the results of two case studies involving our framework.\nFinally, Section VI ###reference_### concludes our study and presents future directions."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Problem Formulation",
15
+ "text": "We are given a group of human-driven cars, robot cars, and non-interactive background cars that travel along a multi-lane highway with lanes.\nWhile human-driven cars react to the behavior of robot cars, background cars do not.\nOur objective is to compute robot car controls at each timestep that influence the human-driven cars to exhibit some desired behaviors and minimize some cost function .\nIn a state space , let denote the human-driven car positions relative to some fixed starting point along their respective lanes, their velocities in their lanes, and their accelerations in their lanes.\nSimilarly, let denote positions of the robot cars along their lanes, their velocities, and their accelerations, and let denote positions of the background cars along their lanes, their velocities, and their accelerations.\nLet denote the position of the -th human-driven car , the position of the -th robot car , and the position of the -th background car .\nLet denote the current lane of human-driven car , and represent the current lanes of all human-driven cars as .\nDefine for the current lanes of the robot cars and for the current lanes of the background cars in a similar fashion.\nWe will assume that robot cars and background cars maintain their lanes.\nLet us assume that human-driven cars follow double integrator dynamics that can be represented as\nwhere is the system state and each has some known parameters that define human driver \u2019s behavior [22 ###reference_b22###].\nLet us assume that the robot cars are acceleration-controlled.\nTheir dynamics can be posed as\nLet us assume that the background cars travel with constant velocities.\nThis is a reasonable assumption because, in the absence of external influence, cars generally maintain their speed in highway settings.\nThen we have\nThe human-driven cars have lane-change controls .\nLane changing is a discrete event; lanes are updated at each timestep as\nThis definition encodes the actions {left, stay, right}.\nFor human-driven car we have\nwhere is a lane-change safety function, is a lane-change incentive function, and both depend again on parameters .\nFor example, could be a function of the space available between cars in the adjacent lane, and could be a function of the difference in velocity or acceleration between the human\u2019s current lane and the adjacent lane.\nThe symbol exists here to denote that a lane change may occur in either direction.\nWe denote the human-driven cars\u2019 collective state as .\nWe denote the robot cars\u2019 collective state as .\nWe denote the background cars\u2019 collective state as .\nThe total state of our system is denoted as .\nWe can formulate an arbitrary objective of influence as a constraint function such that implies that the objective is satisfied.\nFor example, we can express influencing an upper bound on human-driven car \u2019s velocity as .\nBecause lane changes are discrete events, we will utilize (5 ###reference_###) to derive constraints that influence lane changes, rather than formulating constraints directly in terms of .\nFor example, if is a function of \u2019s velocity, then our constraint will be directly on \u2019s velocity.\nFor simultaneous influence objectives, we can derive constraints .\nThen given some cost function , we can pose the following problem to solve for optimal robot controls that achieve human influence:\nIn the next section, we further derive a control constraint that yields satisfaction of the state constraint ."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III Optimal Control Framework",
21
+ "text": "In this section, we present the optimal control framework in detail.\nWe first describe the intuition motivating the framework\u2019s design.\nWe then review and define some key properties of time derivatives.\nNext, we outline the procedure by which constraints on the robot controls are derived.\nFinally, we pose the general optimal control problem for achieving human influence."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "III-A Preliminary: Control Barrier Functions",
27
+ "text": "In [23 ###reference_b23###], CBFs are employed to compute controls for a group of dog robots that herd a flock of sheep agents.\nThe sheep aim to reach some goal location, and the dogs must prevent the sheep from breaching some spatial protected zone.\nCrucially, the dog robots exploit knowledge of the dog-sheep interaction dynamics to achieve this objective.\nSimilarly, here we have a group of human-driven cars that exhibit some defined behavior, and a group of surrounding robot cars must influence this behavior with the objective of enforcing some abstract \u201cprotected zone.\u201d\nWe claim that CBFs are a useful tool for enforcing this protected zone.\nUsing CBFs, we define our protected zone as a barrier function where implies safety in our system, meaning satisfaction of our influence objective.\nTo push our system toward safety over time, we can enforce , where is a design parameter and .\nTo derive an explicit constraint on robot controls, we might compute further time derivatives of and express our constraint in terms of or .\nIn general, an -th order constraint is given as where is the differentiation operator.\nWe expand this to where denotes the -th time derivative of .\nIt remains that our system tends toward , and thus the protected zone is enforced."
28
+ },
29
+ {
30
+ "section_id": "3.2",
31
+ "parent_section_id": "3",
32
+ "section_name": "III-B Preliminary: Time Derivative Properties",
33
+ "text": "In general, we can make use of the following discrete approximations for computing time derivatives:\nwhere is the discretization timestep.\nWe refer to a human-driven car as directly influenced by a robot car if is dependent on , i.e. .\nConsider a human-driven car that is directly influenced by a robot car .\nThen when computing the time derivative of , we have\nwhere .\nNotice that this gives us an explicit relationship between human actions and robot controls."
34
+ },
35
+ {
36
+ "section_id": "3.3",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-C Constraint Derivation",
39
+ "text": "Let denote the -th time derivative of .\nFor our overall state constraint , recall that each individual -th order constraint is defined as and encodes some desired high-level objective such that iff the objective is satisfied.\nThus, can be treated as a barrier function , as outlined in Section III-A ###reference_###.\nNote that not all terms in are necessarily nonzero, e.g. an objective may only involve a subset of the cars in .\nAlso note that we require the same relative degree for all terms, although this is sometimes aided by approximating higher-order terms using lower-order terms.\nWe compute time derivatives of to obtain\nwhere is the set of all such that human-driven car is directly influenced by some robot car , and is the set of all such that human-driven car is not directly influenced by any robot car .\nNotice that we no longer have a term, since we assume constant velocities for background cars.\nWe first leverage (III-B ###reference_###) to convert terms to terms, yielding the equivalent form .\nThen, we leverage (7 ###reference_###) to convert terms to terms, and (8 ###reference_###) to convert terms to terms, yielding a more usable function .\nFinally, we can express our initial objective as the following CBF constraint:\nUsing CBFs, we have now effectively translated our original state constraint into a control constraint ."
40
+ },
41
+ {
42
+ "section_id": "3.4",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-D Optimization Problem",
45
+ "text": "We use the above process to derive linear constraints:\nWe can then adapt (6 ###reference_###) to pose the following optimization problem to solve for controls that push the system toward , i.e. satisfaction of the objectives of influence:\nThe problem is feasible if (12 ###reference_###) is feasible (i.e. the half-spaces created by intersect) and (12 ###reference_###) intersects with the control limits.\nIn this paper, we assume that (13 ###reference_###) is always feasible.\nTo ensure such feasibility in practice, one must solve the synthesis problem, as introduced in [24 ###reference_b24###, 25 ###reference_b25###]."
46
+ },
47
+ {
48
+ "section_id": "4",
49
+ "parent_section_id": null,
50
+ "section_name": "IV Principal Experiments",
51
+ "text": "In this section, we verify the feasibility of our framework under various objectives by simulating nine low-level scenarios.\nWe also provide an example constraint derivation for one scenario to illustrate the process."
52
+ },
53
+ {
54
+ "section_id": "4.1",
55
+ "parent_section_id": "4",
56
+ "section_name": "IV-A Human Behavior Models",
57
+ "text": "Here we specify the behavior models used for human driver control.\nFor longitudinal control (i.e., car-following), we define each using the intelligent driver model (IDM) [22 ###reference_b22###].\nThus, we have\nwith\nwhere is the maximum acceleration of , is its maximum deceleration, is its desired velocity, is its desired following distance, is its current following distance, and is the current difference between its own velocity and the velocity of the preceding car.\nFor lateral control (i.e., lane-changing), we define the safety function for as\nwhere is the position of the car in front of in the adjacent lane, is the position of the car behind in the adjacent lane, and is some minimum distance threshold desired by .\nWe define the incentive function for as\nwhere is the velocity of the car in front of in the adjacent lane, and is some minimum threshold desired by for the velocity difference between the two cars.\nTogether, these two criteria encode that a human driver changes lanes if there is ample space in the adjacent lane and a potential increase in velocity following the lane change.\nThis is similar to real-world highway lane change behavior, where humans seek to increase their speed up to some desired value, as can be seen in IDM."
58
+ },
59
+ {
60
+ "section_id": "4.2",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-B Experimental Scenarios",
63
+ "text": "Here we introduce nine low-level scenarios to demonstrate the feasibility of our framework under various objectives.\nWe examine three single-human single-robot scenarios (S1-S3), three single-human multi-robot scenarios (SM1-SM3), and three multi-human multi-robot scenarios (M1-M3).\nFor all scenarios, we have , car length , and each robot car has bounded velocity and bounded acceleration .\nFor human behavior, we have , , for all , and humans follow the normal driving IDM parameters given in [26 ###reference_b26###].\nFor simplicity, we omit the lane-change incentive criterion in scenarios M1-M3.\nWe use the cost function to minimize control effort in our optimization.\nFig. 2 ###reference_### details S1-S3 and SM1-SM3 and their simulation results, Fig. 3 ###reference_### details M1-M3 and their simulation results, and Fig. 4 ###reference_### visualizes M1.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### ###figure_8### ###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### ###figure_14### ###figure_15### ###figure_16### ###figure_17### ###figure_18### ###figure_19### ###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24### ###figure_25### ###figure_26### ###figure_27### ###figure_28### ###figure_29### ###figure_30### ###figure_31### ###figure_32### ###figure_33### ###figure_34### ###figure_35###"
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-C Example Derivation",
69
+ "text": "Here we provide an example derivation of the optimal control problem for scenario M1.\nThis includes the human driver lane change\u2019s forward and backward safety criteria.\nFirst, we wish to enforce a minimum distance between and to allow for a lane change.\nWe express this as\nThe time derivatives of are as follows:\nNotice that contains but not , so we take an additional time derivative of to obtain a term in .\nThis allows us to exploit the time derivative properties.\nWe apply (III-B ###reference_###) to to obtain .\nIntuitively, we have now translated human actions into robot controls.\nNow, we just need to standardize everything to terms rather than , so we apply (8 ###reference_###) to to obtain\nFinally, we have an explicit function of robot controls and , so we can express our CBF constraint as\nNext, we wish to enforce a minimum distance between and to allow for a lane change.\nWe express this as\nNotice that here, is not a function of the robot state.\nThe time derivatives of are as follows:\nWe reached to obtain , so now we can again exploit the time derivative properties.\nWe apply (III-B ###reference_###) to to obtain\nWe now have an explicit function of and , so we can express our CBF constraint as\nWe have derived the linear constraints\nWe can formulate this as\nwhere\nwith and .\nFinally, we can pose the following QP to solve for the robot controls at each timestep:"
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Case Studies",
75
+ "text": "In this section, we quantify the effectiveness of our framework and demonstrate its real-world applicability to two high-level objectives."
76
+ },
77
+ {
78
+ "section_id": "5.1",
79
+ "parent_section_id": "5",
80
+ "section_name": "Traffic Flow Optimization",
81
+ "text": "Here we consider a three-lane highway setting where each lane initially contains between 1 and 3 robot cars and between 1 and 3 human-driven cars.\nThe objective of the robot cars is to produce an increase in traffic flow by strategically influencing human-driven car lane changes.\nIn particular, we use one-dimensional -means clustering on the humans\u2019 desired velocities to assign each car to a lane, where and each resulting cluster represents a group of cars that should travel in the same lane.\nThe rationale for this lane assignment strategy is that the total traffic flow across a group of lanes is improved when the cars in each lane have similar desired velocities.\nOnce all human-driven cars have been assigned new lanes, each car is influenced to change lanes from its current lane to its assigned lane by the surrounding robot cars (up to ) using our optimal control framework.\nHuman-driven cars are selected individually for influence by order of index.\nAfter all lane changes, each robot car follows IDM control with a desired velocity that matches the highest desired velocity of any human-driven car in its lane.\nThere is an equal likelihood of each number of robot cars and human-driven cars in each lane, and a given car has an equal likelihood of being a robot car and a human-driven car.\nFor each human-driven car , we generate a desired velocity and a lane-change velocity threshold .\nWe introduce a noise to each human\u2019s desired following distance, to the desired velocity, and to the desired acceleration, deceleration, and time headway.\nEach human\u2019s desired lane-change space threshold is equal to the desired following distance.\nWe introduce a noise to the controls of the human-driven cars.\nWe use all cars\u2019 average velocity and the average difference between the humans\u2019 desired and actual velocities as metrics for traffic flow, where an increase in the former and a decrease in the latter indicate an improvement in traffic flow.\nWe simulated 100 trials of this scenario under these conditions, and compared the traffic flow performance prior to and following the human influence maneuvers, at and , respectively.\nFig. 8 ###reference_### shows the average velocity vs. time.\nDependent -tests for paired samples showed a significant increase in average velocity () and a significant decrease in the difference between desired and actual velocity () following the human influence maneuvers.\nThis implies that our framework is effective in improving mixed-autonomy highway traffic flow."
82
+ },
83
+ {
84
+ "section_id": "5.2",
85
+ "parent_section_id": "5",
86
+ "section_name": "Aggressive Behavior Mitigation",
87
+ "text": "Here we consider a single-lane highway setting where a robot car is preceded by a background car and followed by a human-driven car exhibiting aggressive following behavior.\nThis aggressive behavior is encoded as a close following distance, high velocity, and large acceleration and deceleration.\nThe robot car aims to mitigate this behavior by influencing a lower bound on the human-driven car\u2019s following distance, an upper bound on its velocity, and a lower bound and upper bound on its acceleration.\nWe formulate each of these intentions as CBF constraints using our framework.\nWe adopt the IDM parameters provided in [26 ###reference_b26###] to define the human\u2019s aggressive following behavior and the desired normal behavior.\nWe introduce a noise to the human\u2019s desired following distance, to its desired velocity, and to its desired acceleration, deceleration, and time headway.\nWe also introduce a noise to the controls of the human and background cars.\nWe fix the initial robot car position , and generate an initial human-driven car position and background car position .\nWe also generate initial velocities .\nWe use the average magnitude of jerk as a metric for aggressive following behavior.\nHigher values of average jerk magnitude were shown in [27 ###reference_b27###] to correlate with higher levels of aggression.\nWe simulated 100 trials of this scenario under these conditions, and compared the average magnitude of jerk from to with that of a control condition where the robot car instead follows IDM control with normal driving parameters.\nFig. 8 ###reference_### shows human following distance vs. time, Fig. 8 ###reference_### shows human velocity vs. time, and Fig. 8 ###reference_### shows human acceleration vs. time.\nA dependent -test for paired samples showed a significant decrease in average jerk magnitude when human influence was employed ().\nThis implies that our framework is effective in mitigating aggressive human following behavior in highway settings.\n###figure_36### ###figure_37### ###figure_38### ###figure_39###"
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Conclusion",
+ "text": "In this work, we presented a novel optimal control framework for influencing human driving behavior in mixed-autonomy traffic.\nWe leveraged control barrier functions to formulate the problem of human influence as a constrained optimization problem on the controls of the surrounding robot cars.\nWe demonstrated our framework\u2019s applicability to various objectives and configurations, including multi-robot and multi-human scenarios.\nWe validated its effectiveness in the two real-world objectives of traffic flow optimization and aggressive behavior mitigation.\nThrough our framework, we advance the state of the art by contributing superior versatility in the autonomous influence of human driving behavior.\nIn future work, we intend to expand our framework\u2019s compatibility with various human behavior models and further study its flexibility and scalability."
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1": {
+ "figure_path": "2309.13456v2_figure_1.png",
+ "caption": "Figure 1: A robot car is being tailgated by a human-driven car. How can the robot influence the human to increase following distance or to change lanes?",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/human_influence_example.png"
+ },
+ "2(a)": {
+ "figure_path": "2309.13456v2_figure_2(a).png",
+ "caption": "(a) Illust. (S1)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/s1_illust.png"
+ },
+ "2(b)": {
+ "figure_path": "2309.13456v2_figure_2(b).png",
+ "caption": "(b) Fol. dist. (S1)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_11_p.png"
+ },
+ "2(c)": {
+ "figure_path": "2309.13456v2_figure_2(c).png",
+ "caption": "(c) Velocity (S1)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_11_v.png"
+ },
+ "2(d)": {
+ "figure_path": "2309.13456v2_figure_2(d).png",
+ "caption": "(d) Illust. (S2)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/s2_illust.png"
+ },
+ "2(e)": {
+ "figure_path": "2309.13456v2_figure_2(e).png",
+ "caption": "(e) Fol. dist. (S2)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_12_p.png"
+ },
+ "2(f)": {
+ "figure_path": "2309.13456v2_figure_2(f).png",
+ "caption": "(f) Velocity (S2)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_12_v.png"
+ },
+ "2(g)": {
+ "figure_path": "2309.13456v2_figure_2(g).png",
+ "caption": "(g) Illust. (S3)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/s3_illust.png"
+ },
+ "2(h)": {
+ "figure_path": "2309.13456v2_figure_2(h).png",
+ "caption": "(h) Rel. pos. (S3)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_13_p.png"
+ },
+ "2(i)": {
+ "figure_path": "2309.13456v2_figure_2(i).png",
+ "caption": "(i) Velocity (S3)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_13_v.png"
+ },
+ "2(j)": {
+ "figure_path": "2309.13456v2_figure_2(j).png",
+ "caption": "(j) Illust. (SM1)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/sm1_illust.png"
+ },
+ "2(k)": {
+ "figure_path": "2309.13456v2_figure_2(k).png",
+ "caption": "(k) Rel. pos. (SM1)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_21_p.png"
+ },
+ "2(l)": {
+ "figure_path": "2309.13456v2_figure_2(l).png",
+ "caption": "(l) Velocity (SM1)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_21_v.png"
+ },
+ "2(m)": {
+ "figure_path": "2309.13456v2_figure_2(m).png",
+ "caption": "(m) Illust. (SM2)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/sm2_illust.png"
+ },
+ "2(n)": {
+ "figure_path": "2309.13456v2_figure_2(n).png",
+ "caption": "(n) Rel. pos. (SM2)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_22_p.png"
+ },
+ "2(o)": {
+ "figure_path": "2309.13456v2_figure_2(o).png",
+ "caption": "(o) Velocity (SM2)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_22_v.png"
+ },
+ "2(p)": {
+ "figure_path": "2309.13456v2_figure_2(p).png",
+ "caption": "(p) Illust. (SM3)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/sm3_illust.png"
+ },
+ "2(q)": {
+ "figure_path": "2309.13456v2_figure_2(q).png",
+ "caption": "(q) Rel. pos. (SM3)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_23_p.png"
+ },
+ "2(r)": {
+ "figure_path": "2309.13456v2_figure_2(r).png",
+ "caption": "(r) Velocity (SM3)\nFigure 2: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios S1-S3 and SM1-SM3. In illustrations, red boxes are robot cars, blue boxes are human-driven cars, and green boxes are background cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_23_v.png"
+ },
+ "3(a)": {
+ "figure_path": "2309.13456v2_figure_3(a).png",
+ "caption": "(a) Illustration (M1)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m1_illust.png"
+ },
+ "3(b)": {
+ "figure_path": "2309.13456v2_figure_3(b).png",
+ "caption": "(b) H1subscript\ud835\udc3b1H_{1}italic_H start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-relative position (M1)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_31_p1.png"
+ },
+ "3(c)": {
+ "figure_path": "2309.13456v2_figure_3(c).png",
+ "caption": "(c) H2subscript\ud835\udc3b2H_{2}italic_H start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-relative position (M1)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_31_p2.png"
+ },
+ "3(d)": {
+ "figure_path": "2309.13456v2_figure_3(d).png",
+ "caption": "(d) Velocity (M1)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_31_v.png"
+ },
+ "3(e)": {
+ "figure_path": "2309.13456v2_figure_3(e).png",
+ "caption": "(e) Illustration (M2)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m2_illust.png"
+ },
+ "3(f)": {
+ "figure_path": "2309.13456v2_figure_3(f).png",
+ "caption": "(f) H1subscript\ud835\udc3b1H_{1}italic_H start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-relative position (M2)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_32_p1.png"
+ },
+ "3(g)": {
+ "figure_path": "2309.13456v2_figure_3(g).png",
+ "caption": "(g) H2subscript\ud835\udc3b2H_{2}italic_H start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-relative position (M2)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_32_p2.png"
+ },
+ "3(h)": {
+ "figure_path": "2309.13456v2_figure_3(h).png",
+ "caption": "(h) Velocity (M2)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_32_v.png"
+ },
+ "3(i)": {
+ "figure_path": "2309.13456v2_figure_3(i).png",
+ "caption": "(i) Illustration (M3)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m3_illust.png"
+ },
+ "3(j)": {
+ "figure_path": "2309.13456v2_figure_3(j).png",
+ "caption": "(j) H1subscript\ud835\udc3b1H_{1}italic_H start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT-relative position (M3)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_33_p1.png"
+ },
+ "3(k)": {
+ "figure_path": "2309.13456v2_figure_3(k).png",
+ "caption": "(k) H2subscript\ud835\udc3b2H_{2}italic_H start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-relative position (M3)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_33_p2.png"
+ },
+ "3(l)": {
+ "figure_path": "2309.13456v2_figure_3(l).png",
+ "caption": "(l) Velocity (M3)\nFigure 3: Illustrations, human-relative position vs. time graphs, and velocity vs. time graphs for scenarios M1-M3. In illustrations, red boxes are robot cars, and blue boxes are human-driven cars. In graphs, black dotted horizontal lines denote lower or upper bounds on the value via the influence objective, and gray dotted vertical lines denote the time at which a lane change occurs.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/cbf_33_v.png"
+ },
+ "4(a)": {
+ "figure_path": "2309.13456v2_figure_4(a).png",
+ "caption": "(a) t=0\ud835\udc610t=0italic_t = 0 ss\\mathrm{s}roman_s\nFigure 4: Visualization of scenario M1, where two adjacent lanes each contain a robot car followed by a human-driven car, and the objective is to influence the human in the right lane to merge in between the two cars in the left lane. Red boxes are robot cars and blue boxes are human-driven cars. Overlaying each box is the car\u2019s velocity.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m1_vis1.png"
+ },
+ "4(b)": {
+ "figure_path": "2309.13456v2_figure_4(b).png",
+ "caption": "(b) t=10\ud835\udc6110t=10italic_t = 10 ss\\mathrm{s}roman_s\nFigure 4: Visualization of scenario M1, where two adjacent lanes each contain a robot car followed by a human-driven car, and the objective is to influence the human in the right lane to merge in between the two cars in the left lane. Red boxes are robot cars and blue boxes are human-driven cars. Overlaying each box is the car\u2019s velocity.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m1_vis2.png"
+ },
+ "4(c)": {
+ "figure_path": "2309.13456v2_figure_4(c).png",
+ "caption": "(c) t=20\ud835\udc6120t=20italic_t = 20 ss\\mathrm{s}roman_s\nFigure 4: Visualization of scenario M1, where two adjacent lanes each contain a robot car followed by a human-driven car, and the objective is to influence the human in the right lane to merge in between the two cars in the left lane. Red boxes are robot cars and blue boxes are human-driven cars. Overlaying each box is the car\u2019s velocity.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m1_vis3.png"
+ },
+ "4(d)": {
+ "figure_path": "2309.13456v2_figure_4(d).png",
+ "caption": "(d) t=30\ud835\udc6130t=30italic_t = 30 ss\\mathrm{s}roman_s\nFigure 4: Visualization of scenario M1, where two adjacent lanes each contain a robot car followed by a human-driven car, and the objective is to influence the human in the right lane to merge in between the two cars in the left lane. Red boxes are robot cars and blue boxes are human-driven cars. Overlaying each box is the car\u2019s velocity.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/m1_vis4.png"
+ },
+ "5(a)": {
+ "figure_path": "2309.13456v2_figure_5(a).png",
+ "caption": "Figure 5: Average velocity vs. time for traffic flow optimization.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/flow_avg_vel.png"
+ },
+ "5(b)": {
+ "figure_path": "2309.13456v2_figure_5(b).png",
+ "caption": "Figure 5: Average velocity vs. time for traffic flow optimization.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/aggression_foldist.png"
+ },
+ "5(c)": {
+ "figure_path": "2309.13456v2_figure_5(c).png",
+ "caption": "Figure 5: Average velocity vs. time for traffic flow optimization.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/aggression_vel.png"
+ },
+ "5(d)": {
+ "figure_path": "2309.13456v2_figure_5(d).png",
+ "caption": "Figure 5: Average velocity vs. time for traffic flow optimization.",
+ "url": "http://arxiv.org/html/2309.13456v2/extracted/5488150/img/aggression_acc.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2309.13456v2"
+ }
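The lane-assignment step described in section 5.1 of the file above (one-dimensional k-means on the humans' desired velocities, one cluster per lane) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not code from the paper or this dataset; the helper name assign_lanes and the fastest-lane-first relabeling convention are hypothetical choices.

```python
# Minimal illustrative sketch (not from the paper): 1-D k-means on desired
# velocities with k = number of lanes, so each lane groups similar speeds.
import numpy as np
from sklearn.cluster import KMeans

def assign_lanes(desired_velocities, n_lanes=3, seed=0):
    v = np.asarray(desired_velocities, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_lanes, n_init=10, random_state=seed).fit(v)
    # Relabel clusters so lane 0 holds the fastest group -- an arbitrary
    # convention chosen for this sketch, not taken from the paper.
    order = np.argsort(-km.cluster_centers_.ravel())
    lane_of_cluster = {c: lane for lane, c in enumerate(order)}
    return np.array([lane_of_cluster[c] for c in km.labels_])

# Six human-driven cars with heterogeneous desired velocities (m/s):
print(assign_lanes([33.0, 24.5, 29.0, 25.0, 32.0, 28.5]))  # e.g. [0 2 1 2 0 1]
```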
20240322/2309.13950v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2309.14913v2.json ADDED
@@ -0,0 +1,106 @@
+ {
+ "title": "Robustness of the Random Language Model",
+ "abstract": "The Random Language Model (De Giuli 2019) DeGiuli (2019) is an ensemble of stochastic context-free grammars, quantifying the syntax of human and computer languages. The model suggests a simple picture of first language learning as a type of annealing in the vast space of potential languages. In its simplest formulation, it implies a single continuous transition to grammatical syntax, at which the symmetry among potential words and categories is spontaneously broken. Here this picture is scrutinized by considering its robustness against extensions of the original model, and trajectories through parameter space different from those originally considered. It is shown here that (i) the scenario is robust to explicit symmetry breaking, an inevitable component of learning in the real world; and (ii) the transition to grammatical syntax can be encountered by fixing the deep (hidden) structure while varying the surface (observable) properties. It is also argued that the transition becomes a sharp thermodynamic transition in an idealized limit. Moreover, comparison with human data on the clustering coefficient of syntax networks suggests that the observed transition is equivalent to that normally experienced by children at age 24 months. The results are discussed in light of theory of first-language acquisition in linguistics, and recent successes in machine learning.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Brief review of the Random Language Model",
+ "text": "To establish notation, here we briefly review the RLM. Without loss of generality CFGs are assumed to be in Chomsky normal form, so that rules either take one hidden symbol to two hidden symbols , or one hidden symbol to an observable one, . These are quantified by weights and , respectively. For a sentence with derivation on the tree , define as the (unnormalized) usage frequency of rule and as the (unnormalized) usage frequency of . Let the number of hidden symbols be , and the number of observable symbols be . Then consider the energy function\nThe Boltzmann weight counts derivations with a multiplicative weight for each usage of the interior rule , and weight for each usage of the surface rule . We furthermore assign a weight to the tree itself: if each hidden node gets a weight and each surface node gets a weight , then a rooted tree with leaves gets a weight . The relative probability controls the size of trees; as in Ref.DeGiuli (2019 ###reference_b1###) we fix and set where to get large trees.\nGiven the grammar, the probability of a derivation is then\nNote that although we write the weight of a derivation in a Boltzmann-like form, the actual form of the weight is simply the conventional definition of a stochastic context-free grammar.\nThe RLM is an ensemble of CFGs. In Ref.DeGiuli (2019 ###reference_b1###) it was argued that a generic model will have lognormally distributed weights, so that the probability of a grammar is\nwhere and are defined by\nand . Here and . It is straightforward to show that and satisfy\nwhere denotes a grammar average and , .\nTwo arguments were given in Ref.DeGiuli (2019 ###reference_b1###) for the lognormal distribution: first, since languages must be comprehensible to a variety of speakers at any moment, they cannot evolve rapidly. If they evolve slowly under independent multiplicative adjustments to the weights, then a lognormal distribution follows by the multiplicative version of the central limit theorem Sornette and Cont (1997 ###reference_b10###). Indeed the lognormal distribution is ubiquitous for the distributions of positive random variables, such as transition weights, in real-world systems Broido and Clauset (2019 ###reference_b11###). In this interpretation, and are general control parameters for the ensemble.\nA second independent argument is to assume that and are the relevant quantities to characterize grammars in the course of learning; then a lognormal follows by a maximum entropy argument. The quantities and could be motivated a priori as appropriate measures of heterogeneity, or a posteriori by the observation that they control the Shannon entropy of sequences (along with and ). In this interpretation, and are Lagrange multipliers that enforce the expected values of and .\n###figure_1### Let us show how can be scaled out of the problem. Consider the grammar and derivation average of a generic observable of a derivation :\nMaking a change of variable , we get\nwhere are defined as in (4 ###reference_###) with the replacement , . It follows that the parameters , , and do not affect observables independently, but only in the ratios and , up to the other trivial modifications. In particular, increasing temperature is equivalent to increasing and . For this reason, in Ref.DeGiuli (2019 ###reference_b1###) these parameters were called deep and surface temperatures, respectively. From now on we set .\nThe model (3 ###reference_###) was called in Ref.DeGiuli (2019 ###reference_b1###) the Random Language Model (RLM). 
The properties of the sentences as a function of grammar heterogeneity were studied in Refs.DeGiuli (2019 ###reference_b1###); De Giuli (2019 ###reference_b12###, 2022 ###reference_b13###). The main result of Ref.DeGiuli (2019 ###reference_b1###) is that as is lowered, there is a transition between two regimes at where or depending on the quantity considered. Theory in Refs.De Giuli (2019 ###reference_b12###, 2022 ###reference_b13###) predicts this scaling (with ) and also predicts that the transition can be reached by fixing but lowering .\nTheory for the RLM was developed in Refs.De Giuli (2019 ###reference_b12###, 2022 ###reference_b13###), with final results obtained in the replica-symmetric approximation. For a text of sentences and total length , the result of Refs.De Giuli (2019 ###reference_b12###, 2022 ###reference_b13###) is that the Boltzmann entropy of configurations is\nwhere is a combinatorial coefficient independent of the other parameters, and and are couplings that control the size of trees. In the considered limit of large trees .\n###figure_2### Now, by a standard argument Parisi (1988 ###reference_b14###) the Boltzmann entropy of configurations is equal to the Shannon entropy of the probability distribution over configurations. This latter quantity can be written as the entropy of forests at given and , plus the conditional entropy of hidden configurations on those trees, plus the conditional entropy of leaves on those configurations. Each of these entropies can be written as the corresponding rate multiplied by the number of symbols. There are observable symbols and hidden symbols, but all roots are set to the start symbol. Thus\nwhere is the entropy rate of hidden symbols and is the conditional entropy rate of observable symbols, given the hidden ones. These configurational entropies are trivial at so that we can write\nThe factors of and cancel from this equality, as they must. As a result we obtain and finally\nComparing with (9 ###reference_###) and noting that this equation must hold for all (with finite ), we deduce\nin the replica-symmetric approximation. Since these entropies cannot be negative, they give lower bounds on the validity of the replica-symmetric approximation (in space). At small enough or , the approximations used to derive (11 ###reference_###) must break down. It also follows from this that the normalized entropies and should collapse with and , respectively.\nNote that Ref.DeGiuli (2019 ###reference_b1###) measured , not . In general the Bayes rule for conditional entropy is . When is small, then knowing the observable symbols also fixes their POS tags, so and . However when is large, then knowing the hidden symbol tells you nothing about the observable symbol, so . Thus generally we expect that as a function of , behaves similarly to .\nWe emphasize that although and play parallel roles in the distribution, and in many aspects of theory, they are distinct parameters with asymmetric control over observables, since the hidden structure of trees affects sentences but not vice-versa. Roughly speaking, we can demarcate four regimes. To explain these, we use the example of phrase structure, where observable symbols are words and hidden symbols are abstract categories, like noun phrase (NP), verb phrase (VP), verb (V), and so on. 
In syntax trees, the hidden symbols that appear just above the leaves are called part-of-speech (POS) tags \u2013 symbols like verb, noun, adjective, and so on.\nIf while , then sentences will consistently match words with their POS tags, but there will be no syntactic structure connecting the words together. Conversely if while , then sentences have structure, but the final observable words are randomly assigned from POS categories. If both of these parameter combinations are large, then sentences lack all structure; while if both are small, then sentence structure is complete. This phase diagram is sketched in Fig.2 ###reference_###, along with 3 paths through the space.\nAs a consequence, one can discuss learning by different routes through ( space (in addition to variations in and which could also be considered). In particular, theory predicts that the RLM transition can be probed by fixing and lowering . We now show that this prediction is verified by numerics."
+ },
+ {
+ "section_id": "1.1",
+ "parent_section_id": "1",
+ "section_name": "The RLM transition is encountered by increasing surface heterogeneity",
+ "text": "We simulated the RLM with and at various values of and . For each parameter value, 60 distinct grammars were constructed, and 200 sentences were sampled for each grammar. The results for the surface entropy are shown in Fig.3 ###reference_###; as predicted by theory, the entropy begins to drop from its trivial value at .\nSince is fixed as varies, there is no variation in the hidden parts of the derivations: the quantities shown in Ref.DeGiuli (2019 ###reference_b1###) to quantify the RLM transition, like the deep entropy and the order parameter , are flat as varies. Instead the transition can be quantified by the surface analog of the order parameter . For a surface rule define\naveraged over all surface vertices and over all derivations. Here is the hidden symbol and the observable one. measures how the application of this rule differs from a uniform distrbution. An Edwards-Anderson type order parameter for surface structure is\nwhere is an average over grammars. This quantity is shown in Fig.3 ###reference_###b. As expected, increases from a small value at high around the transition point."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Learning a context-free grammar",
+ "text": "Now we consider the learning problem. How does a child actually learn the specific grammar of its environment?\nOur goal is not to completely answer this question, but simply to motivate why and how the symmetry of symbols should be explicitly broken. As a simple model, we suppose that the speaker utters sentences by drawing them from a stochastic grammar, which we take to be context-free. In a stochastic grammar, the weights quantify their frequency of use, which, for learners, is a proxy for their correctness. When all the weights are equal, nothing is known, and the grammar samples uniform noise (\u2018babbling\u2019). In contrast, when the weights have a wide distribution, the grammar is highly restrictive and the output sentences are highly non-random.\nThe learning scenario suggested in Ref.DeGiuli (2019 ###reference_b1###) was quite generic: suppose the child knows, possibly due to physiological constraints, that she is learning a CFG. Initially she knows nothing of weights, so she starts at . Her initial speech will be uniform random noise. Now, as she tries to mimic her caregivers, we assume that she tunes the grammar weights. In doing so the corresponding values of and , which could be defined from (5 ###reference_###), will inevitably decrease. Then, the prediction of the RLM is that the entropy of her speech will remain high for some time, until quite suddenly it begins to decrease. At this point her speech begins to convey information.\nThis scenario is quite schematic. Let us try to make it more concrete.\nConsider first an optimal learning scenario. She hears sentences , with words , and wants to find the optimal grammar that produces them. It is natural to maximize the log-likelihood of the grammar, given the data, given by\nwhich is considered as a function of the grammar, with fixed sentences . We assume that the space of grammars that she searches is the full set of possibilities, but of course physiological constraints may also play a role. The sentence probability is\nwhere is then a partition function restricted to the given sentence . In principle, she can estimate these quantities by speaking: every sentence she speaks adds a contribution to the denominator . If she feels that her caregiver understood it, then she also adds a contribution to the numerator .\nUnfortunately computing these restricted partition functions is difficult, both analytically, and for the child. So we consider a simpler, more idealized scenario. She keeps track of a lexicon\nhow many times she\u2019s heard each word, and also the categories to which each word belongs\ncalled part-of-speech (POS) tags.\n###figure_3### She thus obtains an estimate of the joint word & POS frequency, . Then she maximizes the likelihood of ,\nwhere is the count of word and POS tag in the text of total length , i.e.\nThe Kronecker in (18 ###reference_###) counts only texts with the right number of each word and POS tag. We have\nThe energy depends on the words through the term\nwhich has the same dependence on the text and POS tags. So we can write\nwhere\nis a shifted surface grammar (in the complex plane). Note however that when a saddle point is attained (as will be the case for large texts), will be real, so that the grammar is real-valued as it must be.\nFinally becomes\nso the likelihood depends on a shifted grammar. 
If we can evaluate this then we can derive the maximum-likelihood learning strategy, under the given assumptions.\nHowever is evaluated, the natural learning strategy on the grammars is simply to go in the gradient of increasing likelihood:\nwhere is the learning rate.\nRoughly speaking, is a difference of (minus) free energies: that of the RLM in the presence of a biased grammar (to match the observed ), but subtracting off the original RLM free energy. Thus the simple picture of DeGiuli (2019 ###reference_b1###) is slightly modified: the learning scenario can be viewed as a free energy descent, but only along the directions that lower the free energy coupled to the correct biased grammar; if a change in the grammar equally affects and , then it will cancel out of .\nLet us try to understand (26 ###reference_###) better. It involves the RLM partition function for a biased matrix. Note in general that\nNow it is known that natural languages exhibit Zipf\u2019s law: the probability of a word decreases as a power law of its rank. Thus will exhibit such behavior, and by this computation, so should the dependence of on . Thus to understand we should simulate the RLM in the presence of a bias , which we take to have a Zipfian form. We consider this next.\n###figure_4###"
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III RLM with a bias",
+ "text": "The learning scenario motivates considering the RLM with a bias in the surface grammar. Consider\nwhere is the bias, and is given the distribution from the RLM. Then has the distribution\nIn order to disentangle the effect of the bias from that of , we take . As a Zipfian form, we consider\nwhere we arbitrarily order the words in decreasing rank. The scalar is the bias strength.\nWe simulated the RLM with Zipfian bias and a variety of field strengths, for and . The resulting is shown in Fig.6 ###reference_###. The RLM transition is present in all cases, but its position depends on the bias strength . A larger bias causes the transition to occur earlier (at higher ). This is intuitively clear, as the RLM transition was shown to induce the breaking of symmetries among symbols DeGiuli (2019 ###reference_b1###); since the bias breaks this symmetry explicitly, the transition occurs at higher .\nInspecting Fig.6 ###reference_###a, it appears as though the data for different magnitudes of (\u2018bias strengths\u2019) should collapse with some rescaled version of . This suggests that a simple model may capture the dependence on the bias. The transition discussed in DeGiuli (2019 ###reference_b1###); De Giuli (2019 ###reference_b12###, 2022 ###reference_b13###) is controlled by the heterogeneity of the grammar, measured in the original model by (4 ###reference_###), which satisfy (5 ###reference_###). Thus we can see how is renormalized by the bias. We evaluate\nWe can define a renormalized by\nAs shown in Figs. 3b, this approximately collapses the initial decay of from its trivial value. Looking at this initial decay on a logarithmic scale (Fig 3c), all curves appear to cross at a common point .\nWe also simulated the RLM with a staggered field of the form , where takes only three values and , for the first, second, and third third of the symbols, respectively. The form and scaling is chosen to have a similar overall amplitude as the Zipfian bias. We found that for the same values of as above, there was no effect of the bias on . We return to this later."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Comparison with human data",
+ "text": "###figure_5### ###figure_6### How does the RLM compare to first language acquisition in children?\nIn previous work, syntactic networks were built from data of children\u2019s utterances between 22 and 32 months of age Corominas-Murtra et al. (2009 ###reference_b9###), with data from the Peters corpora Bloom et al. (1974 ###reference_b15###, 1975 ###reference_b16###). The networks were built from dependency structures, with a mix of automated and manual procedures. These structures are graphs that connect observable symbols, related to but distinct from phrase structure trees. Their aim is to represent, in a linear fashion, the dominant relationships between words; for example subject-verb, or modifier-head. In Ref.Corominas-Murtra et al. (2009 ###reference_b9###), a variety of network-theoretic quantities showed a clear transition around 24 months of age; for example, both the word degree (the number of other words used with a given word) and the clustering coefficient (measuring the extent to which words are clustered) increase dramatically at this transition. Quantitatively, the clustering is found to be less than 0.01 before age 22.5 months, and above 0.08 after 24 months. The maximal value shown is 0.2, at age 26.5 months.\nIf the RLM is to apply to first language acquisition, then we should be able to see similar behaviors in these quantities, in appropriate graphs constructed from syntax trees. However, the latter are not equivalent to the dependency graphs. In the setting of the RLM where words have no semantic meaning, there is no unambiguous way to create dependency structures. Therefore we build \u2018sentence graphs\u2019 as follows: we take the observed sentences and add a link to the graph from to if , for some observable symbols and and index . This directed graph includes many true dependency relations, but also spurious ones that would be absent in a more complete analysis. It gives a first approximation to the dependency graphs.\nTo illustrate the similarities and differences between our graphs and those in Corominas-Murtra et al. (2009 ###reference_b9###), in Fig.5 ###reference_###a we reproduce the subset of syntax trees shown in Corominas-Murtra et al. (2009 ###reference_b9###), along with their dependency graph in Fig.5 ###reference_###b. In Fig.5 ###reference_###c we show our directed sentence graph. One can see that the undirected structure of the graphs is very similar, while the direction of links is not always the same. For example, for the phrase \u201ctelephone go right there\u201d the dependency graph identifies \u2018go\u2019 as the head and points links towards it, while in our directed graph the links follow the final phrase ordering. As a result of this incomplete matching of the edge directions, we investigated both the directed graph described above, along with the undirected version where edges are not directed.\nIn first language acquisition, both the size of the vocabulary and the manner in which the words are used changes as the child learns. For simplicity, in comparison with the RLM we will consider a situation where the vocabulary is fixed. This is motivated by the fact that, in the RLM, the position of the transition scales with if controlled by and , or if controlled by and : these show a weak logarithmic dependence on the number of symbols/words, so that we expect the \u2019s to characterize the dominant changes during learning; future work could consider an explicit model for how and change during learning. 
Therefore in what follows we focus on the clustering coefficient and the degree distribution, both of which can be meaningfully compared regardless of and .\nInitially, we consider the path in Fig.2 ###reference_###. We confirmed that the results are very similar along path (results not shown).\nWe investigated the clustering coefficient both for the directed graph, constructed as above, and the undirected graph constructed by adding the reverse links. The resulting clustering coefficients are shown in Fig.6 ###reference_### and Fig.7 ###reference_###. As the bias is varied, a clear increase is observed around , consistent with the drop in sentence entropy. Similarly, as is varied the clustering also increases around the transition point. Since a very similar behavior is observed for both our directed and undirected graphs, we expect that the match between this result and that found in Corominas-Murtra et al. (2009 ###reference_b9###) is not a coincidence.\nThe linguistic interpretation of this behavior is interesting Corominas-Murtra et al. (2009 ###reference_b9###): the transition marks the point where the child begins to use functional items like or to connect many words. It thus represents the learning of a particular class of grammatical rules.\nRef. Corominas-Murtra et al. (2009 ###reference_b9###) also looked at the degree distribution of dependency graphs, finding that below the transition graphs were scale-free with . No information was given on the behavior of the distribution during learning. To compare with the degree distributions measured in Corominas-Murtra et al. (2009 ###reference_b9###), we measured the degree distribution of our sentence graphs, shown in Fig.8 ###reference_###. We find that a power-law regime can be discerned, , but with an exponent that depends on . In general, we find that the exponent decreases in magnitude as decreases. At , the exponent matches what was observed in Corominas-Murtra et al. (2009 ###reference_b9###), but we note that this result does not appear to be stable at lower , where a hump develops at large degree. Moreover other corpora show various exponents: in Ref.Ferrer i Cancho et al. (2004 ###reference_b17###) texts from Czech, German, and Romanian show exponents , and , respectively (footnote: these are the exponents of undirected graphs; exponents for in-degree and out-degree graphs are similar). Therefore, both the human data and the RLM show scale-free behavior in the nontrivial regime. A more complete analysis of the human data over the course of learning would permit a more refined comparison.\nFinally, we also looked at the clustering coefficient across paths and in Fig.2 ###reference_### (data not shown). We find that along , is consistently high , while along , the trajectory is very similar to that along shown in Fig.7 ###reference_###. This supports that the first-language-acquisition learning curve does not take place at fixed small .\n###figure_7### ###figure_8### Overall, these results support that the RLM captures the initial onset of learning grammatical structure in first language acquisition."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Finite-size scaling",
+ "text": "True thermodynamic phase transitions only occur in the thermodynamic limit, because in a finite system, the partition function is an analytic function of control parameters. In the RLM, there are 2 distinct ways in which systems can be large: first, the sentence size gives the size of derivation structures, while and are the alphabet sizes, controlling the potential complexity of grammars. For this reason, in DeGiuli (2019 ###reference_b1###) the senior author tuned the control parameters such that sentences were large (with a cutoff ), and moreover crucial observables were shown at various . The existence of finite-size scaling in over an appreciable range from to , and here up to , shows that the basic phenomena of the RLM are not particular to small or large .\nA recent work Nakaishi and Hukushima (2022 ###reference_b18###) questioned whether the RLM shows a true thermodynamic phase transition. By a combination of analytic and numerical arguments, the authors argue that there is no phase transition at finite and finite in the RLM. However, as already shown in DeGiuli (2019 ###reference_b1###), to obtain satisfactory collapse of the data, quantities need to be collapsed with , where or depending on the quantity considered. This is confirmed by theory that predicts , see for example (11 ###reference_###) (after division by to compare with numerical results).\nRef.Nakaishi and Hukushima (2022 ###reference_b18###) measured in particular the Binder cumulant\nwhich is 0 if is Gaussian, and nonzero otherwise. Here is the empirical probability of observing hidden symbol , related to the order parameter . Ref.Nakaishi and Hukushima (2022 ###reference_b18###) found that has a dip at the transition, which becomes infinitely deep as , suggesting that the RLM becomes a true thermodynamic phase transition in this limit. Ref.Nakaishi and Hukushima (2022 ###reference_b18###) suggest that the at which the minimum of is obtained goes to zero as but their fit is suspect: at the largest values of that they use (only ) the plot of versus has a distinct curvature, indicating that functional dependence on is not a power-law. It would indeed be very strange if did not collapse with as all other quantities do. The difference between and in the range of small considered by Ref.Nakaishi and Hukushima (2022 ###reference_b18###) is slight.\nWe measured the same quantity over an ensemble controlled by but found that the fluctuations in this quantity were huge, indicating that it is not self-averaging. Instead we found cleaner measurements of the Binder cumulant of , the distribution of observable symbols, in the ensemble considered above, dependent upon . As shown in Fig.9 ###reference_###, begins to differ from zero at the transition. On logarithmic axes, this onset appears to collapse with a logarithmic factor of , but not the power law reported in Ref.Nakaishi and Hukushima (2022 ###reference_b18###); the much larger range of considered here allows us to distinguish these collapses much easier than would be possible in the range . When the bias is varied, a similar behavior is observed (not shown).\nIt was mentioned in Ref.Nakaishi and Hukushima (2022 ###reference_b18###) that the behavior of the Binder cumulant is similar to that observed in the 3D Heisenberg spin glass Imagawa and Kawamura (2002 ###reference_b19###). 
Thus, contrary to the title of Ref.Nakaishi and Hukushima (2022 ###reference_b18###), the results within actually support the existence of the RLM transition, in the limit , in appropriately rescaled variables. Since true thermodynamic phase transitions reside in universality classes, with a whole host of irrelevant variables, this further supports the robustness of the RLM as a simple model of syntax."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Discussion",
+ "text": "The RLM encompasses all stochastic context-free grammars and, as such, is versatile. However, different applications may suggest different parameter ranges. This connects with ongoing discussion in linguistics on the relevant formalism to capture syntax of human languages. For example, in the classic rules-based approach of generative grammars, a child has to learn both the syntactic rules and the lexicon; in the RLM this means that their initial grammar would have large and large .\nIn the 1990\u2019s, Chomsky attempted to unify the CFGs of human languages by proposing in the Minimalist Program Chomsky (2014 ###reference_b20###) that their deep structure was essentially identical, captured by a Merge function that allows one to create tree-like derivation structures. Then variety among human languages would be captured by variety in the lexicon. More generally, this represented a shift from rules-based to constraint-based grammars. Although the associated merge grammars are, strictly speaking, different from CFGs, they maintain the core property of creating trees, and are similar to fixing a small so that deep structure is fixed. Then the learning problem would fix a small and allow the other parameters to vary, for example like path in Fig.2 ###reference_###.\nAlong with the shift to constraint-based grammar, the Minimalist Program proposed that syntax requires an optimality computation, which was not specified in detail. This has been criticized as being unmotivated by core linguistic data Johnson and Lappin (1997 ###reference_b21###), so it is not accepted as mainstream by linguists. For this reason, here we stay agnostic on the detailed description of learning and the relevant parameter ranges in the RLM, and focus on universal aspects.\nTo learn a human language within the CFG framework, the Principles & Parameters (P&P) scenario for first language learning was proposed Chomsky (1993 ###reference_b22###). In it, the task of learning syntax is reduced to the setting of a small number of discrete parameters, usually considered to be binary Shlonsky (2010 ###reference_b23###). Ongoing debate surrounds the detailed taxonomy of parameters and associated categorization of language, but regardless of these details, the scenario suggests that learning will occur in a series of discrete steps. Observables that quantify learning should then also show discrete steps.\nMeanwhile, connectionist models based on the physiology of the brain use continuous variables to learn McClelland and Rumelhart (1981 ###reference_b24###). Debate on how people learn past-tense suggested the utility of stochastic rule-based models Pinker and Prince (1988 ###reference_b25###), like those considered in the RLM. While the early connectionist models gave poor performance, recent models do much better, without significant change in the underlying structure Kirov and Cotterell (2018 ###reference_b26###). Thus debate continues on the correct approach to learn syntax, with some calling for a more symbiotic approach between connectionism and generative grammar Pater (2019 ###reference_b27###).\nThe recent success of machine learning models at learning language has further ignited this debate Piantadosi (2023 ###reference_b28###). But while such models are an existence proof of the ability to learn language without significant constraints, they currently rely on a huge database to learn, and struggle with formal reasoning. 
Their connection to first-language acquisition in humans is thus unclear.\nAt variance with the P&P approach, but more aligned with stochastic models and neural networks, human data analyzed in Ref.Corominas-Murtra et al. (2009 ###reference_b9###) as well as the RLM both suggest a single learning transition, with continuous (although in some cases quite abrupt) variation in observables. In the RLM this statement is robust to the inclusion of a bias, reflecting heterogeneity in the environment (footnote: one may wonder if the specific Zipfian bias considered above is itself too smooth to see a series of discrete transitions. To this end, we also tested a bias taking on only 3 values. Over the same range of bias strengths shown above, this bias did not have any effect on the sentence entropy). Thus, in all cases considered, the RLM transition is unimodal, matching that seen in human data.\nThese results suggest two possibilities. The first is that learning is truly a continuous process, in which what is learned are weights (or probabilities) rather than discrete rules. Frequency effects are indeed ubiquitous in first language acquisition Ambridge et al. (2015 ###reference_b29###), and there are proposals on how measured frequencies can be used to infer rules Yang et al. (2017 ###reference_b30###). Moreover, the recent successes of machine learning in natural language processing Chang and Bergen (2023 ###reference_b31###) invariably use approaches with parameters that can be continuously tuned during the training process. Thus the notion of discrete syntactic parameters that are set during learning appears overly simplistic, and may fail to account for the diversity of human languages, as has been argued by linguists and psychologists, with vociferous debate Evans and Levinson (2009 ###reference_b32###). Instead our results suggest that learning is continuous; after the RLM transition, the entropy of children\u2019s speech continuously decreases, and concomitantly, the grammar becomes more and more certain.\nThe second possibility is that discrete rules are indeed learned, but they are only detectable by sufficiently sensitive order parameters. Recent work on learning of semantic information showed a mechanism for discrete-like transitions hidden within a continuous process Saxe et al. (2019 ###reference_b33###). Focusing on an input-output correlation matrix, it was found that singular values of this matrix are learned in a stepwise fashion; moreover when data is hierarchical, then these singular values are strongly graded, leading to distinct learning transitions. If this scenario also applies to learning of syntax, then it remains elusive in the data."
46
+ },
47
+ {
48
+ "section_id": "7",
49
+ "parent_section_id": null,
50
+ "section_name": "VII Conclusion",
51
+ "text": "The Random Language Model was introduced in DeGiuli (2019 ###reference_b1###) as a simple model of language. We showed here that the RLM transition: (i) can be encountered by a change in properties of observable sentences; (ii) is robust to the inclusion of a bias, and (iii) is apparently a sharp thermodynamic transition as , in appropriately rescaled variables. A comparison with human data Corominas-Murtra et al. (2009 ###reference_b9###) supports that the RLM transition is equivalent to that experienced by most children in the age 22-26 months in the course of first language acquisition.\nIn future work, two avenues look promising: first, although limited by availability of quantitative data, more attempts to make a quantitative comparison with human data would be worthwhile; second, the astounding success of machine learning models to model natural language, and the lack of a theory to explain this, suggest that the RLM might shed light on this process. Indeed, the RLM captures several features of real-world data (long-range correlations, hierarchy, and combinatorial structure) that are missing from most physics models, and needed to understand modern deep neural networks M\u00e9zard (2023 ###reference_b34###).\nFinally, the search for an analytical solution to the RLM is ongoing. A promising approach De Giuli (2019 ###reference_b12###, 2022 ###reference_b13###) represents syntax trees as Feynman diagrams for an appropriate field theory, but this falls short of a complete solution. The results of Nakaishi and Hukushima (2022 ###reference_b18###), as well as the results here, suggest that one should look for a solution in the idealized limit .\nAcknowledgments: EDG is supported by NSERC Discovery Grant RGPIN-2020-04762."
52
+ }
53
+ ],
54
+ "appendix": [],
55
+ "tables": {},
56
+ "image_paths": {
57
+ "1": {
58
+ "figure_path": "2309.14913v2_figure_1.png",
59
+ "caption": "Figure 1: Illustrative derivation trees for (a) simple English sentence, and (b) RNA secondary structure (after Searls (2002)). The latter is a derivation of the sequence \u2018gacuaagcugaguc\u2019 and shows its folded structure. Terminal symbols are encircled. Figure reproduced from DeGiuli (2019).",
60
+ "url": "http://arxiv.org/html/2309.14913v2/x1.png"
61
+ },
62
+ "2": {
63
+ "figure_path": "2309.14913v2_figure_2.png",
64
+ "caption": "Figure 2: Phase diagram of the RLM, in the replica-symmetric approximation. Text is grammatical in the lower-left region, demarcated approximately by \u03f5~s\u2062log\u2061T\u22481,\u03f5~d\u2062log\u2061N\u22481formulae-sequencesubscript~italic-\u03f5\ud835\udc60\ud835\udc471subscript~italic-\u03f5\ud835\udc51\ud835\udc411\\tilde{\\epsilon}_{s}\\log T\\approx 1,\\tilde{\\epsilon}_{d}\\log N\\approx 1over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT roman_log italic_T \u2248 1 , over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT roman_log italic_N \u2248 1 (light dotted). Three paths \u03b3jsubscript\ud835\udefe\ud835\udc57\\gamma_{j}italic_\u03b3 start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT through the diagram are sketched: \u03b31subscript\ud835\udefe1\\gamma_{1}italic_\u03b3 start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT at fixed \u03f5~ssubscript~italic-\u03f5\ud835\udc60\\tilde{\\epsilon}_{s}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT, considered in DeGiuli (2019); \u03b32subscript\ud835\udefe2\\gamma_{2}italic_\u03b3 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT with \u03f5~s=\u03f5~dsubscript~italic-\u03f5\ud835\udc60subscript~italic-\u03f5\ud835\udc51\\tilde{\\epsilon}_{s}=\\tilde{\\epsilon}_{d}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT, discussed below; and \u03b33subscript\ud835\udefe3\\gamma_{3}italic_\u03b3 start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT at fixed \u03f5~dsubscript~italic-\u03f5\ud835\udc51\\tilde{\\epsilon}_{d}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT, also discussed below.",
65
+ "url": "http://arxiv.org/html/2309.14913v2/x2.png"
66
+ },
67
+ "3": {
68
+ "figure_path": "2309.14913v2_figure_3.png",
69
+ "caption": "Figure 3: The RLM transition can be encountered by lowering the surface temperature \u03f5ssubscriptitalic-\u03f5\ud835\udc60\\epsilon_{s}italic_\u03f5 start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT. Curves are shown at T=1000\ud835\udc471000T=1000italic_T = 1000, \u03f5~d\u22480.03subscript~italic-\u03f5\ud835\udc510.03\\tilde{\\epsilon}_{d}\\approx 0.03over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT \u2248 0.03, and indicated values of N\ud835\udc41Nitalic_N; (a) the surface entropy drops around \u03f5~s\u22481/log\u2061Tsubscript~italic-\u03f5\ud835\udc601\ud835\udc47\\tilde{\\epsilon}_{s}\\approx 1/\\log Tover~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2248 1 / roman_log italic_T, while (b) the surface order parameter P2subscript\ud835\udc432P_{2}italic_P start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT increases as \u03f5~ssubscript~italic-\u03f5\ud835\udc60\\tilde{\\epsilon}_{s}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT is lowered.",
70
+ "url": "http://arxiv.org/html/2309.14913v2/x3.png"
71
+ },
72
+ "4": {
73
+ "figure_path": "2309.14913v2_figure_4.png",
74
+ "caption": "Figure 4: The RLM transition is robust to the addition of a Zipfian surface bias. Curves are shown at T=100\ud835\udc47100T=100italic_T = 100, \u03f5~d\u22480.03subscript~italic-\u03f5\ud835\udc510.03\\tilde{\\epsilon}_{d}\\approx 0.03over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT \u2248 0.03, and indicated values of bias strength h\u210ehitalic_h; (a) the surface entropy versus \u03f5~ssubscript~italic-\u03f5\ud835\udc60\\tilde{\\epsilon}_{s}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT; bias increases from left to right; (b) the surface entropy versus an effective \u03f5~se\u2062f\u2062f\u2062(\u03f5~s,h)superscriptsubscript~italic-\u03f5\ud835\udc60\ud835\udc52\ud835\udc53\ud835\udc53subscript~italic-\u03f5\ud835\udc60\u210e\\tilde{\\epsilon}_{s}^{eff}(\\tilde{\\epsilon}_{s},h)over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_e italic_f italic_f end_POSTSUPERSCRIPT ( over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT , italic_h ) (see text). The onset of nontrivial surface entropy occurs at approximately \u03f5~se\u2062f\u2062f\u22481superscriptsubscript~italic-\u03f5\ud835\udc60\ud835\udc52\ud835\udc53\ud835\udc531\\tilde{\\epsilon}_{s}^{eff}\\approx 1over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_e italic_f italic_f end_POSTSUPERSCRIPT \u2248 1, but its development is weaker at larger biases. In (c) the same data from (b) is shown as an approach to the trivial value Hs\u2192log\u2061T\u2192subscript\ud835\udc3b\ud835\udc60\ud835\udc47H_{s}\\to\\log Titalic_H start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2192 roman_log italic_T, valid as \u03f5~s\u2192\u221e\u2192subscript~italic-\u03f5\ud835\udc60\\tilde{\\epsilon}_{s}\\to\\inftyover~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2192 \u221e. All curves intersect approximately at \u03f5~s\u22481subscript~italic-\u03f5\ud835\udc601\\tilde{\\epsilon}_{s}\\approx 1over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2248 1.",
75
+ "url": "http://arxiv.org/html/2309.14913v2/x4.png"
76
+ },
77
+ "5": {
78
+ "figure_path": "2309.14913v2_figure_5.png",
79
+ "caption": "Figure 5: Example syntax forest (a), dependency graph (b), and directed sentence graph (c) obtained from human data. Note that the word \u2018fix\u2019 appeared in the dependency graph of Corominas-Murtra et al. (2009) but not in the syntax tree shown therein.",
80
+ "url": "http://arxiv.org/html/2309.14913v2/x5.png"
81
+ },
82
+ "6": {
83
+ "figure_path": "2309.14913v2_figure_6.png",
84
+ "caption": "Figure 6: The clustering coefficient of word graphs increases at the RLM transition, for T=100\ud835\udc47100T=100italic_T = 100 and indicated Zipfian biases, with strength h\u210ehitalic_h. Both (a) directed and (b) undirected graphs show a similar increase of clustering around the transition point \u03f5~se\u2062f\u2062f\u22481superscriptsubscript~italic-\u03f5\ud835\udc60\ud835\udc52\ud835\udc53\ud835\udc531\\tilde{\\epsilon}_{s}^{eff}\\approx 1over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_e italic_f italic_f end_POSTSUPERSCRIPT \u2248 1. In both plots, the bias increases from right to left at the top.",
85
+ "url": "http://arxiv.org/html/2309.14913v2/x6.png"
86
+ },
87
+ "7": {
88
+ "figure_path": "2309.14913v2_figure_7.png",
89
+ "caption": "Figure 7: The clustering coefficient of word graphs increases at the RLM transition, for T=1000\ud835\udc471000T=1000italic_T = 1000 and indicated N\ud835\udc41Nitalic_N. Both (a) directed and (b) undirected graphs show a similar increase of clustering around \u03f5~s\u22480.1subscript~italic-\u03f5\ud835\udc600.1\\tilde{\\epsilon}_{s}\\approx 0.1over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2248 0.1.",
90
+ "url": "http://arxiv.org/html/2309.14913v2/x7.png"
91
+ },
92
+ "8": {
93
+ "figure_path": "2309.14913v2_figure_8.png",
94
+ "caption": "Figure 8: Degree distribution of sentence graphs at indicated values of N\ud835\udc41Nitalic_N and (a) \u03f5~s=10\u22122.2subscript~italic-\u03f5\ud835\udc60superscript102.2\\tilde{\\epsilon}_{s}=10^{-2.2}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 10 start_POSTSUPERSCRIPT - 2.2 end_POSTSUPERSCRIPT, (b) \u03f5~s=10\u22121.6subscript~italic-\u03f5\ud835\udc60superscript101.6\\tilde{\\epsilon}_{s}=10^{-1.6}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 10 start_POSTSUPERSCRIPT - 1.6 end_POSTSUPERSCRIPT, (c) \u03f5~s=10\u22120.99subscript~italic-\u03f5\ud835\udc60superscript100.99\\tilde{\\epsilon}_{s}=10^{-0.99}over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT = 10 start_POSTSUPERSCRIPT - 0.99 end_POSTSUPERSCRIPT. In all cases an approximate power-law regime can be discerned. The shown lines have exponents 1.3,2,1.321.3,2,1.3 , 2 , and 3333, for (a,b,c), respectively.",
95
+ "url": "http://arxiv.org/html/2309.14913v2/x8.png"
96
+ },
97
+ "9": {
98
+ "figure_path": "2309.14913v2_figure_9.png",
99
+ "caption": "Figure 9: Binder cumulant of observable word distribution, for T=1000\ud835\udc471000T=1000italic_T = 1000 and indicated N\ud835\udc41Nitalic_N. (a) This quantity begins to differ from 0 around \u03f5~s\u22481subscript~italic-\u03f5\ud835\udc601\\tilde{\\epsilon}_{s}\\approx 1over~ start_ARG italic_\u03f5 end_ARG start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT \u2248 1, as expected. (b,c) On a logarithmic axis, the onset appears to collapse with a logarithmic factor of N\ud835\udc41Nitalic_N, but not the power-law suggested in Nakaishi and Hukushima (2022).",
100
+ "url": "http://arxiv.org/html/2309.14913v2/x9.png"
101
+ }
102
+ },
103
+ "validation": true,
104
+ "references": [],
105
+ "url": "http://arxiv.org/html/2309.14913v2"
106
+ }
20240322/2309.15271v2.json ADDED
@@ -0,0 +1,147 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "On the Modularity of Elementary Dynamic Actions",
3
+ "abstract": "In this paper, a kinematically modular approach to robot control is presented.\nThe method involves structures called Elementary Dynamic Actions and a network model combining these elements.\nWith this control framework, a rich repertoire of movements can be generated by combination of basic modules.\nThe problems of solving inverse kinematics, managing kinematic singularity and kinematic redundancy are avoided.\nThe modular approach is robust against contact and physical interaction, which makes it particularly effective for contact-rich manipulation.\nEach kinematic module can be learned by Imitation Learning, thereby resulting in a modular learning strategy for robot control.\nThe theoretical foundations and their real robot implementation are presented.\nUsing a KUKA LBR iiwa14 robot, three tasks were considered: (1) generating a sequence of discrete movements, (2) generating a combination of discrete and rhythmic movements, and (3) a drawing and erasing task.\nThe results obtained indicate that this modular approach has the potential to simplify the generation of a diverse range of robot actions.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "INTRODUCTION",
9
+ "text": "To generate complex motor behavior that can match that of humans, robot control based on motor primitives has been proposed [1 ###reference_b1###, 2 ###reference_b2###, 3 ###reference_b3###, 4 ###reference_b4###, 5 ###reference_b5###, 6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 10 ###reference_b10###].\nThe method originates from motor neuroscience research, where the complex motor behavior of biological systems appears to be generated by a combination of fundamental building blocks [11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###].\nBy parameterizing a controller using motor primitives, robots can efficiently learn, adapt, and execute a wide range of tasks.\nIn robotics, two distinct motor-primitive approaches have been identified: Elementary Dynamic Actions (EDA)111The original name suggested by Hogan and Sternad [6 ###reference_b6###] was \u201cDynamic Motor Primitives.\u201d However, to avoid confusion due to the similarity to \u201cDynamic Movement Primitives,\u201d we instead use \u201cElementary Dynamic Actions.\u201d [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###] and Dynamic Movement Primitives (DMP) [4 ###reference_b4###, 17 ###reference_b17###, 10 ###reference_b10###].\nEDA provides a modular framework for robot control that also accounts for physical interaction [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###, 9 ###reference_b9###, 18 ###reference_b18###].\nOne of its applications, impedance control [19 ###reference_b19###], has been a prominent approach for tasks involving contact and physical interaction.\nDMP provides a rigorous mathematical framework to generate movements of arbitrary complexity [4 ###reference_b4###].\nIts prominent application, Imitation Learning (or Learning from Demonstration [3 ###reference_b3###]), provides a systematic method to learn (or imitate) trajectories that are provided by demonstration.\nAlthough both EDA and DMP have provided useful frameworks for robot control, the potential advantages of integrating these two approaches have not yet been thoroughly explored.\nEDA enhances modularity in motion planning and robot command execution, which greatly simplifies robot control. Nevertheless, programming at the kinematic level remains challenging [19 ###reference_b19###, 20 ###reference_b20###]. Therefore, incorporating learning-based methods, such as Imitation Learning, could significantly improve the usability of this approach.\nDMP is a prominent method to generate a rich repertoire of movements. Yet, mapping these movements to robot commands is not trivial and requires additional consideration. For DMP trajectories generated in task-space, additional methods must be included to map these trajectories to joint position or torque commands, e.g., managing kinematic singularities [21 ###reference_b21###] and kinematic redundancy [22 ###reference_b22###] of the robot. Therefore, merging DMP with the modularity of EDA will facilitate robot control.\nIn this paper, we combine EDA and DMP to achieve a modular learning approach for robot control. We show how a wide range of robot movements can be produced by combining basic modules. This has the potential to facilitate the programming and control of more difficult robot tasks. The approach presented preserves the advantages of EDA for tasks involving contact and physical interaction. 
Hence, the approach can be employed for contact-rich manipulation.\nWe demonstrate our approach with an implementation on a real robot, using a KUKA LBR iiwa for modular task-space control. These demonstrations illustrate how the proposed approach can simplify the generation of a range of different robot tasks.\nWe present three different combinations of modules: (1) a sequence of discrete movements, (2) a combination of discrete and rhythmic movements, and (3) a combination of rhythmic movements with Imitation Learning for a drawing and erasing task. Examples (1) and (2) highlight the kinematic modularity offered by EDA.\nExample (3) shows how the kinematic programming with EDA can be enhanced by imitation learning.\nThe task also includes contact and physical interaction to showcase EDA\u2019s robustness in dealing with physical interaction."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Theoretical Foundations",
15
+ "text": "For the remainder of the paper, a torque-actuated degrees of freedom (DOFs) open-chain robotic manipulator will be considered. How to use the approach with position-actuated robots will briefly be discussed in Section V ###reference_###."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Elementary Dynamic Actions and the Norton Equivalent Network Model",
21
+ "text": "EDA, introduced by Hogan and Sternad [6 ###reference_b6###, 7 ###reference_b7###, 8 ###reference_b8###], consist of (at least) three distinct classes of primitives (Figure 1 ###reference_###A):\nSubmovements for discrete movements [23 ###reference_b23###].\nOscillations for rhythmic movements [23 ###reference_b23###].\nMechanical impedances to manage physical interaction [9 ###reference_b9###].\nSubmovements and oscillations constitute the kinematic primitives, and mechanical impedance constitutes the interactive primitives of EDA.\n###figure_1###"
22
+ },
23
+ {
24
+ "section_id": "2.1.1",
25
+ "parent_section_id": "2.1",
26
+ "section_name": "II-A1 Kinematic Primitives \u2014 Submovements and Oscillations",
27
+ "text": "A submovement is a smooth trajectory with its time derivative defined by:\nIn this equation, is time; denotes a smooth unimodal basis function in which the integration over the time domain is 1, i.e., ;\n is the velocity amplitude array.\nSince submovements represent discrete movement, has a finite support, i.e., for .\nGiven an initial position and goal location , .\nHence, submovement is defined by:\nIn this equation, is an integral of , i.e., .\nAccounting for the definition of , .\nAn oscillation is a smooth non-zero trajectory which is a periodic function:\nCompared to submovements, oscillations represent rhythmic and repetitive motions."
28
+ },
29
+ {
30
+ "section_id": "2.1.2",
31
+ "parent_section_id": "2.1",
32
+ "section_name": "II-A2 Interactive Primitive \u2014 Mechanical Impedances",
33
+ "text": "Mechanical impedance is an operator which maps (generalized) displacement to (generalized) force [8 ###reference_b8###, 9 ###reference_b9###, 18 ###reference_b18###]:\nIn this equation, is the displacement of an actual trajectory of (generalized) position from a virtual trajectory to which the mechanical impedance is connected, i.e., .\nLoosely speaking, mechanical impedance is a generalization of stiffness to encompass the dynamic relation of force to displacement and its derivatives.\nThe definition of generalized displacement accounts for the actual space in which resides.\nFor instance, if one considers the displacements, , where are the virtual and actual end-effector position.\nIf one considers the displacement in , , where are the virtual and actual end-effector orientation of the robot (Section III-A ###reference_###).\nCompared to the kinematic primitives, mechanical impedance is an interactive primitive which regulates the dynamics of physical interaction.\nFor instance, tactile exploration and manipulation of fragile objects should evoke the use of low stiffness, while tasks such as drilling a hole on a surface require high stiffness for object stabilization [9 ###reference_b9###].\nParameterizing control based on mechanical impedances provides beneficial stability properties, making this approach effective for tasks involving contact and physical interaction [19 ###reference_b19###, 20 ###reference_b20###].\nAnother feature of mechanical impedances is that they can be linearly superimposed even though each mechanical impedance is a nonlinear operator.\nThis is the \u201csuperposition principle of mechanical impedances\u201d [19 ###reference_b19###, 8 ###reference_b8###, 9 ###reference_b9###, 18 ###reference_b18###]:\nNote that the impedance operators include transformation maps (i.e., Jacobian matrices) (Section III ###reference_###).\nThis property simplifies a wide range of control tasks, as it provides a modular framework for robot control [24 ###reference_b24###, 25 ###reference_b25###, 26 ###reference_b26###, 8 ###reference_b8###, 18 ###reference_b18###, 27 ###reference_b27###]."
34
+ },
35
+ {
36
+ "section_id": "2.1.3",
37
+ "parent_section_id": "2.1",
38
+ "section_name": "II-A3 Norton Equivalent Network Model",
39
+ "text": "The three primitives of EDA can be combined using a Norton equivalent network model [8 ###reference_b8###], which provides an effective framework to relate the three elements of EDA (Figure 1 ###reference_###B).\nThe forward-path dynamics specifies the virtual trajectory , which consists of submovements and/or oscillations.\nThe mechanical impedance , determines from which is eventually mapped to the robot joint torque command.\nHence, a key objective of EDA is to find appropriate choices of and to generate the desired robot behavior.\nThe Norton equivalent network model clearly distinguishes the actual trajectory from the virtual trajectory to which the mechanical impedance is connected.\nWhile is determined by the interaction with the environment (i.e., bidirectional), the virtual trajectory can be chosen independently with respect to the environment (i.e., unidirectional) [8 ###reference_b8###].\nThis allows submovements and/or oscillations to be directly superimposed at the level of virtual trajectory .\nNot only submovements and/or oscillations, but any trajectory generating methods such as DMP can be seamlessly included to generate :\nIn this equation, , , are submovement, oscillation, and trajectory generated by DMP, respectively.\nAs can be seen in the equation, this concept provides kinematic modularity which is capable of simplifying the generation of a wide range of movements."
40
+ },
41
+ {
42
+ "section_id": "2.2",
43
+ "parent_section_id": "2",
44
+ "section_name": "II-B Dynamic Movement Primitives and Imitation Learning",
45
+ "text": "DMP, introduced by Ijspeert, Schaal, et al.[4 ###reference_b4###] consists of three distinct classes of primitives: canonical system, nonlinear forcing term, and transformation system.\nUsing these three primitives, DMP can generate both discrete and rhythmic movements.\nFor this overview, we focus on DMP for discrete movement, although the generalization to rhythmic movement is straightforward for the application [4 ###reference_b4###].\nIn this paper, DMP is used to generate the virtual trajectory of EDA."
46
+ },
47
+ {
48
+ "section_id": "2.2.1",
49
+ "parent_section_id": "2.2",
50
+ "section_name": "II-B1 Dynamic Movement Primitives",
51
+ "text": "A canonical system for discrete movement is a scalar variable governed by a stable first-order differential equation [10 ###reference_b10###]:\nIn this equation, , where is the duration of the discrete movement.\nA nonlinear forcing term for discrete movement, , which takes the canonical system as the function argument, is defined by:\nIn these equations, is the -th basis function of the nonlinear forcing term which is a Gaussian function;\n is the number of basis functions; is the weight array and , determine the center and width of the -th basis function, respectively.\nThe nonlinear forcing term can be concisely denoted by:\nIn this equation, is the weight matrix with as the -th column; is the -th element of .\nA transformation system is a collection of second-order differential equations with a scaled nonlinear forcing term as an input:\n\nIn these equations, ; denotes the time-scaled velocity of ; constructs a diagonal matrix with its elements defined by its array argument.\nGiven the initial conditions , and the nonlinear forcing term , the differential equation for the transformation system is forward integrated to generate .\nWithout , trajectory follows a response of a stable second-order linear system which converges to .\nTo generate a wider range of movements, the weights of the nonlinear forcing term are learned through various methods [28 ###reference_b28###, 10 ###reference_b10###].\nOne of the prominent methods is Imitation Learning, which learns the weight array by demonstration."
52
+ },
53
+ {
54
+ "section_id": "2.2.2",
55
+ "parent_section_id": "2.2",
56
+ "section_name": "II-B2 Imitation Learning",
57
+ "text": "Imitation Learning generates (or mimics) a demonstrated trajectory from the transformation system by learning the best-fit weights .\nGiven samples of , for , the best-fit weights are calculated using Linear Least Square Regression [10 ###reference_b10###]:\nIn these equations, , .\nFor the initial and goal positions of the demonstrated trajectory, and , the first and last samples are used, i.e., and .\nAlong with Linear Least Square Regression, Locally Weighted Regression can also be used to find the best-fit weights [29 ###reference_b29###, 4 ###reference_b4###].\nHowever, for a higher accuracy of imitating , Linear Least Square Regression is used."
58
+ },
59
+ {
60
+ "section_id": "3",
61
+ "parent_section_id": null,
62
+ "section_name": "III The Three Control Tasks and Methods",
63
+ "text": "In this section, we introduce three control tasks and the corresponding methods to achieve them.\nGenerating a sequence of discrete movements.\nGenerating a combination of discrete and rhythmic movements.\nDrawing and erasing a path on a table.\nThe first two tasks highlight the kinematic modularity of EDA.\nThe final task provides an illustrative example of combining Imitation Learning of DMP with EDA.\nThe task also involves physical contact which illustrates the benefit of EDA for contact-rich manipulation.\nAll three tasks consider task-space control, both position and orientation, using a kinematically redundant robot, i.e., ."
64
+ },
65
+ {
66
+ "section_id": "3.1",
67
+ "parent_section_id": "3",
68
+ "section_name": "III-A The Robot Controller",
69
+ "text": "The torque command of a robot, is defined by superimposing three mechanical impedances (Eq. (4 ###reference_###)):\nwhere:\nIn these equations, , , denote mechancal impedances for joint-space, task-space position and task-space orientation, respectively; denotes the robot\u2019s joint trajectories;\n denote the Jacobian matrices for the translational velocity and rotational velocity of the robot\u2019s end-effector, respectively, i.e., and ;\n denotes the Matrix Logarithm Map [30 ###reference_b30###, 31 ###reference_b31###];\nend-effector position and orientation are derived by the Forward Kinematics Map of the robot.\nTo generate goal-directed discrete movements that (1) maintain stability against contact and physical interaction and (2) manage kinematic redundancy, it is convenient to select positive definite stiffness matrices and damping matrices , [20 ###reference_b20###].\nIt is worth emphasizing that the controller based on EDA only involves the Jacobian transpose map, but not its (generalized)-inverse.\nHence, the controller avoids the problem of solving inverse kinematics, thus managing kinematic singularity and redundancy.\nMoreover, the controller does not involve local representations of (e.g., Euler angles), since it directly uses the spatial rotation matrices .\nHence, the controller avoids the problem of representation singularities [32 ###reference_b32###].\nThese features significantly simplify control in task-space."
70
+ },
71
+ {
72
+ "section_id": "3.2",
73
+ "parent_section_id": "3",
74
+ "section_name": "III-B Generating a Sequence of Discrete Movements",
75
+ "text": "Given an initial end-effector position , let be a goal location which the robot\u2019s end-effector aims to reach.\nThis goal-directed discrete movement can be achieved by setting as [33 ###reference_b33###, 34 ###reference_b34###, 35 ###reference_b35###]:\nIn these equations, is a submovement starting at time with duration (Eq. (1 ###reference_###), (2 ###reference_###))\nAt time , the goal location suddenly changes to a new goal location , which necessitates a second movement.\nUsing the kinematic modularity of EDA (Eq. (5 ###reference_###)),\nthe end-effector can reach by superimposing another submovement , without modifying :\nIn these equations, is a second submovement starting at time with duration .\nWith this new combined with the two submovements and , the convergence of to the new goal is achieved."
76
+ },
77
+ {
78
+ "section_id": "3.3",
79
+ "parent_section_id": "3",
80
+ "section_name": "III-C Generating a Combination of Discrete and Rhythmic Movements",
81
+ "text": "Consider a goal-directed discrete movement from initial position to goal location .\nAs discussed in Section III-B ###reference_###, this movement can be achieved by a single submovement with amplitude .\nOur goal is to overlap a rhythmic movement onto this goal-directed discrete movement.\nThis can be achieved by direct summation of an oscillation (Eq. (3 ###reference_###)) onto :\n###figure_2###"
82
+ },
83
+ {
84
+ "section_id": "3.4",
85
+ "parent_section_id": "3",
86
+ "section_name": "III-D Drawing and Erasing Task",
87
+ "text": "Consider a task of teaching a robot to draw a demonstrated trajectory on a table.\nWithout loss of generality, assume that the drawing table resides on a horizontal -plane.\nAfter drawing , the robot retraces backwards, with an additional oscillatory movement to erase the trajectory.\nFor this, one can combine Imitation Learning with the kinematic modularity of EDA, where an oscillatory movement is directly superimposed (Section III-C ###reference_###) onto the trajectory learned by DMP.\nImitation Learning with a two-dimensional DMP can be used to generate and this trajectory was used as the -, -coordinates of the virtual trajectory to draw .\nOnce is drawn on the plane by , the drawing can be erased by simply superimposing an oscillation on a time-reversed trajectory of :\nIn this equation, is the duration of .\nNote that the -coordinates of , , is not learned through Imitation Learning. Instead, an appropriate value for must be chosen to remain in contact with the table.\nTo elaborate, consider the -coordinate of the drawing table to be .\nThe pen, extending from the robot\u2019s end-effector and facing downward in the direction, requires setting to be lower than the drawing plane by an offset , i.e., . This offset is determined based on the desired contact force. Furthermore, the orientation of the pen, crucial for drawing and erasing tasks, are maintained by setting a constant of the controller (Eq. (6 ###reference_###)).\nIt is important to underline that the selected task not only is of interest to our modular claim (as it combines different kinematic actions as well as managing physical interaction), but it also represents a common scenario for multiple real-world scenarios such as polishing, cleaning, sanding, grinding, or even writing."
88
+ },
89
+ {
90
+ "section_id": "4",
91
+ "parent_section_id": null,
92
+ "section_name": "IV Experimental Results",
93
+ "text": "For the robot experiment, a KUKA LBR iiwa14, with seven torque-actuated DOFs (i.e., ), was utilized. For control, KUKA\u2019s Fast Robot Interface (FRI) was used. The built-in gravity compensation was activated for all three tasks. For the impedance parameters , and (Eq. (6 ###reference_###)), identical values were applied across all tasks.\nThe impedance values were set as follows: Nms/rad, where denotes an identity matrix; Nm/rad, Nms/rad.\nThe robot\u2019s configuration was directly accessed through the FRI interface, and was derived using a first-order finite difference of with a 3ms time step.\nThe Forward Kinematics Map for deriving , , and the Jacobian matrices , was calculated with the Exp[licit]TM-FRI Library.222Github repository: https://github.com/explicit-robotics/Explicit-FRI"
94
+ },
95
+ {
96
+ "section_id": "4.1",
97
+ "parent_section_id": "4",
98
+ "section_name": "IV-A Generating a Sequence of Discrete Movements",
99
+ "text": "For the basis function of the two submovements and , a minimum-jerk trajectory was used [36 ###reference_b36###]:\nThe values , (Eq. (6 ###reference_###)) were N/m, Ns/m, respectively.\nFor the orientation, was set to be constant.\n###figure_3### The results are shown in Figure 2 ###reference_###.\nWith the proposed approach, a convergence of to the new goal location was achieved (Figure 2 ###reference_###A, 2 ###reference_###B) by simply superimposing a second submovement onto the first one (Figure 2 ###reference_###C, 2 ###reference_###D, 2 ###reference_###E).\nNote that the task was achieved without any modification of the first submovement.\nHence, the robot could adapt its movement to reach the new destination without the need for real-time modification of the initiated movements.\nThis flexibility of the approach is advantageous in scenarios where quick adaptation is necessary to reach a new goal location.\nNote that the simplicity of this approach is not guaranteed for other methods.\nFor instance, using DMP, the task requires online modification of the initiated movement [37 ###reference_b37###], which may introduce practical difficulties for implementation.\n###figure_4### ."
100
+ },
101
+ {
102
+ "section_id": "4.2",
103
+ "parent_section_id": "4",
104
+ "section_name": "IV-B Generating a Combination of Discrete and Rhythmic Movements",
105
+ "text": "For the submovement, with minimum-jerk trajectory was employed.\nFor the oscillation, a circular trajectory residing on the -plane was used:\nIn this equation, and are the radius and angular velocity of the circular trajectory, respectively.\nThe values of the impedance parameters were identical to those in Section IV-A ###reference_###, i.e., N/m and Ns/m, respectively (Eq. (6 ###reference_###)).\nFor the orientation, was set to be constant.\nThe results are shown in Figure 3 ###reference_###.\nThe proposed approach enabled a combination of discrete and rhythmic movements of the robot\u2019s end-effector (Figure 3 ###reference_###A) through the superposition of submovement and oscillation (Figure 3 ###reference_###B, 3 ###reference_###C, 3 ###reference_###D).\nIt is important to note that achieving a direct combination of both discrete and rhythmic movements presents a challenge for DMP methodologies, which typically treat these movement types separately [38 ###reference_b38###, 39 ###reference_b39###] (Section II-B ###reference_###)."
106
+ },
107
+ {
108
+ "section_id": "4.3",
109
+ "parent_section_id": "4",
110
+ "section_name": "IV-C Drawing and Erasing Task",
111
+ "text": "For Imitation Learning of , the human-demonstrated data points of were collected with a sampling rate of 333Hz.\nThe velocity and acceleration for Imitation Learning were derived by first-order finite difference of with Gaussian filtering.\nFor the filtering, MATLAB\u2019s smoothdata function with a time window size 165ms was used.\nWith these filtered data, Linear Least Square Regression was used (Section II-B2 ###reference_.SSS2###).\nFor the drawing and erasing tasks, the impedance parameters N/m (respectively N/m) and Ns/m (respectively Ns/m) were used.\nAdditionally, for the erasing task, an oscillation , as described in Eq. (9 ###reference_###) was implemented on the -plane, instead of the -plane.\nThe results are shown in Figure 4 ###reference_###, illustrating the entire process of generating for the drawing and erasing tasks. By merging Imitation Learning with EDA (Figure 4 ###reference_###A, 4 ###reference_###B, 4 ###reference_###C), the drawing (Figure 4 ###reference_###D) and erasing tasks (Figure 4 ###reference_###E) were successfully achieved.\nIt is worth emphasizing that the key to this approach is the combination of Imitation Learning and the modular property of EDA.\nThe trajectory was separately learned with Imitation Learning and directly combined with an oscillation .\nWith modest parameter tuning (i.e., changing the angular velocity ), trajectory used in task IV-B ###reference_### was simply reused.\nUsing appropriate values of mechanical impedances, stability against contact and physical interaction was achieved for both drawing and erasing tasks.\nNote that the simplicity of the approach is not immediately apparent when solely using DMP.\nTo generate discrete and rhythmic movements with DMP requires different types of canonical system and nonlinear forcing term [4 ###reference_b4###, 10 ###reference_b10###].\nHence, one cannot directly combine these two movements.\nMoreover, even if an additional method to merge these movements were devised, mapping this task-space trajectory to joint-space commands would require additional consideration (e.g., managing kinematic redundancy).\nBy merging EDA with DMP, these problems are avoided."
112
+ },
113
+ {
114
+ "section_id": "5",
115
+ "parent_section_id": null,
116
+ "section_name": "Discussion, Limitations and Conclusion",
117
+ "text": "Thanks to the kinematic modularity of EDA, the two tasks of sequencing discrete movements and combining discrete and rhythmic movements were greatly simplified.\nFor the former, the subsequent movement was directly superimposed onto the previous movement without modifying the first submovement.\nFor the latter, the discrete and rhythmic movements were planned separately and then directly superimposed.\nThe authors want to emphasize that the simplicity of this approach is significant and non-trivial.\nFor instance, with DMP, the sequencing task requires additional dynamics to reach the goal .\nThis can lead to practical challenges in real-world robot implementation.\nIn contrast to EDA, for combination tasks, DMP treats discrete and rhythmic movements separately [38 ###reference_b38###, 39 ###reference_b39###, 40 ###reference_b40###, 4 ###reference_b4###], which thereby prevents direct superposition of the two types of movements generated (or learned) by DMP.\nThe robot demonstration of the drawing and erasing task illustrated how to get the best of both motor-primitive approaches.\nDMP and Imitation Learning provided a rigorous mathematical framework to generate (or learn) trajectories of arbitrary complexity.\nEDA and its Norton equivalent network structure provided a modular framework for robot control.\nUsing Imitation Learning to learn the virtual trajectory of EDA, a modular learning strategy for robot control was implemented.\nMerging the methods of EDA with DMP preserved the kinematic modularity of EDA and combined it with favorable behavior during physical interaction.\nThis facilitated the drawing and erasing tasks, which involved physical contact.\nThe key to this approach is the compliant robot behavior that emerges from mechanical impedance, a feature that might be challenging to replicate using position-actuated robots [18 ###reference_b18###].\nHowever, the compliance provided by mechanical impedance led to a non-negligible tracking error between the virtual and actual end-effector trajectories.\nThis issue could be mitigated to some extent by increasing the impedance values in the direction of the desired motion.\nThe modular approach with torque-actuated robots, as presented in this paper, offers significant advantages over position-actuated robots. Utilizing EDA with torque-actuated robots eliminates issues related to inverse kinematics, kinematic singularity, and redundancy. Conversely, employing position-actuated robots for task-space control requires additional methods, such as damped least-square methods [21 ###reference_b21###] and generalized pseudo-inverse matrices [22 ###reference_b22###], which in turn complicates the controller design. 
However, if only position-actuated robots are available, the torque command can be mapped to accelerations using the Forward Dynamics model (i.e., mapping from torque to position command) of the robot.\nAs discussed, a key objective of EDA is to find appropriate choices of virtual trajectory and the corresponding mechanical impedance.\nThe former property was addressed in this paper through the use of DMP, resulting in modular Imitation Learning.\nHowever, selecting appropriate values for mechanical impedance may not be straightforward; the presented values were identified by trial-and-error.\nA systematic method to choose (or learn) the impedance parameters is an avenue of future research [41 ###reference_b41###].\nIn conclusion, by integrating EDA with DMP\u2019s Imitation Learning, we have established a modular learning strategy for robot control. This approach significantly simplifies the generation of a broad spectrum of robot actions."
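A sketch of the workaround mentioned above for position-actuated robots: push the torque command through a forward dynamics model and integrate it to a joint position command (mass_matrix() and bias_forces() are assumed to come from the robot's dynamics library).

```python
def torque_to_position_step(q, dq, tau, dt):
    """One semi-implicit Euler step mapping a torque command to a position command."""
    M = mass_matrix(q)                      # joint-space inertia matrix (assumed helper)
    b = bias_forces(q, dq)                  # Coriolis/centrifugal + gravity (assumed helper)
    ddq = np.linalg.solve(M, tau - b)       # forward dynamics
    dq_next = dq + ddq * dt
    q_next = q + dq_next * dt               # position command for the next control cycle
    return q_next, dq_next
```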
118
+ }
119
+ ],
120
+ "appendix": [],
121
+ "tables": {},
122
+ "image_paths": {
123
+ "1": {
124
+ "figure_path": "2309.15271v2_figure_1.png",
125
+ "caption": "Figure 1: (A) Three Elementary Dynamic Actions (EDA). Submovements (orange box) and oscillations (blue box) correspond to kinematic primitives and mechanical impedances (green box) manage physical interaction. (B) EDA combined using a Norton equivalent network model. The virtual trajectory \ud835\udc310\u2062(t)subscript\ud835\udc310\ud835\udc61\\mathbf{x}_{0}(t)bold_x start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_t ) (yellow box) consists of submovements and/or oscillations, and mechanical impedance \ud835\udc19\ud835\udc19\\mathbf{Z}bold_Z (green box) regulates interactive dynamics.",
126
+ "url": "http://arxiv.org/html/2309.15271v2/x1.png"
127
+ },
128
+ "2": {
129
+ "figure_path": "2309.15271v2_figure_2.png",
130
+ "caption": "Figure 2: A sequence of discrete movements using a KUKA LBR iiwa14. (A, B) Time-frames of the robot movement towards the (A) original (old) and (B) new goal location. Start \ud835\udc291subscript\ud835\udc291\\mathbf{p}_{1}bold_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, original goal \ud835\udc292subscript\ud835\udc292\\mathbf{p}_{2}bold_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT and new goal \ud835\udc293subscript\ud835\udc293\\mathbf{p}_{3}bold_p start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT are depicted as orange markers. The origin of the robot\u2019s coordinate frame is attached at the robot base, depicted as a green marker. (C) The end-effector trajectory \ud835\udc29\u2062(t)\ud835\udc29\ud835\udc61\\mathbf{p}(t)bold_p ( italic_t ) (black filled line) and the virtual trajectory (black dashed line) \ud835\udc290\u2062(t)subscript\ud835\udc290\ud835\udc61\\mathbf{p}_{0}(t)bold_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_t ) depicted on the Y\u2062Z\ud835\udc4c\ud835\udc4dYZitalic_Y italic_Z-plane. (D, E) Time t\ud835\udc61titalic_t vs. end-effector velocity \ud835\udc29\u02d9\u2062(t)\u02d9\ud835\udc29\ud835\udc61\\dot{\\mathbf{p}}(t)over\u02d9 start_ARG bold_p end_ARG ( italic_t ) along the (D) Y\ud835\udc4cYitalic_Y-coordinate and (E) Z\ud835\udc4dZitalic_Z-coordinate. Black filled lines show the end-effector velocity, which was derived by a first-order finite difference of \ud835\udc29\u2062(t)\ud835\udc29\ud835\udc61\\mathbf{p}(t)bold_p ( italic_t ) with a sampling interval of 3ms. The two unimodal speed profiles filled in orange depict the two submovements \ud835\udc290,s\u2062u\u2062b\u20621\u2062(t)subscript\ud835\udc290\ud835\udc60\ud835\udc62\ud835\udc4f1\ud835\udc61\\mathbf{p}_{0,sub1}(t)bold_p start_POSTSUBSCRIPT 0 , italic_s italic_u italic_b 1 end_POSTSUBSCRIPT ( italic_t ) (left) and \ud835\udc290,s\u2062u\u2062b\u20622\u2062(t)subscript\ud835\udc290\ud835\udc60\ud835\udc62\ud835\udc4f2\ud835\udc61\\mathbf{p}_{0,sub2}(t)bold_p start_POSTSUBSCRIPT 0 , italic_s italic_u italic_b 2 end_POSTSUBSCRIPT ( italic_t ) (right) (Eq. (7)). As shown in (D) and (E), the second submovement is directly superimposed, without any modification of the first submovement. Parameters of the submovements (Section IV-A): \ud835\udc291=[0.6735,0.1396,0.2048]subscript\ud835\udc2910.67350.13960.2048\\mathbf{p}_{1}=[0.6735,0.1396,0.2048]bold_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = [ 0.6735 , 0.1396 , 0.2048 ]m, \ud835\udc292=[0.6735,0.3396,0.4048]subscript\ud835\udc2920.67350.33960.4048\\mathbf{p}_{2}=[0.6735,0.3396,0.4048]bold_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = [ 0.6735 , 0.3396 , 0.4048 ]m, \ud835\udc293=[0.6735,0.4396,0.3048]subscript\ud835\udc2930.67350.43960.3048\\mathbf{p}_{3}=[0.6735,0.4396,0.3048]bold_p start_POSTSUBSCRIPT 3 end_POSTSUBSCRIPT = [ 0.6735 , 0.4396 , 0.3048 ]m, T1=T2=2.0subscript\ud835\udc471subscript\ud835\udc4722.0T_{1}=T_{2}=2.0italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = italic_T start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 2.0s, ti=0.5subscript\ud835\udc61\ud835\udc560.5t_{i}=0.5italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 0.5s, tg=1.5subscript\ud835\udc61\ud835\udc541.5t_{g}=1.5italic_t start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT = 1.5s.",
131
+ "url": "http://arxiv.org/html/2309.15271v2/"
132
+ },
133
+ "3": {
134
+ "figure_path": "2309.15271v2_figure_3.png",
135
+ "caption": "Figure 3: A combination of discrete and rhythmic movements using a KUKA LBR iiwa14. (A) Elements of the robot movement. The origin of the robot\u2019s coordinate frame is attached at the robot base, depicted as green marker. Orange markers depict \ud835\udc291subscript\ud835\udc291\\mathbf{p}_{1}bold_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT (left) and \ud835\udc292subscript\ud835\udc292\\mathbf{p}_{2}bold_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT (right) of \ud835\udc290,s\u2062u\u2062b\u2062(t)subscript\ud835\udc290\ud835\udc60\ud835\udc62\ud835\udc4f\ud835\udc61\\mathbf{p}_{0,sub}(t)bold_p start_POSTSUBSCRIPT 0 , italic_s italic_u italic_b end_POSTSUBSCRIPT ( italic_t ). Blue line depicts \ud835\udc290,o\u2062s\u2062c\u2062(t)subscript\ud835\udc290\ud835\udc5c\ud835\udc60\ud835\udc50\ud835\udc61\\mathbf{p}_{0,osc}(t)bold_p start_POSTSUBSCRIPT 0 , italic_o italic_s italic_c end_POSTSUBSCRIPT ( italic_t ) (Eq. (8), (9)). (B) The end-effector trajectory \ud835\udc29\u2062(t)\ud835\udc29\ud835\udc61\\mathbf{p}(t)bold_p ( italic_t ) (black filled line) and the virtual trajectory (black dashed line) \ud835\udc290\u2062(t)subscript\ud835\udc290\ud835\udc61\\mathbf{p}_{0}(t)bold_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_t ) depicted on the Y\u2062Z\ud835\udc4c\ud835\udc4dYZitalic_Y italic_Z-plane. Multiple submovements that move between \ud835\udc291subscript\ud835\udc291\\mathbf{p}_{1}bold_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \ud835\udc292subscript\ud835\udc292\\mathbf{p}_{2}bold_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT were generated. (C, D) Time t\ud835\udc61titalic_t vs. end-effector trajectory \ud835\udc29\u2062(t)\ud835\udc29\ud835\udc61\\mathbf{p}(t)bold_p ( italic_t ) along (C) Y\ud835\udc4cYitalic_Y-coordinate and (D) Z\ud835\udc4dZitalic_Z-coordinate. Black dashed lines depict \ud835\udc290\u2062(t)subscript\ud835\udc290\ud835\udc61\\mathbf{p}_{0}(t)bold_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_t ). Blue lines highlight the duration of a movement without any discrete movement. Parameters of submovement and oscillation (Eq. (9)): \ud835\udc291=[0.5735,0.0,0.5048]subscript\ud835\udc2910.57350.00.5048\\mathbf{p}_{1}=[0.5735,0.0,0.5048]bold_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = [ 0.5735 , 0.0 , 0.5048 ]m, \ud835\udc292=[0.5735,0.35,0.5048]subscript\ud835\udc2920.57350.350.5048\\mathbf{p}_{2}=[0.5735,0.35,0.5048]bold_p start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = [ 0.5735 , 0.35 , 0.5048 ]m, T1=1.5subscript\ud835\udc4711.5T_{1}=1.5italic_T start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT = 1.5s, r=0.03\ud835\udc5f0.03r=0.03italic_r = 0.03m, \u03c90=3\u2062\u03c0subscript\ud835\udf1403\ud835\udf0b\\omega_{0}=3\\piitalic_\u03c9 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 3 italic_\u03c0rad/s.",
136
+ "url": "http://arxiv.org/html/2309.15271v2/x3.png"
137
+ },
138
+ "4": {
139
+ "figure_path": "2309.15271v2_figure_4.png",
140
+ "caption": "Figure 4: The drawing and erasing task using a KUKA LBR iiwa. A green pen was used for the drawing. (A) Data collection of human-demonstrated \ud835\udc29(d)\u2062(t)superscript\ud835\udc29\ud835\udc51\ud835\udc61\\mathbf{p}^{(d)}(t)bold_p start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t ) which was to be drawn (Section III-D). The end-effector trajectory along X\ud835\udc4bXitalic_X- and Y\ud835\udc4cYitalic_Y-coordinates were collected. (B) Time t\ud835\udc61titalic_t vs. X\ud835\udc4bXitalic_X-coordinate (black line) and Y\ud835\udc4cYitalic_Y-coordinate (purple line) of \ud835\udc29(d)\u2062(t)superscript\ud835\udc29\ud835\udc51\ud835\udc61\\mathbf{p}^{(d)}(t)bold_p start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t ) (top row), \ud835\udc29\u02d9(d)\u2062(t)superscript\u02d9\ud835\udc29\ud835\udc51\ud835\udc61\\dot{\\mathbf{p}}^{(d)}(t)over\u02d9 start_ARG bold_p end_ARG start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t ) (middle row), \ud835\udc29\u00a8(d)\u2062(t)superscript\u00a8\ud835\udc29\ud835\udc51\ud835\udc61\\ddot{\\mathbf{p}}^{(d)}(t)over\u00a8 start_ARG bold_p end_ARG start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t ) (bottom row). With a sampling rate of 333Hz, a first-order finite difference method was used to calculate the velocity and acceleration (left column). These trajectories were Gaussian filtered (right column) using MATLAB\u2019s smoothdata function with a time window size of 165ms. (C) The resulting trajectory \ud835\udc29(d)\u2062(t)superscript\ud835\udc29\ud835\udc51\ud835\udc61\\mathbf{p}^{(d)}(t)bold_p start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t ) (black dashed line) generated with Imitation learning. (D) The drawing task was achieved by setting \ud835\udc290\u2062(t)subscript\ud835\udc290\ud835\udc61\\mathbf{p}_{0}(t)bold_p start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT ( italic_t ) as \ud835\udc29(d)\u2062(t)superscript\ud835\udc29\ud835\udc51\ud835\udc61\\mathbf{p}^{(d)}(t)bold_p start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t ). The black dashed line depicts \ud835\udc290,d\u2062m\u2062p\u2062(t)subscript\ud835\udc290\ud835\udc51\ud835\udc5a\ud835\udc5d\ud835\udc61\\mathbf{p}_{0,dmp}(t)bold_p start_POSTSUBSCRIPT 0 , italic_d italic_m italic_p end_POSTSUBSCRIPT ( italic_t ) (or \ud835\udc29(d)\u2062(t)superscript\ud835\udc29\ud835\udc51\ud835\udc61\\mathbf{p}^{(d)}(t)bold_p start_POSTSUPERSCRIPT ( italic_d ) end_POSTSUPERSCRIPT ( italic_t )), the green line depicts \ud835\udc29\u2062(t)\ud835\udc29\ud835\udc61\\mathbf{p}(t)bold_p ( italic_t ). (E) The erasing task was achieved by superimposing an oscillation \ud835\udc290,o\u2062s\u2062c\u2062(t)subscript\ud835\udc290\ud835\udc5c\ud835\udc60\ud835\udc50\ud835\udc61\\mathbf{p}_{0,osc}(t)bold_p start_POSTSUBSCRIPT 0 , italic_o italic_s italic_c end_POSTSUBSCRIPT ( italic_t ) onto a time-reversed \ud835\udc290,d\u2062m\u2062p\u2062(t)subscript\ud835\udc290\ud835\udc51\ud835\udc5a\ud835\udc5d\ud835\udc61\\mathbf{p}_{0,dmp}(t)bold_p start_POSTSUBSCRIPT 0 , italic_d italic_m italic_p end_POSTSUBSCRIPT ( italic_t ). The green pen was replaced by a rectangular eraser. The green line depicts \ud835\udc29\u2062(t)\ud835\udc29\ud835\udc61\\mathbf{p}(t)bold_p ( italic_t ). For (C, D, E), trajectories were plotted in MATLAB and overlapped onto the drawing/erasing table. 
Parameters of DMP: \u03b1z=1000subscript\ud835\udefc\ud835\udc671000\\alpha_{z}=1000italic_\u03b1 start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT = 1000, \u03b2z=250subscript\ud835\udefd\ud835\udc67250\\beta_{z}=250italic_\u03b2 start_POSTSUBSCRIPT italic_z end_POSTSUBSCRIPT = 250, N=100\ud835\udc41100N=100italic_N = 100, P=2331\ud835\udc432331P=2331italic_P = 2331, \u03c4=7\ud835\udf0f7\\tau=7italic_\u03c4 = 7, ci=exp\u2061(\u2212\u03b1s\u2062(i\u22121)/(N\u22121))subscript\ud835\udc50\ud835\udc56subscript\ud835\udefc\ud835\udc60\ud835\udc561\ud835\udc411c_{i}=\\exp(-\\alpha_{s}(i-1)/(N-1))italic_c start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = roman_exp ( - italic_\u03b1 start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ( italic_i - 1 ) / ( italic_N - 1 ) ), hi=1/(ci+1\u2212ci)2subscript\u210e\ud835\udc561superscriptsubscript\ud835\udc50\ud835\udc561subscript\ud835\udc50\ud835\udc562h_{i}=1/(c_{i+1}-c_{i})^{2}italic_h start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 1 / ( italic_c start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT - italic_c start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT for i\u2208[1,2,\u22ef,N\u22121]\ud835\udc5612\u22ef\ud835\udc411i\\in[1,2,\\cdots,N-1]italic_i \u2208 [ 1 , 2 , \u22ef , italic_N - 1 ], cN=exp\u2061(\u2212\u03b1s)subscript\ud835\udc50\ud835\udc41subscript\ud835\udefc\ud835\udc60c_{N}=\\exp{(-\\alpha_{s})}italic_c start_POSTSUBSCRIPT italic_N end_POSTSUBSCRIPT = roman_exp ( - italic_\u03b1 start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT ), hN=hN\u22121subscript\u210e\ud835\udc41subscript\u210e\ud835\udc411h_{N}=h_{N-1}italic_h start_POSTSUBSCRIPT italic_N end_POSTSUBSCRIPT = italic_h start_POSTSUBSCRIPT italic_N - 1 end_POSTSUBSCRIPT. Parameters of oscillation: r=0.03\ud835\udc5f0.03r=0.03italic_r = 0.03m, \u03c90=2\u2062\u03c0subscript\ud835\udf1402\ud835\udf0b\\omega_{0}=2\\piitalic_\u03c9 start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT = 2 italic_\u03c0rad/s.",
141
+ "url": "http://arxiv.org/html/2309.15271v2/"
142
+ }
143
+ },
144
+ "validation": true,
145
+ "references": [],
146
+ "url": "http://arxiv.org/html/2309.15271v2"
147
+ }
20240322/2310.00354v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2310.10065v2.json ADDED
@@ -0,0 +1,150 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Bridging BRC-20 to Ethereum",
3
+ "abstract": "In this paper, we design, implement, and (partially-) evaluate a lightweight bridge (as a type of middleware) to connect the Bitcoin and Ethereum networks that were heterogeneously uncontactable before. Inspired by the recently introduced Bitcoin Request Comment (BRC-20) standard, we leverage the flexibility of Bitcoin inscriptions by embedding editable operations within each satoshi and mapping them to programmable Ethereum smart contracts. A user can initialize his/her requests from the Bitcoin network, subsequently triggering corresponding actions on the Ethereum network. We validate the lightweight nature of our solution and its ability to facilitate secure and seamless interactions between two heterogeneous ecosystems.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The emergence of Bitcoin revolutionized the field of financial technology by introducing a decentralized network for value transactions. Ethereum further advanced this concept by bringing smart contracts and decentralized applications (DApps). As of July 2023, the market capitalization of Bitcoin is approximately US$584.97 billion, while Ethereum stands at around US$223.32 billion (CoinMarketCap). These cryptocurrencies have not only created significant value themselves but have also paved the way for the development of numerous upper-layer tokens and DApps, which number in the thousands. However, the structural differences between Bitcoin\u2019s UTXO model and Ethereum\u2019s account model have resulted in isolated ecosystems. Users are unable to freely transfer tokens between these heterogeneous blockchains and often rely on external intermediaries, such as centralized exchanges (CEX) or decentralized exchanges (DEX), which come with high costs and limitations. This lack of interoperability hinders the widespread adoption and evolution of these technologies, limiting their full potential.\nExisting solutions have made efforts to facilitate interoperability among different blockchains. They often rely on various cryptographic techniques (e.g., zero-knowledge proofs [1 ###reference_b1###] and hash-locks [2 ###reference_b2###][3 ###reference_b3###]), external hardware (e.g., TEEs [4 ###reference_b4###][5 ###reference_b5###]), or reconstructing the entire system (e.g., Polkadot [6 ###reference_b6###], Cosmos [7 ###reference_b7###]). However, these approaches come with explicit limitations. Cryptographic approaches are computationally intensive and may introduce significant overhead. External hardware solutions like TEEs can be complex and difficult to implement. Reconstruction of the system requires extensive changes, bringing additional assumptions and complexities. As a result, current solutions suffer from various degrees of impracticality, which impedes their wide adoption.\nContributions. To fill these gaps, we propose an innovative lightweight middleware protocol designed to bridge the gap between Bitcoin and Ethereum. The middleware takes advantage of BRC-20 [8 ###reference_b8###][9 ###reference_b9###], an experimental standard for digital tokens on the Bitcoin network akin to Ethereum\u2019s ERC-20. Our idea is to interpret the BRC-20 operations inscribed on Bitcoin\u2019s blockchain (a.k.a. Bitcoin inscriptions [10 ###reference_b10###]) and reflect them on the Ethereum network, effectively extending Bitcoin\u2019s functionalities within Ethereum\u2019s EVM and enabling the possibility of integrating Bitcoin assets in DeFi applications [11 ###reference_b11###][12 ###reference_b12###]. We approach the goal by completing the following steps:\nWe present a lightweight middleware, MidasTouch (Sec.III ###reference_###), designed to bridge the Bitcoin network and the Ethereum network. MidasTouch enables seamless communication (mainly from Bitcoin to Ethereum) between these networks, empowering users to interact with Ethereum smart contracts through Bitcoin inscriptions defined in the recent BRC-20 standard.\nWe have developed a preliminary version of MidasTouch (Sec.IV ###reference_###) to demonstrate its functionality. The prototype includes the implementation of functional methods on both the Bitcoin and Ethereum sides and the featured events in the intermediate middleware, providing detailed insights into key operations.\nWe conduct a partial evaluation to assess the effectiveness and efficiency of MidasTouch (Sec.V ###reference_###), focusing specifically on smart contract-related operations on the Ethereum testnet. Our evaluations are based on three key aspects: scalability and performance with varying committee sizes, gas usage for different contract functionalities, and the frequency of validator processing for requests. The results offer insights into the system\u2019s behavior under different scenarios and align with intuitive expectations. Additionally, we discuss the security aspects and potential limitations.\nWe emphasize two additional aspects of our design:\nU shape. The workflow within our design takes the shape of a \u201cU\u201d: users initiate their requests by inputting inscriptions on the Bitcoin network. The action on inscriptions triggers a state transition within Ethereum. Eventually, the Ethereum contract concludes by furnishing a receipt to Bitcoin, serving as a record for the settlement process.\nLightweight. The operation of the validator-maintained middleware does not inherently demand the involvement of supplementary participants. The validators responsible for maintaining our middleware can either constitute the same group as, or form a subset of, the Ethereum committee.\n\u2660 We metaphorically refer to the task achieved by our middleware as MidasTouch (cf. our title), drawing inspiration from the tale in Greek mythology: everything King Midas touched turned to gold, symbolizing a valuable connectivity effect."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Before Construction",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Building Blocks",
+ "text": "BRC-20. This Bitcoin-native standard [8 ###reference_b8###][9 ###reference_b9###] parallels the Ethereum ERC-20 token standard [13 ###reference_b13###] and signifies a significant shift within the Bitcoin ecosystem, particularly with the emergence of Bitcoin Ordinals. Bitcoin Ordinals revolutionize Bitcoin transactions by assigning an index to each Satoshi (the smallest unit of Bitcoin, 0.00000001 BTC) based on its mining order. These indices can be utilized for various purposes, such as unique identifiers or metadata, thereby unlocking new possibilities, including Non-Fungible Tokens (NFTs) [14 ###reference_b14###][15 ###reference_b15###]. Once a Satoshi has been inscribed (via TapScript, up to around 4MB), it can be utilized to create a BRC-20 token. In essence, the BRC-20 standard enables three primary operations: deploy (creation of a new token type), mint (increasing the supply of the created token), and transfer (trading tokens). We provide a brief overview of each function below, with detailed information available in [16 ###reference_b16###]. These functions collectively enable a simplified NFT implementation over the Bitcoin network, albeit with some limitations in terms of extensibility.\nDue to the shortage of formal research on BRC-20, Rodarmor [17 ###reference_b17###] was one of the first to introduce a scheme for assigning serial numbers to Bitcoin satoshis, and a comprehensive introduction to ordinal theory can be found in [18 ###reference_b18###]. Additionally, Binance Research has published several pioneering reports [9 ###reference_b9###][19 ###reference_b19###][20 ###reference_b20###] that explore the development of BRC-20. Bertucci [21 ###reference_b21###] conducted an early analysis of transaction fees related to ordinal inscriptions.\nSmart contract. A smart contract (SC) is a distinct form of contract where the agreement\u2019s terms are directly encoded into executable code. Operating as a self-contained white box, a smart contract guarantees the synchronization of input and output, effectively eliminating the reliance on trustworthy third-party intermediaries [22 ###reference_b22###]. Deployed primarily on blockchain platforms such as Ethereum, smart contracts are executed automatically once the predetermined conditions encoded within the code are fulfilled. The versatility of smart contracts enables automation across diverse domains, spanning from financial transactions [23 ###reference_b23###] and governance systems [24 ###reference_b24###] to decentralized organizations [25 ###reference_b25###] and beyond. With their ability to enforce transparent and trustless transactions, smart contracts offer enhanced efficiency, security, and persistence.\nEthereum token standards. Tokens play a vital role in incentivizing users and developers within blockchain ecosystems. These tokens adhere to specific standards, which define the methods for creating, deploying, and issuing new tokens. Ethereum, with its robust smart contract capabilities, has established itself as a leader in token standards [26 ###reference_b26###], driving versatile applications within its ecosystem. The ERC-20 fungible token standard [13 ###reference_b13###] has gained significant traction, leading to the proliferation of ICOs [27 ###reference_b27###] and a flourishing token market [28 ###reference_b28###]. However, different blockchain ecosystems employ incompatible token standards. For instance, a token adhering to the BRC-20 standard on the Bitcoin network cannot be utilized on the Ethereum network. This limitation has motivated us to explore the construction of a potential connection between these disparate ecosystems.\nBitcoin standards.\nBitcoin, operating as a standalone blockchain, lacks native token standards akin to Ethereum\u2019s ERC-20. However, proposals for tokenization methods exist within the Bitcoin Improvement Proposals (BIPs) [29 ###reference_b29###] (e.g., SegWit [30 ###reference_b30###][31 ###reference_b31###], Taproot [32 ###reference_b32###][33 ###reference_b33###]). Establishing a standard on Bitcoin is more demanding than on Ethereum, given the limited extension space of Bitcoin\u2019s protocol. BRC-20, an external solution, predominantly handles complex functionality off-chain, keeping on-chain activity minimal."
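To make the three BRC-20 operations concrete, the sketch below shows representative inscription payloads in the publicly used JSON convention (fields p, op, tick). The ticker "demo" and all numeric values are hypothetical examples, and the exact fields accepted by any given indexer may differ.

```python
import json

# Illustrative BRC-20 inscription payloads in the public convention
# (p = protocol, op = operation, tick = token ticker). The ticker "demo"
# and the numeric values are hypothetical examples.
deploy = {"p": "brc-20", "op": "deploy", "tick": "demo", "max": "21000000", "lim": "1000"}
mint = {"p": "brc-20", "op": "mint", "tick": "demo", "amt": "1000"}
transfer = {"p": "brc-20", "op": "transfer", "tick": "demo", "amt": "100"}

for payload in (deploy, mint, transfer):
    # Each payload is inscribed on a satoshi as a small JSON text blob.
    print(json.dumps(payload))
```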
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Concurrent Solutions",
+ "text": "Interoperability in blockchain.\nPolkadot [6 ###reference_b6###] enables the interconnection of subnetworks (the relay chains) through the cross-chain message passing protocol (XCMP). Within this context, a relay is a smart contract residing in a target blockchain that operates as a lightweight client of a source blockchain. Cosmos [7 ###reference_b7###] achieves cross-chain communications via the inter-blockchain communication protocol (IBC) [34 ###reference_b34###]. IBC is designed to consist of two major layers that establish secure connections for data transportation (a.k.a. TAO) and define the way of packaging data (the APP layer). However, these solutions are restricted to facilitating interoperability among blockchains within the same ecosystem. Cacti [35 ###reference_b35###] is an integral part of the Hyperledger project. The scheme relies on a network of interoperable validators that validate cross-chain transactions and are entrusted with the task of signing them. To ensure the validity of transactions, a consensus among a quorum of validators is required for their successful signing. Hermes [36 ###reference_b36###] is a middleware for blockchain interoperability that is built on the Open Digital Asset Protocol (ODAP) (recently merged into SATP [37 ###reference_b37###]). The protocol draws inspiration from the two-phase commit protocol (2PC) [38 ###reference_b38###] and goes beyond it by incorporating a flexible log storage API that provides multiple storage options, including local, cloud, and on-chain storage. CheaPay [39 ###reference_b39###] and Herdius [3 ###reference_b3###] focus on payment channel networks that enable off-chain settlement of transactions between blockchains by utilizing the Hash Time-Lock Contract (HTLC) scheme to ensure atomic swaps of assets [40 ###reference_b40###][41 ###reference_b41###]. Tesseract [4 ###reference_b4###] is an exchange protocol that operates in real time and utilizes trusted hardware as a reliable relay. It facilitates the tokenization of assets and enables the pegging of these assets to cryptocurrencies. Similar solutions also leverage TEEs [5 ###reference_b5###][42 ###reference_b42###] to perform cross-chain transactions. Several studies focus on cross-chain bridges [43 ###reference_b43###][44 ###reference_b44###]. Further classifications of interoperability approaches can be found in [45 ###reference_b45###][46 ###reference_b46###].\n(Table I ###reference_### summarizes mainstream interoperable blockchain projects.)\nSelection of technical routes. The presence of reliable witnesses is crucial for the successful implementation of a dependable interoperable protocol, especially in ensuring the all-or-nothing settlement of digital assets, also known as atomic swaps. Existing solutions, as we have discovered through our investigation, rely on either trusted parties, such as relayers and validators, or automated algorithms/machines like smart contracts, middleware, TEEs, and hash-locks, to achieve reliable witnesses. However, relying on trusted parties poses a significant risk of compromise. Therefore, we have chosen the alternative route. Nonetheless, hash-locks, a common construct in atomic cross-chain swap protocols (e.g., [47 ###reference_b47###]), impose strict network requirements, while TEE-based solutions tend to be complex. As a result, we have been motivated to develop a contract-based middleware that serves as an efficient bridge between Bitcoin and Ethereum, providing the desired functionality."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "II-C Threat Model and Assumption",
+ "text": "Blockchain model. We assume the blockchains, applicable to both the Bitcoin and Ethereum chains, consist of a group of nodes that operate the system. In our analysis, we consider that a fraction of nodes may behave arbitrarily, provided the total number of these unfaithful nodes stays below the security threshold of the consensus protocols (e.g., less than 50% in PoS/PoW settings). The blockchain system adheres to the robustness assumptions established by previous research [48 ###reference_b48###].\nConsistency. Once an honest node commits transaction tx1 before tx2, no honest node will ever commit tx2 before tx1.\nLiveness. Once a valid transaction is submitted to an honest node, it will eventually be committed on-chain.\nSmart contract model. By integrating the fundamental functionalities offered by the underlying blockchain systems, we present a simplified abstraction of the key features to simulate a smart contract.\nContract deployment. The contract is deployed on-chain, establishing its presence within the blockchain network.\nState update. The state undergoes transitions triggered by input transactions, evolving as new transactions are packed into newly proposed blocks.\nState consensus. Blockchain maintainers execute consensus to reach an agreement on the global view of the state, ensuring consistency among distributed nodes.\nState query. Given a confirmed state, users can retrieve specific transactions and blocks at a given height for analysis or reference.\nCryptography primitives. We require conventional unforgeability for digital signatures (including multi-signatures [49 ###reference_b49###]) and collision-resistant hash functions [50 ###reference_b50###].\nGeneral honesty assumption.\nWe assume that the majority of group members in our described systems, whether they belong to the blockchain networks (such as Bitcoin and Ethereum) or the random validator committee, will faithfully adhere to their designated tasks."
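The four-step smart contract model above can be summarized in a minimal, self-contained sketch; the class and function names are ours, and simple majority voting stands in for the (unspecified) consensus protocol.

```python
# Minimal sketch of the smart-contract abstraction in the threat model:
# contract deployment, state update, state consensus, and state query.
# Class/function names are illustrative, not part of the protocol spec.

class ToyContract:
    def __init__(self, address: str):
        self.address = address           # contract deployment: presence on-chain
        self.state = {}                  # token balances keyed by address

    def update(self, tx: dict) -> None:
        # state update: a transition triggered by an input transaction
        self.state[tx["to"]] = self.state.get(tx["to"], 0) + tx["amount"]

    def query(self, addr: str) -> int:
        # state query: retrieve the confirmed balance for analysis
        return self.state.get(addr, 0)

def consensus(votes: list) -> bool:
    # state consensus: honest-majority agreement on the proposed update
    return sum(votes) > len(votes) // 2

c = ToyContract("0xToy")                 # hypothetical contract address
if consensus([True, True, False]):       # 2-of-3 honest majority
    c.update({"to": "btc-addr-1", "amount": 5})
print(c.query("btc-addr-1"))             # -> 5
```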
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Middleware Construction",
+ "text": ""
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Technical Challenges",
+ "text": "Challenge-I: Managing different data formats within heterogeneous blockchains is a non-trivial task. The primary challenge lies in reconciling the stateless UTXO-based model of Bitcoin with Ethereum\u2019s stateful, account-based approach. To address this, we propose leveraging the BRC-20 standard as a middleware to establish a lightweight bridge.\nIn our implementation, BRC-20 utilizes inscriptions to record the transitions of UTXO states. These inscriptions serve as verifiable proofs and are used to trigger smart contract functions/events. We incorporate a series of operation indicators within the inscriptions and provide corresponding events in the smart contract. This allows users to initiate transactions on the Bitcoin network, which in turn triggers changes in the inscriptions and subsequent state transitions in the smart contracts on the Ethereum network.\nChallenge-II: Determining which side should bear the deduction of fees is a debatable design decision. State-of-the-art cross-chain solutions often overlook the costs involved in exchanges, which are a crucial concern for users.\nIn our approach, the actions are initiated from the Bitcoin side, and the actual transactions are triggered on this network, leading to corresponding state transitions on the Ethereum side. Consequently, users on the Bitcoin side are responsible for bearing the associated exchange fees, which are separate from (equiv. in addition to) the basic transaction fees incurred during the consensus procedures.\nChallenge-III: Implementing cross-chain transactions may pose significant complexity. Existing schemes often rely on a series of complex cryptographic operations (e.g., [4 ###reference_b4###]) or the reconstruction of intricate systems (e.g., [6 ###reference_b6###][7 ###reference_b7###]). Unfortunately, this level of complexity renders such systems impractical for widespread adoption and use.\nRather than introducing complex dependencies, our approach focuses on establishing a lightweight middleware that seamlessly bridges actions initiated on the Bitcoin side with state transitions on the Ethereum side. Our implementation leverages the native editable field in Bitcoin, as defined by the BRC-20 standard, and programmable functions written in smart contracts. MidasTouch works harmoniously with both blockchains, ensuring smooth interoperability without the need for additional intricate dependencies."
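As an illustration of how Challenge-I is addressed, the hedged sketch below decodes an inscription's operation indicator and maps it to a contract interface. The dispatch table and the Solidity-style function signatures are hypothetical, not the paper's actual ABI.

```python
# Hedged sketch of Challenge-I: map the operation indicator carried in a
# BRC-20 inscription to a smart-contract function selector. The dispatch
# table and the function signatures are hypothetical illustrations.

DISPATCH = {
    "deploy": "deployToken(string,uint256,uint256)",
    "mint": "mintToken(string,uint256)",
    "transfer": "transferToken(string,address,uint256)",
}

def to_contract_call(inscription: dict) -> tuple:
    """Translate a parsed inscription into (function signature, arguments)."""
    op = inscription["op"]
    if op not in DISPATCH:
        raise ValueError(f"unsupported operation indicator: {op}")
    args = {k: v for k, v in inscription.items() if k not in ("p", "op")}
    return DISPATCH[op], args

print(to_contract_call({"p": "brc-20", "op": "mint", "tick": "demo", "amt": "1000"}))
```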
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Warm-up Construction",
+ "text": "Roles.\nThe protocol includes four roles: transaction originator, contract owner, validator, and operator.\nTransaction originator (Bitcoin). The Bitcoin transaction originators are the users initiating transactions on the Bitcoin network. Their main role is to inscribe the transaction with specific information regarding Ethereum contract interactions, such as the contract address and operation data. This inscription is embedded within Bitcoin transactions and is scanned by validators.\nContract owner (Ethereum). The Ethereum contract owners control the smart contracts on the Ethereum network with which the middleware protocol interacts. They define the contract operations that can be invoked through inscriptions on the Bitcoin network. Furthermore, they monitor the state updates broadcast by validators.\nValidator (middleware). The validators are responsible for the accurate execution of the middleware protocol. Their duties include registering themselves on the list, validating transactions from the Bitcoin network, and managing the update of Ethereum contract states. They also participate in consensus processes. Notably, validators have to deposit a certain amount of funds in the contract.\nOperator (middleware). The operators are responsible for setting up and maintaining the middleware protocol. They set the system parameters, such as the size of the validator committee, the block intervals for consensus and state updates, the rules for multi-signature validation, and other security features. They also take care of system upgrades.\nNote that in a typical case, the middleware validators, operators, and the Ethereum contract owner can be played by the same group of users, wherein settings established by the middleware operator are commonly predefined in the genesis or a designated checkpoint block.\nSystem overview (Fig.1 ###reference_###).\nFor the initial setup, the contract developer deploys a domain-specific smart contract on Ethereum. In this protocol, each token does not have its own smart contract. Smart contracts are organized based on functionality, where each functionality (such as the auction function in DeFi) has its own smart contract. Each contract records the state information of all tokens that use that particular functionality.\n###figure_1### Then, \u2460 any Bitcoin user who is keen to become a validator of the middleware layer deposits a certain amount of ETH to this smart contract with a valid Ethereum address that is bound to his/her Bitcoin address. Once enough validators are registered (the validator committee size is pre-defined and can range from one to many), the system initialization is complete. The committee takes charge of all interactions with the related smart contracts, on behalf of the Bitcoin transaction originators, who thus do not need to own any Ethereum addresses. A user sends an inscribed satoshi that invokes one of the supported functions (equiv. operations). This sat should contain a script (TapScript). The functions, formatted as func_signature, need to match the unified interfaces defined in the corresponding smart contracts. \u2461 The committee consistently monitors the Bitcoin network and periodically collects the inscribed satoshis (acting as an explorer), sorts them by timestamp, and constructs a sat bundle. \u2462 For every \u03b5 increments of block height in the Bitcoin network, the committee engages in a consensus process on the sat bundle. Following this, it invokes the respective smart contracts on the Ethereum network using the contract addresses corresponding to each sat bundle. \u2463 The protocol employs a multi-signature approach to update the state in the smart contracts. Auditing is performed to ensure that penalties are properly applied (by slashing the deposit) to any misbehavior. Meanwhile, the gas fee awarded to validators is deducted from the satoshi bundle at a certain percentage, e.g., 5%. Note that the gas fee is calculated for each individual satoshi. \u2464 Finally, the committee gathers the emitted events from the invoked contracts and broadcasts the post-operation inscriptions on the Bitcoin network. These broadcasts act as receipts for the executed bundle, signaling the completion of the originated inscription requests."
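The bundling steps \u2461-\u2462 of the overview can be sketched as follows; the block and inscription structures are simplified stand-ins, and the interval value is an assumed example.

```python
# Hedged sketch of steps 2-3: collect inscriptions from Bitcoin blocks and
# group them into timestamp-sorted bundles every EPSILON block heights.
# Block/inscription structures and the interval value are illustrative.

EPSILON = 3  # consensus/update interval in Bitcoin block heights (assumed)

def build_bundles(blocks):
    """Group valid inscriptions into timestamp-sorted bundles per interval."""
    bundles, current = [], []
    for height, block in enumerate(blocks, start=1):
        current.extend(i for i in block["inscriptions"] if i.get("p") == "brc-20")
        if height % EPSILON == 0:        # consensus point every EPSILON blocks
            bundles.append(sorted(current, key=lambda i: i["timestamp"]))
            current = []
    return bundles

blocks = [
    {"inscriptions": [{"p": "brc-20", "op": "mint", "timestamp": 2}]},
    {"inscriptions": []},
    {"inscriptions": [{"p": "brc-20", "op": "transfer", "timestamp": 1}]},
]
print(build_bundles(blocks))  # one bundle, sorted by timestamp
```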
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Detailed Construction",
+ "text": "Validator registration. The MidasTouch protocol starts by defining core parameters and then proceeds to the validator registration phase. Validators are required to register and deposit a specified amount of ETH into a designated deposit contract. The registration is inscribed on the Bitcoin network, after which the newly registered validator is added to the validator set. The size and requirements of the validation committee can vary significantly based on the desired level of system security. For instance, a system requiring high security might necessitate a large committee and more intricate consensus mechanisms. Conversely, a system prioritizing efficiency over security might operate with a smaller committee or even a single validator.\nInscription-contract interactions. Once the validator committee is established, the middleware protocol begins managing transactions from the Bitcoin network. For each output in every transaction of the newly obtained Bitcoin block, it searches for potential inscriptions. Valid inscriptions are added to the inscription bundle set, and the corresponding contract addresses are accumulated into the contracts set, grouped by functionality.\nState update. The consensus process (run if the validator committee size reaches the lower bound for consensus) and the state update occur at predetermined block intervals. During this consensus process, the inscription bundle is sorted by timestamp, and validators reach a consensus on the legitimate inscriptions. The system then fetches the latest state for each contract in the set from the Ethereum network, which commonly includes a balance record for each unique Bitcoin address associated with various tokens, serving as the entry point for handling the necessary operations on token amounts.\nMulti-signature validation. The protocol processes each inscription within the bundle, subtracting the gas fee from each and distributing it among validators based on their respective Bitcoin addresses. If the operation within the inscription proves valid in the Bitcoin network, it is executed with multi-signature validation, leading to state and address balance updates in the Ethereum network. The degree of validation and the consensus mechanism used for this process can be adjusted according to the security requirements of the system.\nInscription publication. After processing all contracts, validators republish the outcomes of the operations as inscriptions back to the Bitcoin network. The block index is incremented, indicating the protocol\u2019s readiness to manage the next Bitcoin block."
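A minimal sketch of the multi-signature validation step is given below, using HMACs as stand-ins for real digital signatures; the validator keys and the endorsement threshold are hypothetical.

```python
# Hedged sketch of multi-signature validation: a state update is accepted
# only if at least a threshold of registered validators endorse it.
# HMAC is a stand-in for real digital signatures (illustrative only).

import hashlib
import hmac

VALIDATOR_KEYS = {"alice": b"k1", "bob": b"k2", "carol": b"k3"}  # hypothetical
THRESHOLD = 2  # minimum endorsements required (assumed parameter)

def sign(key: bytes, update: bytes) -> bytes:
    return hmac.new(key, update, hashlib.sha256).digest()

def multisig_valid(update: bytes, sigs: dict) -> bool:
    good = sum(
        1 for name, sig in sigs.items()
        if name in VALIDATOR_KEYS
        and hmac.compare_digest(sig, sign(VALIDATOR_KEYS[name], update))
    )
    return good >= THRESHOLD

update = b"state-transition: mint demo 1000"
sigs = {n: sign(k, update) for n, k in VALIDATOR_KEYS.items()}
print(multisig_valid(update, sigs))  # True with all three endorsements
```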
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Implementation",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Basic Operations",
+ "text": "We provide three concrete instances to clarify the proposed MidasTouch protocol. The operations cover validator registration, token deployment, and receipt production.\nRegister. The operation includes details such as the protocol name, operation (registration), signature, token name, deposit amount, and Ethereum address. Following the inscription, an update occurs on Ethereum. If the Ethereum address does not exist in the validator set, it is added with the associated Bitcoin address and balance information. The connection between Bitcoin and Ethereum addresses forms the backbone of the middleware.\nDeploy. The operation represents issuing a new token on the Ethereum network. The Bitcoin inscription includes information such as the protocol name (in this case, the BRC-20 token standard), operation (deploy), operation signature, token name, total token supply, and the upper bound for tokens to be minted in each round. The Ethereum network responds to this inscription by creating a new corresponding token state if it does not exist. This state contains the inscription information and an initial balance state for the token addresses.\nReceipt. The operation represents the closure of an Ethereum event cycle and its report back to the Bitcoin network to guarantee the finalization of the originated inscription requests, such as the above deploy inscription. On the Ethereum side, after a function defined in the smart contract is executed, events are emitted. These events are captured by validators, who then publish an inscription on the Bitcoin network, signifying a receipt operation. This inscription includes the protocol name and a collection of events, each corresponding to an inscription ID and carrying information about the operation\u2019s execution results (e.g., true/false, values). Unlike the other algorithms, there is no \u201cop_signature\u201d in this case, as this algorithm simply forwards the Ethereum events back to the Bitcoin network without executing a particular operation itself. With this operation, only those inscription requests that are included in a receipt will be committed as successful and be ready for further use."
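For concreteness, the sketch below reconstructs plausible register and receipt payloads from the field lists above; the protocol tag "midastouch" and any field names beyond the described ones are our assumptions, not the paper's exact wire format.

```python
# Illustrative register and receipt payloads reconstructed from the field
# lists in this section. The protocol tag "midastouch" and all concrete
# values are hypothetical.

register = {
    "p": "midastouch",              # middleware protocol tag (assumed)
    "op": "register",
    "sig": "<validator-signature>", # placeholder, not a real signature
    "deposit": "32",                # ETH deposited to the contract
    "eth_addr": "0xAbC...",         # bound Ethereum address (placeholder)
}

receipt = {
    "p": "midastouch",
    "op": "receipt",
    "events": [                     # one entry per executed inscription
        {"inscription_id": "ins-001", "result": True, "value": "1000"},
        {"inscription_id": "ins-002", "result": False, "value": None},
    ],
}
print(register["op"], len(receipt["events"]))
```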
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B Algorithms",
+ "text": "We implement and present the major workflow of MidasTouch (cf. Algorithm 1 ###reference_###), featuring a multi-party validator committee. Notations are listed in Table II ###reference_###.\nBecome committee members. Becoming an eligible committee member involves the registration function, which is responsible for registering validators who hold both a Bitcoin and an Ethereum address and have deposited a specific amount of ETH into a contract. This registration is then inscribed onto the Bitcoin network, and the newly registered validator is added to the validator set. The function also confirms the successful registration of these validators and ensures the completeness and correctness of the information provided during the registration process.\nAction on Bitcoin. The primary function acting on the Bitcoin network is the inscription-scanning function. It scans each transaction output from the new Bitcoin block for potential inscriptions. Valid inscriptions are appended to the transaction bundle, and the corresponding contract addresses are accumulated into the contracts set. Additionally, this function is responsible for broadcasting post-operation inscriptions, which serve as receipts for the executed transactions, indicating their completion. This dual functionality ensures that all potential inscriptions are evaluated for validity and that corresponding receipts are issued, keeping the system secure and transparent.\nAction on Ethereum. The main function acting on the Ethereum network is the state-update function. It is responsible for retrieving the most recent state for every contract within the contracts set from the Ethereum network. For contracts related to BRC-20, or those possessing similar token-managing functionalities, it additionally retrieves a balance record for each distinct Bitcoin address associated with various tokens. During the consensus process that occurs every \u03b5 blocks, this function processes each inscription within the bundle, distributing the gas fee among validators based on their respective Bitcoin addresses and updating the state and address balances via a multi-signature validation process. Furthermore, it oversees the gathering of emitted events from the invoked contracts on Ethereum. These events are later broadcast on the Bitcoin network as part of the receipt operations, marking the completion of the inscriptions."
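The gas-fee handling inside the state-update function can be sketched as follows; the 5% rate echoes the example in Sec.III-B, while the even split among validators is our assumption.

```python
# Hedged sketch of per-inscription gas-fee handling: deduct a fixed share
# from each satoshi in the bundle and split it among validators by their
# Bitcoin addresses. The 5% rate matches the Sec. III-B example; the even
# split is an assumption.

FEE_RATE = 0.05

def distribute_fees(bundle, validators):
    payouts = {v: 0.0 for v in validators}
    for ins in bundle:
        fee = ins["value"] * FEE_RATE        # fee computed per inscription
        for v in validators:
            payouts[v] += fee / len(validators)
    return payouts

bundle = [{"value": 10_000}, {"value": 2_500}]   # satoshi amounts (example)
print(distribute_fees(bundle, ["alice", "bob", "carol"]))
```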
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C Use Case",
+ "text": "To illustrate the real-world operation of MidasTouch, we provide the following case. We have three participants named Alice, Bob, and Carol, all of whom are actively engaged in the network and aspire to become validators for the system. To become eligible, they invoke the registration function. Each participant provides their Bitcoin and Ethereum addresses and deposits a specified amount of ETH into a designated deposit contract. This transaction is recorded or inscribed onto the Bitcoin network, and subsequently, Alice, Bob, and Carol are added to the validator set.\nThen, we introduce Dave, an end-user who intends to execute a transaction on the Bitcoin network. Dave creates an inscription in the transaction output, which is included when the new block is mined. At this point, the inscription-scanning function comes into play. It scans each transaction output from the newly mined Bitcoin block, validates Dave\u2019s inscription, and appends it to the bundle. The corresponding contract address is also added to the contracts set.\nOn the Ethereum network, the state-update function is triggered every \u03b5 blocks. This function retrieves the latest state for each contract within the set, including the contract to which Dave\u2019s inscription was added. In the case where the contract is associated with BRC-20 or similar token-managing functionalities, the function also fetches the balance record for each unique Bitcoin address linked to various tokens, including Dave\u2019s address.\nDuring each consensus round, Alice, Bob, and Carol, as validators, process each inscription within the bundle. They distribute the gas fee among themselves based on their respective Bitcoin addresses. Through a collaborative multi-signature validation process, they update the state and address balances. This completes the entire workflow of the proposed middleware protocol, ensuring a consistent state is maintained across the Bitcoin and Ethereum networks."
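The use case can be compressed into a toy end-to-end run; all structures below are simplified stand-ins for the real protocol objects.

```python
# Hedged end-to-end toy run of the use case: Alice, Bob, and Carol act as
# registered validators; Dave's inscribed transaction is bundled, endorsed
# by a validator majority, and reflected as an Ethereum-side balance update.

validators = {"alice", "bob", "carol"}           # registered via deposits
dave_inscription = {"op": "mint", "tick": "demo", "amt": 1000, "addr": "dave-btc"}

bundle = [dave_inscription]                      # collected from a new block
endorsements = {"alice": True, "bob": True, "carol": False}

if sum(endorsements.values()) > len(validators) // 2:   # majority multi-sig
    balances = {}
    for ins in bundle:
        balances[ins["addr"]] = balances.get(ins["addr"], 0) + ins["amt"]
    print("state updated:", balances)            # {'dave-btc': 1000}
else:
    print("bundle rejected")
```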
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "V Evaluation and Analysis",
+ "text": ""
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "V-A Performance Analysis",
+ "text": "Scalability.\nFig.2 ###reference_### presents a detailed visualization of the impact of the (validator) committee size on the execution speed of our proposed MidasTouch protocol. The x-axis represents the size of the committee, which ranges from 1 to 20 members (a validator committee with a size smaller than 4 is regarded as a central entity with no consensus process being run). The y-axis illustrates the number of operations that can be executed per second. We utilize the well-known Practical Byzantine Fault Tolerance (PBFT) [51 ###reference_b51###] protocol for our committee and factor in the transaction processing capabilities of both the Bitcoin and Ethereum networks. Specifically, we consider Bitcoin\u2019s Lightning Network, which can process on the order of 10,000 transactions per second [52 ###reference_b52###], and Ethereum 2.0, where Casper-PoS [53 ###reference_b53###][54 ###reference_b54###] and sharding technology are enabled, capable of handling 64 times more transactions per second than single-sharded Ethereum once the projected Phase 1 is activated. Given these parameters, the operational speed of MidasTouch cannot surpass the minimum throughput of these two networks.\nAs depicted in Fig.2 ###reference_###, the execution speed corresponds to Ethereum 2.0\u2019s average throughput until a committee size of 4, the minimum requirement for consensus in our configuration. After this point, the speed decreases non-linearly with an increase in committee size due to the quadratic time complexity (O(n^2)) of PBFT, where n represents the number of nodes.\nThis illustrates that the choice of committee size presents a balancing act between decentralization and performance. While larger committees yield increased decentralization, they compromise on operational speed.\n###figure_2### Gas consumption. We evaluate the additional amount of gas that any inscription needs to pay validators, in terms of the different functionalities of smart contracts running on the Ethereum network. Note that the overhead of sending an inscription is small and negligible compared with the execution of smart contracts, which is why an incentive is required. We consider the gas consumption of typical contracts for each functionality:\nFT (fungible tokens [13 ###reference_b13###]): The simplest type of smart contract, typically involving just the transfer of tokens from one address to another.\nNFT (non-fungible tokens [15 ###reference_b15###]): NFT contracts can be complex due to the involvement of metadata handling, uniqueness verification, or royalty payment mechanisms.\nStablecoin [55 ###reference_b55###]: Generally simple as well, but with some additional complexity for pegging the value to an asset.\nInsurance: Can get complicated depending on the terms of the insurance policy and the type of risks it covers.\nLoan [56 ###reference_b56###]: Loan contracts can be complicated. They usually require mechanisms to handle interest calculation, risk assessment, and loan recovery.\nAuction: Auction contracts need to manage bids from multiple participants, which adds complexity.\nDAO: These are the most complex types of contracts, involving governance, voting mechanisms, fund management, or interactions with many other types of contracts.\n###figure_3### Specifically, the percentage of additional value paid per inscription across various categories of smart contracts is presented in Fig.3 ###reference_###, using representative types as examples. FT requires the least additional value, reflecting its relatively straightforward functionality of merely transferring tokens. In contrast, DAOs [57 ###reference_b57###], with their intricacies involving governance, voting mechanisms, and fund management, demand the highest percentage. NFTs, Loans, and Auctions lie in the middle ground. NFT contracts\u2019 complexity arises from handling metadata and verifying uniqueness, while Loan contracts necessitate mechanisms for calculating interest, assessing risk, and recovering loans. Auction contracts, owing to their need to manage multiple participants\u2019 bids, also require a substantial additional value. It is noteworthy that the additional value percentages are influenced by the inherent complexity and functionality of the respective smart contracts. This insight underscores the necessity of efficient gas consumption management in order to maximize overall system efficiency.\nFrequency of checking.\nWe further explore the influence of the parameter \u03b5, which dictates the frequency of invoking the state-update operation in terms of Bitcoin block heights, on the efficiency of the MidasTouch protocol. Fig.4 ###reference_### demonstrates two distinctive aspects of system overhead: time-related overhead and resource-related overhead, associated with the execution time and the computational resources required, respectively.\nWhen \u03b5 = 1, the validator committee is obligated to scrutinize every Bitcoin block, extract the inscriptions from transactions, assemble them into a bundle, arrange them by timestamp, and finally update the corresponding Ethereum smart contracts. As an alternative, the system can postpone the update of Ethereum\u2019s smart contract state until every \u03b5 Bitcoin block heights have been processed, amassing a substantial number of sorted inscriptions in the bundle during this interval.\nIn addition, both the time-related and resource-related overheads are affected by the choice of \u03b5. Specifically, as \u03b5 increases, the time-related overhead decreases gradually, demonstrating that accumulating more transactions before updating the EVM state saves execution time. However, this comes at the cost of an increase in resource-related overhead, likely due to the need to store and sort a larger number of inscriptions. The ideal value of \u03b5 would therefore be a trade-off between these two factors, balancing the need for quick execution with the capacity of the system\u2019s available resources.\n###figure_4###"
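The scalability trend of Fig.2 can be reproduced qualitatively with a back-of-the-envelope model; the per-shard throughput constant and the quadratic PBFT penalty form below are our assumptions, not the paper's exact fit.

```python
# Hedged sketch of the scalability model behind Fig. 2: throughput is capped
# by the slower of the two chains and, once the committee reaches the
# consensus lower bound (4), degrades with PBFT's O(n^2) messaging.
# The constants are illustrative stand-ins for the paper's assumptions.

BTC_LIGHTNING_TPS = 10_000      # order-of-magnitude Lightning capacity
ETH2_TPS = 64 * 30              # 64 shards x assumed per-shard throughput

def midastouch_ops_per_sec(committee_size: int) -> float:
    cap = min(BTC_LIGHTNING_TPS, ETH2_TPS)
    if committee_size < 4:      # below the consensus lower bound: no PBFT run
        return cap
    # PBFT message cost grows quadratically with committee size n
    return cap * (4 * 4) / (committee_size * committee_size)

for n in (1, 4, 8, 16, 20):
    print(n, round(midastouch_ops_per_sec(n), 1))
```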
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "V-B Primary Security Analysis",
+ "text": "Safety (settlement).\nIn our context, the appropriate definition of safety extends beyond conventional distributed-systems definitions that primarily focus on state consistency. Here, safety implies that a request is fully executed on both sides and returns the correct value. To achieve this, we primarily ensure settlement: a request should be treated as an indivisible unit that can reach the final state on both sides. This guarantees that the protocol remains in a consistent state and prevents the fraudulent creation of additional value or the unwarranted destruction of legitimately existing value within the Ethereum network. The complete lifecycle in our protocol is marked by two Bitcoin transactions: the first transaction (where an inscription request is included) serves as a trigger, initiating a series of events; the second transaction acts as a receipt, indicating the successful completion of all associated events.\nFirstly, the unidirectional invoking nature of the MidasTouch protocol guarantees that Bitcoin users can successfully complete token transfers from the origin address to the designated address. This indicates that the operations recorded in the inscription can be invoked. Given our assumption that the majority of validators are honest, these operations will transit faithfully through our channel. Upon reaching the smart contract, the operations are processed on the Ethereum chain based on the endorsement of the majority of validators through multi-signature verification. Secondly, after the execution of operations, receipts are issued and broadcast on the Bitcoin network to guarantee the finalization of the originated inscription requests included in the executed bundle, providing an additional layer of security and transparency.\nFurthermore, the MidasTouch protocol does not possess the ability to externally deduct digital assets from either side. Transactions on Bitcoin are initiated by users, while the legal invocation of a contract requires the majority of validators. Any misbehavior will be rejected through internal consensus procedures. Thus, the addition of the receipt operation ensures the protocol\u2019s safety and settlement, offering conclusive evidence of successful transaction executions.\nLiveness.\nThe liveness property of the system ensures that it remains continuously available without the risk of sudden shutdown. In the context of our protocol, this property signifies that any actions performed within the Bitcoin network, such as transactions or inscriptions, will eventually be reflected in the Ethereum network. This property relies on the correct functionality of the majority of validators, which we assume to be guaranteed (see our assumptions in Sec.II-C ###reference_###).\nFairness.\nThis property ensures equality among valid transactions, prohibiting any discrimination. It means that any Bitcoin transaction conforming to protocol rules, with the necessary gas fee paid, will eventually be reflected on Ethereum without bias. Fairness is achieved through the settlement of operation processing on both sides."
+ },
+ {
+ "section_id": "6",
+ "parent_section_id": null,
+ "section_name": "VI Further Discussions",
+ "text": "Utility in \u201cone direction\u201d. Our bridge is designed to facilitate unidirectional interactions, enabling users to invoke actions and trigger state transitions from the Bitcoin network to the Ethereum network. It is important to note that our solution does not support bi-directional functionality, meaning that users must initiate the workflow from the Bitcoin side and follow the established pathway to trigger events within the Ethereum smart contract. While the unidirectional nature of our bridge may impose certain constraints on its scope of application and usage, it is an inherent limitation resulting from the distinct data formats utilized by Bitcoin and Ethereum. The use of UTXOs in Bitcoin ensures a reliable transaction-ordering mechanism but, by its nature, restricts the support for other features such as general state transitions in contracts. Despite this limitation, we have successfully established a lightweight directional channel at a minimal cost, offering valuable assistance to Bitcoin users seeking to interact with the Ethereum network.\nEvaluation on \u201cone side\u201d. Our evaluation primarily focuses on the smart contract functionality and performance within the Ethereum testnet. This limitation arises from cost considerations, particularly the prohibitive expense of conducting batch transactions on Bitcoin\u2019s network, which lacks a dedicated testnet for such experiments. In this initial version, we have implemented all the functional events on both the Bitcoin and Ethereum sides, enabling us to maximize our evaluation of the potential performance and associated costs. However, we acknowledge that there is ample room for further optimization. We encourage industry teams with an interest in this topic to invest more resources into conducting comprehensive evaluations.\nExtension, \u201cNOT\u201d comparison. Even though we propose a middleware to bridge the Bitcoin and Ethereum networks, our primary emphasis is not on cross-chain functionality, but rather on leveraging Ethereum as a tool to enhance BRC-20. As a result, the protocol has been intentionally crafted to address the specific requirements of the BRC-20 scenario.\nFaithfulness of validators. It is well recognized that even permissioned blockchain systems are not completely immune to untrustworthy validators, regardless of the committee size. Concerns may arise among regular users regarding the potential compromise of validators, which could pose a threat to the stability of the middleware. To mitigate such risks, we recommend that each middleware validator deposit a substantial amount of tokens (e.g., 32 ETH [58 ###reference_b58###]) into the protocol. This ensures that validators have significant stakes in the network, reducing the likelihood of malicious behavior, and provides users with a higher level of confidence when transferring larger amounts of tokens through the middleware. Additionally, increasing the committee size by enabling dynamic formation can significantly enhance the robustness and decentralization of the system, moving it closer to a permissionless model [59 ###reference_b59###]. However, it is important to acknowledge that some degree of centralization might persist [60 ###reference_b60###], though steps can be taken to mitigate this tendency."
+ },
+ {
+ "section_id": "7",
+ "parent_section_id": null,
+ "section_name": "VII Conclusion",
+ "text": "The Bitcoin and Ethereum networks are currently isolated from each other due to their heterogeneous chain structures. In this work, we propose a lightweight one-way middleware, named MidasTouch, to bridge the Bitcoin and Ethereum networks. We employ the notion of the newly proposed BRC-20 standard to incorporate a range of operations into each satoshi and associate them with specific events within Ethereum smart contracts. We implement a prototype of MidasTouch and evaluate its performance from the Ethereum side. Evaluation results demonstrate its practicality and efficiency. To our knowledge, this is the first attempt to expand the capabilities of BRC-20."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Mainstream interoperable blockchains</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S2.T1.1\" style=\"width:433.6pt;height:169.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(25.9pt,-10.1pt) scale(1.13545897759418,1.13545897759418) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1\">\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.1.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.1.1.1.1.1\" style=\"width:42.7pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:42.7pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S2.T1.1.1.1.1.1.1\"><span class=\"ltx_text\" id=\"S2.T1.1.1.1.1.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.1.1.1.1.2\"> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.1.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.1.1.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.1.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.1.1.1.2.1.1.1.1\">Projects</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.1.1.1.2.2\"></span></span></p>\n</span></div></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.1.1.1.2.1\" style=\"width:75.8pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:75.8pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S2.T1.1.1.1.2.1.1\"><span class=\"ltx_text\" id=\"S2.T1.1.1.1.2.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.1.2.1.1.2\"> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.2.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.1.2.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.2.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.2.1.1.2.1.1.1.1\">Communication</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.2.1.1.2.2\"></span></span></p>\n</span></div></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.1.3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.1.1.1.3.1\" style=\"width:61.2pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:61.2pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S2.T1.1.1.1.3.1.1\"><span class=\"ltx_text\" id=\"S2.T1.1.1.1.3.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.1.3.1.1.2\"> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.3.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.1.3.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.3.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.3.1.1.2.1.1.1.1\">Architecture</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.3.1.1.2.2\"></span></span></p>\n</span></div></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.4\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.1.1.1.4.1\" style=\"width:41.5pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" 
style=\"width:41.5pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S2.T1.1.1.1.4.1.1\"><span class=\"ltx_text\" id=\"S2.T1.1.1.1.4.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.1.4.1.1.2\"> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.4.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.1.4.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.4.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.4.1.1.2.1.1.1.1\">Witness</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.4.1.1.2.2\"></span></span></p>\n</span></div></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.5\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.1.1.1.5.1\" style=\"width:75.6pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:75.6pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S2.T1.1.1.1.5.1.1\"><span class=\"ltx_text\" id=\"S2.T1.1.1.1.5.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S2.T1.1.1.1.5.1.1.2\"> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.5.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S2.T1.1.1.1.5.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S2.T1.1.1.1.5.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.1.5.1.1.2.1.1.1.1\">Implementation</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S2.T1.1.1.1.5.1.1.2.2\"></span></span></p>\n</span></div></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.2.1\">Polkadot</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.2\">XCMP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S2.T1.1.1.2.3\">Parachains</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.4\">Relay chain (SC)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S2.T1.1.1.2.5\">Substrate</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.1.1.3.1\">Cosmos</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.3.2\">IBC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.3.3\">Hybrid (TAO/APP)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.3.4\">Relayers</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.3.5\">Tendermint</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.1.1.4.1\">Hermes</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.2\">ODAP(-2PC)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.4.3\">Gateway-based</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.4\">Middleware</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.4.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.1.1.5.1\">Hyperledger</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.5.2\">Trusted party</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.5.3\">Hybrid</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.5.4\">Validators</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.5.5\">Cactus</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.1.1.6.1\">CheaPay</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.6.2\">Sidechain</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.6.3\">Layer-2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.6.4\">Hash-lock</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.6.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S2.T1.1.1.7.1\">Herdius</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.7.2\">Sidechain</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S2.T1.1.1.7.3\">Layer-2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.7.4\">Hash-lock</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S2.T1.1.1.7.5\">-</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S2.T1.1.1.8.1\">Tesseract</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.8.2\">Trusted hardware</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S2.T1.1.1.8.3\">Hybrid</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.8.4\">TEE</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S2.T1.1.1.8.5\">(Exchange)</td>\n</tr>\n</table>\n</span></div>\n</figure>",
+ "capture": "TABLE I: Mainstream interoperable blockchains"
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Notations</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.17\" style=\"width:433.6pt;height:403pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(68.6pt,-63.8pt) scale(1.46334314535057,1.46334314535057) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.17.17\">\n<tr class=\"ltx_tr\" id=\"S4.T2.17.17.18\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.17.17.18.1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.17.17.18.1.1\" style=\"width:39.2pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:39.2pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S4.T2.17.17.18.1.1.1\"><span class=\"ltx_text\" id=\"S4.T2.17.17.18.1.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.17.17.18.1.1.1.2\"> <span class=\"ltx_text\" id=\"S4.T2.17.17.18.1.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.17.17.18.1.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.17.17.18.1.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.18.1.1.1.2.1.1.1.1\">Symbol</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T2.17.17.18.1.1.1.2.2\"></span></span></p>\n</span></div></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.17.17.18.2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.17.17.18.2.1\" style=\"width:44.2pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:44.2pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S4.T2.17.17.18.2.1.1\"><span class=\"ltx_text\" id=\"S4.T2.17.17.18.2.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.17.17.18.2.1.1.2\"> <span class=\"ltx_text\" id=\"S4.T2.17.17.18.2.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.17.17.18.2.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.17.17.18.2.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.18.2.1.1.2.1.1.1.1\">Meaning</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T2.17.17.18.2.1.1.2.2\"></span></span></p>\n</span></div></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.18.3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.17.17.18.3.1\" style=\"width:31.9pt;height:18pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"width:31.9pt;transform:translate(0pt,0pt) rotate(-0deg) ;\">\n<p class=\"ltx_p\" id=\"S4.T2.17.17.18.3.1.1\"><span class=\"ltx_text\" id=\"S4.T2.17.17.18.3.1.1.1\"></span><span class=\"ltx_text ltx_font_bold ltx_font_italic\" id=\"S4.T2.17.17.18.3.1.1.2\"> <span class=\"ltx_text\" id=\"S4.T2.17.17.18.3.1.1.2.1\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.17.17.18.3.1.1.2.1.1\">\n<span class=\"ltx_tr\" id=\"S4.T2.17.17.18.3.1.1.2.1.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S4.T2.17.17.18.3.1.1.2.1.1.1.1\">Scope</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S4.T2.17.17.18.3.1.1.2.2\"></span></span></p>\n</span></div></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S4.T2.2.2.2.2\">validator set, instantiated by \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.2.2.2.3\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S4.T2.2.2.2.3.1\">MidasTouch</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.4.4.4.2\">state set, instantiated by \n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4.3\">Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.6.6.6.2\">smart contact set, instantiated by \n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.6.6.6.3\">Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.7.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.7.7.7.2\">inscription bundle set</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.7.7.7.3\">Bitcoin</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.8.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.8.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.8.8.8.2\">contract address/identifier</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.8.8.3\">Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.9.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.9.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.9.9.9.2\">receipt set</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.9.9.9.3\">Bitcoin/Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.10.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.10.10.10.2\">address balance</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.10.10.10.3\">Bitcoin</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.11.11.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.11.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.11.11.11.2\">block height/index</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.11.11.11.3\">Bitcoin/Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.12.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.12.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.12.12.12.2\">penalty rate</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.12.12.12.3\">Bitcoin</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.13.13.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.13.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.13.13.13.2\">gas fee</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.13.13.13.3\">Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.14.14.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.14.14.14.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.14.14.14.2\">constant value</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.14.14.14.3\">Bitcoin/Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.15.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.15.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.15.15.15.2\">short for 
inscriptions</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.15.15.15.3\">Bitcoin</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.16.16.16\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.16.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.16.16.16.2\">short for transactions</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.16.16.16.3\">Bitcoin/Ethereum</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.17.17.17\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.17.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.17.17.17.2\">validator mapping topology</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.17.17.17.3\">Bitcoin/Ethereum</td>\n</tr>\n</table>\n</span></div>\n</figure>",
122
+ "capture": "TABLE II: Notations"
123
+ }
124
+ },
125
+ "image_paths": {
126
+ "1": {
127
+ "figure_path": "2310.10065v2_figure_1.png",
128
+ "caption": "Figure 1: System overview",
129
+ "url": "http://arxiv.org/html/2310.10065v2/extracted/5488039/Figures/middleware.png"
130
+ },
131
+ "2": {
132
+ "figure_path": "2310.10065v2_figure_2.png",
133
+ "caption": "Figure 2: Evaluation on scalability",
134
+ "url": "http://arxiv.org/html/2310.10065v2/extracted/5488039/Figures/scalability.png"
135
+ },
136
+ "3": {
137
+ "figure_path": "2310.10065v2_figure_3.png",
138
+ "caption": "Figure 3: Evaluation on gas consumption for finalization",
139
+ "url": "http://arxiv.org/html/2310.10065v2/extracted/5488039/Figures/gas_used.png"
140
+ },
141
+ "4": {
142
+ "figure_path": "2310.10065v2_figure_4.png",
143
+ "caption": "Figure 4: Evaluation on different numbers of inter blocks",
144
+ "url": "http://arxiv.org/html/2310.10065v2/extracted/5488039/Figures/varepsilon.png"
145
+ }
146
+ },
147
+ "validation": true,
148
+ "references": [],
149
+ "url": "http://arxiv.org/html/2310.10065v2"
150
+ }
20240322/2311.03821v3.json ADDED
@@ -0,0 +1,595 @@
1
+ {
2
+ "title": "Positive Competitive Networks for Sparse Reconstruction",
3
+ "abstract": "We propose and analyze a continuous-time firing-rate neural network, the positive firing-rate competitive network (PFCN), to tackle sparse reconstruction problems with non-negativity constraints. These problems, which involve approximating a given input stimulus from a dictionary using a set of sparse (active) neurons, play a key role in a wide range of domains, including for example neuroscience, signal processing, and machine learning. First, by leveraging the theory of proximal operators, we relate the equilibria of a family of continuous-time firing-rate neural networks to the optimal solutions of sparse reconstruction problems. Then, we prove that the PFCN is a positive system and give rigorous conditions for the convergence to the equilibrium. Specifically, we show that the convergence: (i) only depends on a property of the dictionary; (ii) is linear-exponential, in the sense that initially the convergence rate is at worst linear and then, after a transient, it becomes exponential. We also prove a number of technical results to assess the contractivity properties of the neural dynamics of interest. Our analysis leverages contraction theory to characterize the behavior of a family of firing-rate competitive networks for sparse reconstruction with and without non-negativity constraints. Finally, we validate the effectiveness of our approach via a numerical example.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Sparse reconstruction (SR) or sparse approximation problems are ubiquitous in a wide range of domains spanning, e.g., neuroscience, signal processing, compressed sensing, and machine learning [13 ###reference_b13###, 49 ###reference_b49###, 24 ###reference_b24###, 48 ###reference_b48###]. These problems involve approximating a given input stimulus from a dictionary, using a set of sparse (active) units/neurons. Over the past years, an increasing body of theoretical and experimental evidence [33 ###reference_b33###, 7 ###reference_b7###, 25 ###reference_b25###, 41 ###reference_b41###, 42 ###reference_b42###, 43 ###reference_b43###] has grown to support the use of sparse representations in neural systems. In this context, we propose (and characterize the behavior of) a novel family of continuous-time, firing-rate neural networks (FNNs) that we show tackle SR problems. Due to their biological relevance, we are particularly interested in SR problem with non-negativity constraints and, to solve these problems, we propose the positive firing-rate competitive network. This is an FNN whose state variables have the desirable, biologically plausible, property of remaining non-negative.\nHistorically, understanding representation in neural systems has been a key research challenge in neuroscience. The evidence that many sensory neural systems employ SR traces back to the pioneering work by Hubel and Wiesel, where it is shown that the responses of simple-cells in the mammalian visual cortex (V1) can be described as a linear filtering of the visual input [33 ###reference_b33###].\nThis insight was further expanded upon by Barlow, who hypothesized that sensory neurons aim to encode an accurate representation of the external world using the fewest active neurons possible [7 ###reference_b7###].\nSubsequently, Field showed that simple-cells in V1 efficiently encode natural images using only a sparse fraction of active units [25 ###reference_b25###].\nThen, Olshausen and Field proposed that biological vision systems encode sensory input data and showed that a neural network trained to reconstruct natural images with sparse activity constraints develops units with properties similar to those found in V1 [41 ###reference_b41###, 42 ###reference_b42###]. These ideas have since gained substantial support from studies on different animal species and the human brain [43 ###reference_b43###].\nFormally, the SR problem can be formulated as a composite minimization problem111A composite minimization problem refers to an optimization task that involves minimizing a function composed of the sum of a differentiable and a non-differentiable component, typically combining a smooth loss function with a non-smooth regularization term. given by a least squares optimization problem regularized with a sparsity-inducing penalty function.\nWhile traditional optimization methods rely on discrete algorithms, recently an increasing number of continuous-time recurrent neural networks (RNNs) have been used to solve optimization problems. Essentially, these RNNs are continuous-time dynamical systems converging to an equilibrium that is also the optimizer of the problem. 
Consequently, much research effort has been devoted to characterizing the stability of those systems and their convergence rates [1 ###reference_b1###, 30 ###reference_b30###, 9 ###reference_b9###, 15 ###reference_b15###, 19 ###reference_b19###].\nIn fact, the use of continuous-time RNNs to solve optimization problems and research on stability conditions for these RNNs has gained considerable interest in a wide range of fields, see, e.g., [51 ###reference_b51###, 2 ###reference_b2###, 4 ###reference_b4###].\nAn RNN designed to tackle the SR problem is the Locally Competitive Algorithm (LCA) introduced by Rozell et al. [45 ###reference_b45###].\nThis network is a continuous-time Hopfield-like neural network [29 ###reference_b29###] (HNN) of the form:\nwith output . In (1 ###reference_###) the state is usually interpreted as a membrane potential of the neurons in the network, is a symmetric synaptic matrix, is a diagonal activation function, and is an external stimulus.\nFollowing [45 ###reference_b45###], several results were established to analyze the properties of the LCA. Specifically, in [3 ###reference_b3###] it is proven that, provided that the fixed point of the LCA is unique then, for a certain class of activation functions, the LCA globally asymptotically converges.\nThen, in [2 ###reference_b2###] it is shown that the fixed points of the LCA coincide with the solutions of the SR problem. Using a Lyapunov approach, under certain conditions on the activation function and on the solutions of the systems, it is also shown that the LCA converges to a single fixed point with exponential rate of convergence.\nVarious sparsity-based probabilistic inference problems are shown to be implemented via the LCA in [16 ###reference_b16###].\nIn [4 ###reference_b4###] a technique using the \u0141ojasiewicz inequality is used to prove convergence of both the output and state variables of the LCA.\n[5 ###reference_b5###], [6 ###reference_b6###], [52 ###reference_b52###] focus on analyzing the LCA for the SR problem with sparsity-inducing penalty function. 
Specifically, the convergence rate is analyzed in [5 ###reference_b5###].\nIn [6 ###reference_b6###] it is rigorously shown how the LCA can recover a time-varying signal from streaming compressed measurements.\nAdditionally, physiology experiments in [52 ###reference_b52###] demonstrate that numerous response properties of non-classical receptive field (nCRF) can be reproduced using a model having the LCA as neural dynamics with an additional non-negativity constraint enforced on the output to represent the instantaneous spike rate of neurons within the population.\nWe also note that, while the LCA is biologically inspired, as noted in, e.g., [37 ###reference_b37###], a biologically plausible network should exhibit non-negative states and this property is not guaranteed by the LCA.\nMotivated by this, we consider positive SR problems, i.e., a class of SR problems with non-negativity constraints on the states, and to tackle these problems we introduce the positive firing-rate competitive network (PFCN).\nThis is an FNN of the form (see, e.g., [22 ###reference_b22###])\nwith output and where the state is interpreted as the firing-rate of the neurons in the network, and, as in (1 ###reference_###), is the synaptic matrix, is the activation function, and is an external stimulus (or input).\nThe HNN (1 ###reference_###) and FNN (2 ###reference_###) are known to be mathematically equivalent [39 ###reference_b39###] through suitably defined state and input transformations.\nHowever, the input transformation is state dependent precisely when the synaptic matrix is rank deficient (as in sparse reconstruction) and, counter-intuitively, the transformation of solutions from HNN to FNN requires that the initial condition of the input depends on the initial condition of the state.\nMoreover, the FNN might hold an advantage over the HNN in terms of biological plausibility in the following sense. When the activation function is non-negative, the positive orthant is forward-invariant, i.e., the state remains non-negative from non-negative initial conditions and is thus interpreted as a vector of firing-rates.\nTherefore, even if the HNN state can be interpreted as a vector of membrane potentials, it is more natural to interpret negative (resp. positive) synaptic connections as inhibitory (resp. excitatory) in the FNN rather than the HNN.\nTo the best of our knowledge, the PFCN is the first RNN to tackle positive SR problems and with our main results we characterize the behavior of this network, showing that it indeed solves this class of problems. Our analysis leverages contraction theory [38 ###reference_b38###] and, in turn, this allows us to also characterize the behavior of the firing-rate competitive network (FCN), i.e., a firing-rate version of the LCA, able to tackle the SR problem. Our use of contraction theory is motivated by the fact that contracting dynamics are robustly stable and enjoy many properties, such as certain types of input-to-state stability.
For further details, we refer to the recent monograph [11 ###reference_b11###] and to, e.g., works on recent applications of contraction theory in computational biology, neuroscience, and machine learning [46 ###reference_b46###, 21 ###reference_b21###, 14 ###reference_b14###, 35 ###reference_b35###].\nOur key technical contributions can then be summarized as follows:\nWe propose, and analyze, the firing-rate competitive network and the positive firing-rate competitive network to tackle the SR and positive SR problem, respectively.\nFirst, we introduce a result relating the equilibria of the proposed networks to the optimal solutions of sparse reconstruction problems.\nThen, we characterize the convergence of the dynamics towards the equilibrium. For the PFCN, we also show that this is a positive system, i.e., if the system starts with non-negative initial conditions, its state variables remain non-negative.\nAfter characterizing the local stability and contractivity for the dynamics of our interest, with our main convergence result we prove that, under a standard assumption on the dictionary, our dynamics converges linear-exponentially to the equilibrium, in the sense that (in a suitably defined norm) the trajectory\u2019s distance from the equilibrium is initially upper bounded by a linear function and then convergence becomes exponential.\nWe also give explicit expressions for the average linear decay rate and the time at which exponential convergence begins.\nTo achieve (i) ###reference_i1###, we propose a top/down normative framework for a biologically-plausible explanation of neural circuits solving sparse reconstruction and other optimization problems. To do so, we leverage tools from monotone operator theory [17 ###reference_b17###, 44 ###reference_b44###] and, in particular, the recently studied proximal gradient dynamics [27 ###reference_b27###, 19 ###reference_b19###].\nThis general theory explains how to transcribe a composite optimization problem into a continuous-time firing-rate neural network, which is therefore interpretable.\nOur analysis of the FCN and PFCN dynamics naturally leads to the study of the convergence of globally-weakly and locally-strongly contracting systems.\nThese are dynamics that are weakly infinitesimally contracting on and strongly infinitesimally contracting on a subset of (see Section II-D ###reference_### for the definitions).\nWe then conduct a comprehensive convergence analysis of this class of dynamics, which generalizes the linear-exponential convergence result for the FCN and PFCN to a broader setting. We also provide a useful technical result on the logarithm norm of upper triangular block matrices.\nFinally, we illustrate the effectiveness of our results via numerical experiments. The code to replicate our numerical examples is available at https://tinyurl.com/PFCN-for-Sparse-Reconstruction ###reference_truction###.\nThe rest of the paper is organized as follows. In Section 2, we provide some useful mathematical preliminaries: an overview of the SR problem, norms, and logarithmic norms definitions and results, and a review of contraction theory. In Section 3, we present the main results of the paper: we propose the FCN and the PFCN, establish the equivalence between the optimal solution of the SR problem and the equilibria of the FCN, and prove the linear-exponential convergence behavior of our models.\nIn Section 4, we illustrate the effectiveness of our approach via a numerical example. 
In Section 5, we analyze the convergence of globally-weakly and locally-strongly contracting systems, showing linear-exponential convergence behavior of these systems.\nWe provide a final discussion and future prospects in Section 6. Finally, in the appendices, we provide instrumental results and review concepts useful for our analysis."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Mathematical Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Notation",
21
+ "text": "We denote by , the all-ones and all-zeros vectors, respectively. We denote by the ball of radius centered at some and whose distance with respect to (w.r.t.) the center is computed w.r.t. the norm . We specify the center of when . We let be the diagonal matrix with diagonal entries equal to and be the identity matrix.\nFor we denote by its rank, and by its spectral abscissa, where denotes the real part of . Given symmetric , we write (resp. ) if is positive semidefinite (resp. definite). The function is the ceiling function and is defined by . The subdifferential of at is the set .\nFinally, whenever it is clear from the context, we omit to specify the dependence of functions on time ."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B The Sparse Reconstruction Problems",
27
+ "text": "Given a -dimensional input (e.g., a -pixel image), the sparse reconstruction problem consists in reconstructing with a linear combination of a sparse vector and a dictionary composed of (unit-norm) vectors (see Figure 2 ###reference_###.b)).\nFollowing [42 ###reference_b42###], sparse reconstruction problems can be formulated as follows:\nwhere is a scalar parameter that controls the trade-off between accurate reconstruction error (the first term) and sparsity (the second term). Indeed, in (3 ###reference_###) is a non-linear cost function that induces sparsity and is typically assumed to be convex and separable across indices, i.e., , for all , with .\nUsing the definition of norm we can write\nThe matrix is known as Gramian matrix of .\nNote that, when is convex and , the objective function is strongly convex, therefore (3 ###reference_###) admits a unique solution. While, when , is not strongly convex, leading to multiple solutions. Specifically, when , we must have . SR problems focus on the underdetermined case, i.e., when .\nA common choice of is the norm, resulting in the following formulation of (3 ###reference_###), known as basis pursuit denoising or lasso:\nFor problem (4 ###reference_###), accurate reconstruction of is possible under the condition that is sparse enough and the dictionary satisfies the following:\nLet be natural numbers. A vector is -sparse if it has at most non-zero entries. A matrix satisfies the restricted isometry property (RIP) of order if there exist a constant , such that for all -sparse we have\nThe order- restricted isometry constant is the smallest such\nthat (5 ###reference_###) holds.\nWe are particularly interested in (4 ###reference_###) when this has non-negative constraints. We term this problem the positive sparse reconstruction problem and the goal is to reconstruct an input using a linear combination of a non-negative and sparse vector and a unit-norm dictionary . Formally, the positive sparse reconstruction problem can be stated as follows:\nThe minimization problem (6 ###reference_###) can equivalently be written as the unconstrained optimization problem\nwhere is the zero-infinity indicator function on and is defined by if and otherwise.\nWe note that problem (7 ###reference_###) can be formally written as problem (3 ###reference_###) when the sparsity inducing cost in (3 ###reference_###) is , where we used the fact that must belong to . Also, for our derivations, it is useful to introduce the scalar function ."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "II-C Norms and Logarithmic Norms",
33
+ "text": "Given two vector norms and on there exist positive\nequivalence coefficients and such that\nFor later use, we give the following\nGiven two norms and , let and be the minimal coefficients satisfying (8 ###reference_###). The equivalence ratio between and is .\nLet denote both a norm on and its corresponding induced matrix norm on . Given and we recall that the vector norm and matrix norm are, respectively, , . The logarithmic norm (log-norm) induced by the norm is .\nFor an invertible , the -weighted matrix norm is . The corresponding log-norm is ."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "II-D Contraction Theory for Dynamical Systems",
39
+ "text": "Consider a dynamical system\nwhere , is a smooth nonlinear function with forward invariant set for the dynamics. We let be the flow map of (9 ###reference_###) starting from initial condition .\nFirst, we give the following:\nGiven a norm with associated log-norm , a smooth function , with -invariant, open and convex, and a constant ( referred as contraction rate, is -strongly (weakly) infinitesimally contracting on if\nwhere is the Jacobian of with respect to .\nOne of the benefits of contraction theory is that it enables the study of the convergence behavior of the flow map with a single condition.\nSpecifically, if is contracting, for any two trajectories and of (9 ###reference_###) it holds\ni.e., the distance between the two trajectories converges exponentially with rate if is -strongly infinitesimally contracting, and never increases if is weakly infinitesimally contracting.\nStrongly infinitesimally contracting systems enjoy many useful properties. Notably, initial conditions are exponentially forgotten [38 ###reference_b38###], time-invariant dynamics admit a unique globally exponential stable equilibrium [38 ###reference_b38###] (see Figure 1 ###reference_###.a)), and enjoy highly robust behaviors [46 ###reference_b46###, 50 ###reference_b50###].\nThese properties do not generally extend to weakly infinitesimally contracting systems. Nevertheless, these systems still enjoy numerous useful properties, such as the so-called dichotomy property [34 ###reference_b34###]. This property states that if a weakly infinitesimally contracting system on has no equilibrium point in , then every trajectory starting in is unbounded (see Figure 1 ###reference_###.b)); otherwise, if the system has at least one equilibrium, then every trajectory starting in is bounded (see Figure 1 ###reference_###.c)).\n###figure_1### Of particular interest is the case of nonsmooth map . In [20 ###reference_b20###, Theorem 6] condition (10 ###reference_###) is generalized for locally Lipschitz function, for which the Jacobian exists almost everywhere (a.e.) in . Specifically, if is locally Lipschitz, then is infinitesimally contracting on if condition (10 ###reference_###) holds for a.e. and .\nFinally, we recall the following result in [15 ###reference_b15###, Corollary 1.(i)] on the weakly infinitesimally contractivity of FNN (2 ###reference_###) that will be useful for our analysis.\nConsider the FNN (2 ###reference_###) with symmetric weight matrix , and with activation function being Lipschitz and slope restricted in . If , then the FNN is weakly infinitesimally contracting with respect to some weighted Euclidean norm, say .222The explicit expression for the matrix is given in Appendix A ###reference_###."
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "III Main Results",
45
+ "text": "In this section, we present the main results of this paper. Specifically, we first introduce a family of continuous-time FNNs, i.e., the FCN, to tackle the SR problem in (3 ###reference_###) and give a result relating the equilibria of the former to the optimal solutions of the latter. Then, we consider the SR problem with non-negativity constraints in (7 ###reference_###) and propose an FNN network, i.e., the PFCN, to also tackle such problem."
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-A Firing-rate Neural Networks For Solving Sparse Reconstruction Problems",
51
+ "text": "The SR problems introduced in Section II-B ###reference_### naturally arise in the context of visual information processing. For example, as illustrated in Figure 2 ###reference_###.a) for mammalians, the visual sensory input data is encoded by the receptive fields of simple cells in V1 using only a small fraction of active (sparse) neurons. Formally (see Figure 2 ###reference_###.b)), the input signal is reconstructed through a linear combination of an overcomplete matrix and a sparse vector . The FCN and PFCN introduced in this paper to tackle the SR and positive SR problems are schematically illustrated in Figure 2 ###reference_###.c). Namely, each hidden node, or neuron in what follows, receives as stimulus the similarity score between the input signal and the dictionary element and, collectively, all the hidden neurons give as output a sparse (non-negative) vector .\n###figure_2### To devise our results, we make use of the following standard assumption on the sparsity-inducing cost function .\nThe function is convex, closed, proper333We refer to Appendix C ###reference_### for the definition of those notions., and separable across the indices, i.e., , for all , where is a convex, closed, and proper scalar function.\nIn order to transcribe the SR problem in (3 ###reference_###) into an interpretable continuous-time firing-rate neural network, we leverage the theory of proximal operators.\nThe proximal operator of a convex function is a natural extension of the notion of projection operator onto a convex set. This concept has gained increasing significance in various fields, particularly in signal processing and optimization problems [17 ###reference_b17###, 8 ###reference_b8###].\nWe refer to Appendix C ###reference_### for a self-contained primer on proximal operators, including the definition of the continuous-time proximal gradient dynamics (44 ###reference_###).\nThe SR problem (3 ###reference_###) is a special instance of the composite minimization problem (43 ###reference_###) in Appendix C ###reference_### with and .\nTherefore, to tackle problem (3 ###reference_###) we introduce the following special instance of proximal gradient dynamics, the firing-rate competitive network (FCN):\nwith output .\nThis dynamics is schematically illustrated in Figure 2 ###reference_###.c).\nIn (11 ###reference_###), the term is the input to the FCN and it captures the similarity between the input signal and the dictionary elements, while the term models the recurrent interactions between the neurons. These interactions implement competition between nodes to represent the stimulus.\nAdditionally, we note that in (11 ###reference_###) the particular form of the activation function is linked to the sparsity-inducing term in (3 ###reference_###), , via the proximal operator.\nTo be precise, the activation function is the proximal operator of computed at the point .\nWe now make explicit how the dynamics (11 ###reference_###) reads for the SR problem in (4 ###reference_###) and the positive SR problem in (7 ###reference_###).\nFor the lasso problem (4 ###reference_###), the sparsity-inducing cost function is . This function is convex, separable and , for all . 
Moreover, it is well known (see, e.g., [44 ###reference_b44###]) that for any , the proximal operator of is , where is the soft thresholding function defined by , and the map is defined by\nwith being the sign function defined by if , if , and if .\nNow and throughout the rest of the paper, we adopt a slight abuse of notation by using the same symbol to represent both the scalar and vector form of the activation function.\nThe corresponding FCN (11 ###reference_###) for the lasso problem (4 ###reference_###) is therefore:\nRemarkably, the dynamics (12 ###reference_###) is the firing-rate version of the LCA designed for tackling the lasso problem (4 ###reference_###), which is a continuous-time Hopfield-like neural network of the form [45 ###reference_b45###]:\nwith output .\nNext, we define the FCN that solves the positive SR problem (7 ###reference_###). For this purpose, we need to determine the proximal operator of . We have \u2013 see Lemma C.3 ###reference_thm3### in Appendix C ###reference_### for the mathematical details \u2013 that , for all , where is the (shifted) ReLU function defined by\nThus, the FCN (11 ###reference_###) that solves the positive SR problem (7 ###reference_###) is given by\nwith output .\nWe call these dynamics the positive firing-rate competitive network (PFCN). A key property of the PFCN is that it is a positive system; i.e., given a non-negative initial state, the state variables are always non-negative (we refer to Appendix B ###reference_### for a rigorous proof of this statement). This is a desirable property that can be useful to effectively model both excitatory and inhibitory synaptic connections. In fact, in the PFCN the nature of excitatory and inhibitory recurrent interactions, described by the term , only depends on the sign of the weights. Specifically, the recurrent interaction between two nodes, say and , is inhibitory if , and excitatory if .\nFinally, for later use, we note that the Jacobian of exists a.e. by Rademacher\u2019s theorem, and now and throughout the rest of the paper we denote by the measure zero set of points where the function is not differentiable."
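Because the displayed dynamics were stripped in extraction, the sketch below spells out our reading of them in code: the soft threshold (the prox of the l1 term), the shifted ReLU of (14), and the right-hand sides of the proximal-gradient form x' = -x + prox((I - Phi^T Phi) x + Phi^T y) for the FCN/PFCN, together with the classical LCA form (13). Treat the exact forms as assumptions reconstructed from the surrounding text.

```python
# Sketch of the activations and vector fields discussed above (our reconstruction:
# FCN/PFCN as proximal gradient dynamics, LCA as in [45]).
import numpy as np

def soft_threshold(u, lam):
    # Proximal operator of lam * ||.||_1.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def shifted_relu(u, lam):
    # Proximal operator of lam * 1^T x + indicator(x >= 0); see Lemma C.3.
    return np.maximum(u - lam, 0.0)

def pfcn_rhs(t, x, Phi, y, lam):
    # PFCN (14): x' = -x + relu_lam((I - Phi^T Phi) x + Phi^T y).
    return -x + shifted_relu(x - Phi.T @ (Phi @ x - y), lam)

def lca_rhs(t, u, Phi, y, lam):
    # LCA (13): u' = -u + Phi^T y - (Phi^T Phi - I) a, with output a = T_lam(u).
    a = soft_threshold(u, lam)
    return -u + Phi.T @ y - (Phi.T @ (Phi @ a) - a)
```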
52
+ },
53
+ {
54
+ "section_id": "3.2",
55
+ "parent_section_id": "3",
56
+ "section_name": "III-B Analysis of the Proposed Networks",
57
+ "text": "We now investigate the key properties of the models introduced above. Intuitively, the results presented in this section can be informally summarized via the following:\nThe trajectories of the PFCN (14 ###reference_###) are bounded. Additionally, if the dictionary is RIP, then:\nthe PFCN converges to an equilibrium point that is also the optimal solution of the positive SR problem (7 ###reference_###);\nthe convergence is linear-exponential, in the sense that the trajectory\u2019s distance from the equilibrium point initially decays at worst linearly, and then, after a transient, exponentially.\n###figure_3### The assumptions, the results, and their links towards building the claims in the informal statement are also summarized in Figure 3 ###reference_###. Specifically, we first show that the equilibria of both the FCN (11 ###reference_###) and PFCN (14 ###reference_###) are the optimal solutions of (3 ###reference_###) and (4 ###reference_###), respectively (Lemma III.1 ###reference_thm1### and Corollary III.2 ###reference_thm2###). Then, we show that the distance between any two trajectories of the PFCN never increases (Theorem III.3 ###reference_thm3###). Moreover, we show that if the dictionary is RIP, then the equilibrium point for the PFCN is not only locally exponentially stable but it is also strongly contracting in a neighborhood of the equilibrium (Theorem III.4 ###reference_thm4###). These results then lead to Theorem III.5 ###reference_thm5###, where we show that the PFCN (14 ###reference_###) has a linear-exponential convergence behavior. That is, the distance between any trajectory of the PFCN and its equilibrium is upper bounded, up to some linear-exponential crossing time, say , by a decreasing linear function. Then, for all , the distance is upper bounded by a decreasing exponential function (see Figure 4 ###reference_### for an illustration of this behavior).\nTo streamline the presentation, we provide explicit derivations for the PFCN (14 ###reference_###). However, the analysis can be adapted for the FCN (11 ###reference_###) (see Remark 2 ###reference_ark2### for the precise conditions).\n###figure_4###"
58
+ },
59
+ {
60
+ "section_id": "3.2.1",
61
+ "parent_section_id": "3.2",
62
+ "section_name": "III-B1 Relating the FCN and PFCN with SR Problems",
63
+ "text": "With Lemma III.1 ###reference_thm1### we show that a given vector is the optimal solution of (3 ###reference_###) if and only if this is also an equilibrium of the FCN (11 ###reference_###). Corollary III.2 ###reference_thm2###, which follows from Lemma III.1 ###reference_thm1###, relates the optimal solutions of (7 ###reference_###) with the equilibria of the PFCN (14 ###reference_###).\nThe vector is an optimal solution of the SR problem (3 ###reference_###) if and only if it is an equilibrium point of the FCN (11 ###reference_###).\nThe necessary and sufficient condition for to be a solution of problem (3 ###reference_###) is\nwhere we have introduced the function defined as .\nNote that is a linear function of , therefore it is Lipschitz, and . That is, is the gradient (w.r.t. ) of a convex function, and thus it is monotone.\nMoreover, by Assumption 1 ###reference_umption1###, the function is convex, closed, and proper, and therefore, so it is . Then, by applying the result in [19 ###reference_b19###, Proposition 4] (picking , , and in such proposition), we have that if and only if is an equilibrium of . This concludes the proof.\n\u220e\nWe can then state the following:\nThe vector is an optimal solution of the positive SR problem (7 ###reference_###) if and only if it is an equilibrium point of the PFCN (14 ###reference_###).\nThe proof, which follows the arguments used to prove Lemma III.1 ###reference_thm1###, is omitted for brevity.\n\u220e"
64
+ },
65
+ {
66
+ "section_id": "3.2.2",
67
+ "parent_section_id": "3.2",
68
+ "section_name": "III-B2 Convergence Analysis",
69
+ "text": "We now present our convergence analysis for the PFCN. In doing so, we introduce here the main convergence results and we refer to the Methods section and to the appendices for technical instrumental results. We start with the following:\nGiven a neural state , an input , and a parameter , the -th neuron is active if , inactive if .\nThe definition of active and inactive neuron/node in our model aligns with the definitions provided in [2 ###reference_b2###] for the LCA.\nSpecifically, as in [2 ###reference_b2###], for an equilibrium point , the activation function in our model is also composed of two operational regions. Namely: (i) one region characterized by having below zero, in which case the output is zero, as the system is at the equilibrium . (ii) one region characterized by having above zero, in which case is strictly increasing with the state .\n\u220e\nWe present the convergence analysis for the PFCN. However, our results can be extended to any FCN (11 ###reference_###), whose proximal operator is Lipschitz and slope restricted in . For the FCN, the in Definition 4 ###reference_inition4### is replaced by . For example, our convergence analysis can be extended to the firing-rate version of the LCA tackling problem (4 ###reference_###), i.e. the dynamics (12 ###reference_###), being Lipschitz and slope restricted in .\nWe now show that the distance between any two trajectories of the PFCN never increases (see Figure 3 ###reference_###). We do so by proving that the PFCN is weakly infinitesimally contracting.\nThe PFCN (14 ###reference_###) is weakly infinitesimally contracting on with respect to the weighted norm .444the explicit expression of is given in (40 ###reference_###) of Appendix A ###reference_###\nFirst, we note that the activation function is Lipschitz with constant and slope restricted in . Indeed the partial derivative of , , is if , and if .\nMoreover, , being .\nThe result then follows by applying Lemma II.1 ###reference_thm1###.\n\u220e\nEssentially, with the above results we established that the trajectories of the FCN are bounded. Next, we further characterize the stability of the equilibria of the PFCN when the dictionary is RIP. We prove that the equilibrium is not only locally exponentially stable but also locally-strongly contracting in a suitably defined norm (see Figure 3 ###reference_###).\nLet be an equilibrium point of the PFCN (14 ###reference_###) having active neurons. If the dictionary is RIP of order and parameter , then\nis locally exponentially stable;\nthe PFCN (14 ###reference_###) is strongly infinitesimally contracting with rate with respect to the norm in a neighborhood of .\nTo prove item (i) ###reference_i1###, we show that is a Hurwitz matrix, i.e., .\nWe start noticing that\nwhere is a diagonal matrix having diagonal entries equal to or .\nWe let and be the number of active and inactive neurons of , respectively, and rearrange the ordering of the elements in such that , where, and , so that\nFurther, we also decompose into\nwhere , , , .\nThe fact that is RIP of order implies that\nTherefore, is positive definite, and its smallest eigenvalue is bounded below by . Moreover, can be written as\nthat is a block upper triangular matrix with Hurwitz diagonal block matrices, and . Thus is Hurwitz. This concludes the proof.\nNext, to prove (ii) ###reference_i2### we note that . 
By applying Corollary D.2 ###reference_thm2### in Appendix D ###reference_### to , we have\nwith the explicit expression of and given in (47 ###reference_###).\nLet be the region of differentiable points in a neighborhood of . Then, by the continuity property of the log-norm, there exists a neighborhood of ,\nwhere exists and , for all .\nThis concludes the proof.\n\u220e\nTo improve readability, in the statement of Theorem III.4 ###reference_thm4### we do not provide the explicit expression for and . These are instead given in (47 ###reference_###) of Appendix D ###reference_###. For the same reason, we do not report in the statement of Theorem III.4 ###reference_thm4### the neighborhood in which the PFCN is strongly infinitesimally contracting. However, as apparent from the proof, the neighborhood is , which is defined in (25 ###reference_###).\n\u220e\nWith the next result we prove that the PFCN converges linear-exponentially to (see Figure 3 ###reference_###). The proof is given in the Methods section, where we prove a more general result (see Corollary V.2 ###reference_thm2###, in Section V-A ###reference_###).\nWe summarize the key symbols used in the next theorem in Table I ###reference_###.\nConsider the PFCN (14 ###reference_###)\nunder the same assumptions and notations of Theorems III.3 ###reference_thm3### and III.4 ###reference_thm4###.\nLet be the ball around where the system is strongly infinitesimally contracting. Then, for each trajectory starting from and for any , the distance decreases linear-exponentially, in the sense that:\nwhere is the radius of the largest ball centered at such that and where\nare the average linear decay rate and the linear-exponential crossing time, respectively.\nIn what follows, we simply term as contraction factor. We also note that the contraction factor can be chosen optimally to maximize the average linear decay rate . We refer to Lemma V.4 ###reference_thm4### for the mathematical details and the exact statement."
70
+ },
71
+ {
72
+ "section_id": "4",
73
+ "parent_section_id": null,
74
+ "section_name": "IV Simulations",
75
+ "text": "We now illustrate the effectiveness of the PFCN in solving the positive SR problem (LABEL:eq:positive_E_lasso_unconstraines) via a numerical example555The code to replicate all the simulations in this section is available at the github https://tinyurl.com/PFCN-for-Sparse-Reconstruction ###reference_truction###. that is built upon the one in [2 ###reference_b2###]. To this aim, we consider a dimensional sparse signal , with randomly selected non-zero entries. The amplitude of these non-zero entries is obtained by drawing from a normal Gaussian distribution and then taking the absolute values. As in [2 ###reference_b2###] the dictionary is built as a union of the canonical basis and a sinusoidal basis (each basis is normalized so that the dictionary columns have unit norms). Also, we set: (i) the measurements , with , to be , where is a Gaussian random noise with standard deviation ; (ii) .\nGiven this set-up, we simulated both the PFCN (14 ###reference_###) and, for comparison, the LCA (13 ###reference_###). Simulations were performed with Python using the ODE solver solve_ivp. In all the numerical experiments, the simulation time was and initial conditions were set to , except for randomly selected neurons (initial conditions were kept constant across the simulations). The time evolution of the state variables for both the PFCN and LCA is shown in Figure 5 ###reference_###. Both panels illustrate that both the PFCN and the LCA converge to an equilibrium that is close to (although it can not be exactly because of the measurement noise). Also, the figure clearly shows, in accordance with Lemma B.1 ###reference_thm1###, that the trajectories of the PFCN are always non-negative. Instead, the trajectories of the LCA nodes exhibit also negative values over time.\n###figure_5### To illustrate the global convergence behavior of the PFCN (14 ###reference_###), we performed an additional set of simulations, this time with the PFCN starting from randomly generated initial conditions. Then, we randomly selected two neurons from the active and inactive sets and recorded their evolution. The result of this process is shown in Figure 6 ###reference_###, which reports a projection of the phase plane defined by these nodes. Figure 6 ###reference_### shows that the trajectories of the selected nodes converge to the equilibrium point from any of the chosen initial conditions. Specifically, in accordance with our results, the trajectories of the active neurons converge to positive values, while the trajectories of the inactive nodes converge to the origin.\n###figure_6### Finally, we performed an additional, exploratory, numerical study to investigate what happens when the activation function of the LCA is the shifted . Even though the assumptions in [2 ###reference_b2###] on the activation function exclude the use of the for the LCA, we decided to simulate this scenario to investigate if the LCA dynamics would become positive if the was used as activation function. Hence, for our last numerical study, we considered the following LCA dynamics:\nwith output . In Figure 7 ###reference_### the time evolution of the state variables of the LCA (29 ###reference_###) is shown. As apparent from the figure, even using the as activation function, the trajectories of the LCA states still exhibit negative values over time.\n###figure_7###"
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Methods",
81
+ "text": "We prove a number of results, from which Theorem III.5 ###reference_thm5### follows, that characterize convergence of nonlinear systems of the form of (9 ###reference_###) that are globally-weakly contracting and locally-strongly contracting (possibly, in different norms). We show that these systems, which also arise in flow dynamics, traffic networks, and primal-dual dynamics [34 ###reference_b34###, 11 ###reference_b11###], have a linear-exponential convergence behavior.\nFirst, we give a general algebraic result on the inclusion relationship between balls computed with respect to different norms.\nGiven two norms and on , for all , it holds that\nwith and given in (8 ###reference_###).\nWe start by proving the inequality .\nBy definition of ball of radius , for any , we know that . Also, we have\nTherefore , so that .\nThe inequality follows directly from the above inequality and from the fact that . Specifically, we have:\n\u220e\nWe are now ready to state the main result of this section.\nLet and be two norms on . Consider system (9 ###reference_###) with being a locally Lipschitz map satisfying the following assumptions\nis weakly infinitesimally contracting on with respect to ;\nis -strongly infinitesimally contracting on a forward-invariant set \nwith respect to ;\nis an equilibrium point, i.e., , for all .\nAlso, let be the largest closed ball centered at with radius with respect to .\nThen, for each trajectory starting from and for any contraction factor , the distance along the trajectory decreases at worst linearly with an average linear decay rate\nup to at most the linear-exponential crossing time\nwhen the trajectory enters .\nConsider a trajectory of (9 ###reference_###) starting from initial condition and define the intermediate point , as in Figure 8 ###reference_###. Note that is a point on the boundary of , since .\nMoreover, the points , , and lie on the same line segment, thus\n###figure_8### Using the triangle inequality, we get\nBy Assumption (1) ###reference_i1### and equality (33 ###reference_###), we know that , thus\nNext, we upper bound the term .\nWe note that, because each trajectory originating in remains in , the time required for each trajectory starting in , to be inside for the -strongly contracting map is\nThis follows by noticing that\nThus, the time required for a trajectory starting in to be inside is upper bounded by the time required for the trajectory to go from to . In these balls, Assumption (2) ###reference_i2### implies and so is determined by the equality .\nTherefore, at time , we know and we have\nBy iterating the above argument, it follows that after each interval of duration , the distance has decreased by an amount for each .\nTherefore the average linear decay satisfies\nHence, after at most a linear-exponential crossing time , the trajectory will be inside .\nThis concludes the proof.\n\u220e\nAssumptions (2) ###reference_i2### and (3) ###reference_i3### of Theorem V.2 ###reference_thm2### imply that for any , the distance decreases exponentially with time with rate . Specifically, for all it holds that\n\u220e\nThe next result, which establishes the linear-exponential convergence of system (9 ###reference_###), follows from Theorem V.2 ###reference_thm2###.\nUnder the same assumptions and notations as in Theorem V.2 ###reference_thm2###, for each and for any contraction factor , the distance decreases linear-exponentially with time, in the sense that:\nThe result follows directly from Theorem V.2 ###reference_thm2###. 
Indeed, given a trajectory of (9 ###reference_###) starting from , for all , from Theorem V.2 ###reference_thm2### we know that the distance decreases linearly by an amount with an average linear decay rate towards , which implies the upper bound\nNext, for all the trajectory is inside and Assumption (2) ###reference_i2###, i.e., -strong infinitesimal contractivity on , implies the bound\nApplying the equivalence of norms to the above inequality, we have\nTherefore, for all we have\nThis concludes the proof.\n\u220e\nWith the following Lemma, we give the explicit expression for the optimal contraction factor that maximizes the average linear decay rate .\nUnder the same assumptions and notations as in Theorem V.2 ###reference_thm2###, for the contraction factor that maximizes the average linear decay rate is\nwhere is the branch of the Lambert function (a multivalued function defined by the branches of the converse relation of the function ; see [18 ###reference_b18###] for more details) satisfying , for all .\nTo maximize the linear decay rate, we need to solve the optimization problem\nWe compute\nwhich holds if and only if\nNote that the equality (39 ###reference_###) is a transcendental equation of the form , whose solution is known to be the value if and the two values and if , where is the branch satisfying , and is the branch satisfying .\nIn our case it is and , thus . Therefore, the solutions of the equality (39 ###reference_###) are and . Since , the only admissible solution is , and the claim follows.\n\u220e\n###figure_9###
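The linear-then-exponential behavior established here can already be seen in a one-dimensional toy system: x' = -tanh(x) is weakly contracting on all of R (its Jacobian -sech^2(x) is nonpositive) and strongly contracting near the equilibrium x* = 0, so from a far-away initial condition |x(t)| first decreases roughly linearly (at unit rate) and then exponentially. A self-contained sketch (the toy system is ours, chosen only to illustrate Theorem V.2 / Corollary V.3):

```python
# Toy illustration of linear-exponential convergence for a globally-weakly,
# locally-strongly contracting system: x' = -tanh(x), equilibrium x* = 0.
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, x: -np.tanh(x), (0.0, 30.0), [20.0],
                dense_output=True, rtol=1e-9, atol=1e-12)
for t in np.linspace(0.0, 30.0, 7):
    print(f"t = {t:5.1f}   |x - x*| = {abs(sol.sol(t)[0]):.3e}")
# Roughly: |x| decays at unit (linear) rate until about t = 20, then like e^{-t}.
```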
82
+ },
83
+ {
84
+ "section_id": "5.1",
85
+ "parent_section_id": "5",
86
+ "section_name": "Proof of Theorem III.5",
87
+ "text": "We are now ready to give the proof of Theorem III.5 ###reference_thm5###, which follows from the results of this Section. Indeed, given the assumptions of Theorem III.5 ###reference_thm5###: (i) Theorem III.3 ###reference_thm3### implies that the PFCN is weakly infinitesimally contracting on with respect to ; (ii) Theorem III.4 ###reference_thm4### implies that the PFCN is -strongly infinitesimally contracting on with respect to . Hence, Theorem III.5 ###reference_thm5### follows from Corollary V.3 ###reference_thm3### with , and ."
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "VI Conclusions",
93
+ "text": "In this paper, we proposed and analyzed two families of continuous-time firing-rate neural networks: the firing-rate competitive network, FCN, and the positive firing-rate competitive network, PFCN, to tackle sparse reconstruction and positive sparse reconstruction problems, respectively.\nThese networks arise from a top/down normative framework that aims to provide a biologically-plausible explanation for how neural circuits solve sparse reconstruction and other composite optimization problems.\nThis framework is based upon the theory of proximal operators for composite optimization and leads to continuous-time firing-rate neural networks that are therefore interpretable.\nWe first introduced a result relating the optimal solutions of the SR and positive SR problems to the equilibria of the FCN and PFCN (Lemma III.1 ###reference_thm1### and Corollary III.2 ###reference_thm2###). Crucial for the PFCN is the fact that this is a positive system (see Lemma B.1 ###reference_thm1###). This, in turn, can be useful to effectively model both excitatory and inhibitory synaptic connections in a biologically plausible way. Then, we investigated the convergence properties of the proposed networks: we provided an explicit convergence analysis for the PFCN and gave rigorous conditions to extend the analysis to the FCN. Specifically, we showed that (i) the PFCN (14 ###reference_###) is weakly contracting on (Theorem III.3 ###reference_thm3###); (ii) if the dictionary is RIP, then the equilibrium point of the PFCN is locally exponentially stable and, in a suitably defined norm, it is also strongly contracting in a neighborhood of the equilibrium (Theorem III.4 ###reference_thm4###). These results lead to Theorem III.5 ###reference_thm5### that establishes linear-exponential convergence of the PFCN.\nTo derive our key findings, we also devised a number of instrumental results, interesting per se, providing: (i) algebraic results on the log-norm of triangular matrices; (ii) convergence analysis for a broader class of non-linear dynamics (globally-weakly and locally-strongly contracting systems) that naturally arise from the study of the FCN and PFCN. Finally, we illustrated the effectiveness of our results via numerical experiments.\nWith our future research, we plan to extend our results to design networks able to tackle the sparse coding problem [10 ###reference_b10###, 31 ###reference_b31###, 32 ###reference_b32###], which involves learning features to reconstruct a given stimulus. We expect that tackling the sparse coding problem will lead to the study of RNNs with both neural and synaptic dynamics [23 ###reference_b23###, 36 ###reference_b36###, 14 ###reference_b14###]. In this context, we plan to explore if Hebbian rules [28 ###reference_b28###, 26 ###reference_b26###, 14 ###reference_b14###] can be effectively used to learn the dictionary. Moreover, it would be interesting to tackle SR problems with more general and non-convex sparsity-inducing cost functions [47 ###reference_b47###].\nFinally, given the relevance and wide-ranging applications of globally-weakly and locally-strongly contracting systems, we will explore if tighter linear-exponential convergence bounds can be devised."
94
+ }
95
+ ],
96
+ "appendix": [
97
+ {
98
+ "section_id": "Appendix 1",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix A Weight Matrix in Lemma\u00a0II.1 and Theorem\u00a0III.3",
101
+ "text": "We start with giving the explicit expression of the matrix in Lemma II.1 ###reference_thm1### [15 ###reference_b15###].\nTo this purpose, we first recall that for any symmetric matrix , it is always possible to decompose into the form , where is the orthogonal matrix whose columns are the eigenvectors of , and is diagonal with being the vector of the eigenvalues of . Next, to define the weight matrix , we need to introduce the function defined by\n\nThen, letting , it is\nThe expression of the matrix in Theorem III.3 ###reference_thm3### follows from (40 ###reference_###) when ."
102
+ },
103
+ {
104
+ "section_id": "Appendix 2",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix B On the Positiveness of the PFCN",
107
+ "text": "In this appendix, we give a formal proof of the fact that the PFCN (14 ###reference_###) is a positive system. That is, the state variables are never negative, given a non-negative initial state. In order words, the positive orthant is forward invariant.\nFirst, we recall the following standard:\nA set is forward invariant with respect to the system (9 ###reference_###) if for every it holds , for all .\nThen, we give the following:\nThe PFCN (14 ###reference_###) is a positive system.\nTo prove the statement we prove that the positive orthant is forward invariant for .\nWe recall that, by applying Nagumo\u2019s Theorem [40 ###reference_b40###], the positive orthant is forward invariant for a vector field if and only if\nNow, let us consider the PFCN written in components\nThen, for all such that we have\nfor each . This concludes the proof.\n\u220e"
108
+ },
109
+ {
110
+ "section_id": "Appendix 3",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix C A Primer on Proximal Operators",
113
+ "text": "In this appendix, we provide a brief overview of proximal operators and outline the main properties needed for our analysis.\nWe start by giving a number of preliminary notions.\nGiven , the epigraph of is the set .\nA function is\nconvex if is a convex set;\nproper if its value is never and there exists at least one such that ;\nclosed if it is proper and is a closed set.\nNext, we define the proximal operator of , which is a map that takes a vector and maps it into a subset of , which can be either empty, contain a single element, or be a set with multiple vectors.\nThe proximal operator of a function with parameter , , is the operator given by\nOf particular interest for our analysis is the case when the is a singleton. The next Theorem [8 ###reference_b8###, Theorem 6.3] provides conditions under which the exists and is unique.\nLet be a convex, closed, and proper function. Then is a singleton for all .\nThe above result shows that for a convex, closed, and proper function , the proximal operator exists and is unique for all .\nNext, we recall a result on the calculus of proximal mappings [8 ###reference_b8###, Section 6.3].\nLet be a convex, closed, proper, and separable function, that is , with being convex, closed, and proper functions. Then\nBased on the use of proximal operators, proximal gradient method (see, e.g., [44 ###reference_b44###]) can be devised to iteratively solve a class of composite (possibly non-smooth) convex problems\nwhere , are convex, proper and closed functions, and is differentiable. At its core, the proximal gradient method updates the estimate of the solution of the optimization problem by computing the proximal operator of , where is a step size, evaluated at the difference between the current estimate and the gradient of computed at the current estimate. That is,\nNotably, this method has been recently extended and generalized to a continuous-time framework [27 ###reference_b27###, 19 ###reference_b19###], resulting in solving a continuous-time FNN. In this case, the iteration becomes the continuous-time proximal gradient dynamics\nwith .\nFinally, we note that for and the composite optimization problem (43 ###reference_###) is the SR problem (3 ###reference_###). Moreover, we get the FCN (11 ###reference_###) by setting in (44 ###reference_###).\nWe provide the explicit computation of the proximal operator of the sparsity-inducing term of the positive SR problem (7 ###reference_###).\nConsider and let , , . Then\nWe start by noticing that is separable across indices and, for any , we have . Hence, Lemma C.2 ###reference_thm2### implies that\nthe computation of the proximal operator of reduces to computing scalar proximals of . This can be done as follows:\nNote that, by definition of (shifted) function this is exactly . In turn, this proves the statement.\n\u220e"
114
+ },
115
+ {
116
+ "section_id": "Appendix 4",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix D The Logarithmic Norm of Upper Triangular Block Matrices",
119
+ "text": "We present an algebraic result of the log-norm of upper triangular block matrices. This result is useful for determining the rate and norm with respect to which the PFCN exhibits strong infinitesimal contractivity, as stated in Theorem III.4 ###reference_thm4###. The following Lemma is inspired by [11 ###reference_b11###, E2.28]. We also refer to [46 ###reference_b46###] for a result on the log-norm of these triangular matrices using non-Euclidean norms.\nConsider the block matrix\nFor all and for with and , we have\nWe compute\nwhere the last inequality follows by applying the translation property of the log-norm and the inequality , for all matrix . From the LMI characterization of the logarithmic norm, we obtain\nThe claim then follows by noting that\n.\n\u220e\nNext, we give a specific result for a particular case of the matrix (which has the same form as the Jacobian of PFCN computed at the equilibrium). In this particular case, we are able to determine and specify the matrices and .\nConsider the block matrix\nwith satisfying , with\n. Then, for all and for , we have\nBy applying Lemma D.1 ###reference_thm1### to the block matrix , for all we have\nThis concludes the proof.\n\u220e\nThe result in Corollary D.2 ###reference_thm2### implies that:\nif , then for all ;\nif , then , for all .\n\u220e"
120
+ }
121
+ ],
122
+ "tables": {
123
+ "1": {
124
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.24\" style=\"width:433.6pt;height:211.6pt;vertical-align:-0.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-42.3pt,20.6pt) scale(0.836614507034956,0.836614507034956) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.24.24\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.24.24.25.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.24.24.25.1.1\">Symbol</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.24.24.25.1.2\">Meaning</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.24.24.25.1.3\">Ref.</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.2\">Weight matrix w.r.t. the PFCN is globally-weakly contracting</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.3\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#A1.E40\" title=\"40 \u2023 Appendix A Weight Matrix \ud835\udc37 in Lemma II.1 and Theorem III.3 \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">40</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2.2\">Euclidean weighted norm w.r.t. the PFCN is globally-weakly contracting</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2.3\">Theorem\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.Thmthm3\" title=\"Theorem III.3 (Global weak contractivity of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">III.3</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3.2\">Weight matrix w.r.t. the PFCN is locally-strongly contracting</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3.3\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#A4.E47\" title='47 \u2023 D-1 Expression of \ud835\udc46_\ud835\udf00 and \ud835\udc50_\"exp\" in Theorem III.4 \u2023 Appendix D The \u2113\u2082 Logarithmic Norm of Upper Triangular Block Matrices \u2023 Positive Competitive Networks for Sparse Reconstruction'><span class=\"ltx_text ltx_ref_tag\">47</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4.2\">Euclidean weighted norm w.r.t. 
the PFCN is locally-strongly contracting</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4.3\">Theorem\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.Thmthm4\" title=\"Theorem III.4 (Local exponential stability and local strong contractivity of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">III.4</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.7.3\">Equivalence ratio between and \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.7.7.7.4\">Definition\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#Thmdefinition2\" title=\"Definition 2 (Equivalence ratio between two norms): \u2023 II-C Norms and Logarithmic Norms \u2023 II Mathematical Preliminaries \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">2</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.8.2\">Radius of the ball where the system is strongly infinitesimally contracting</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.8.8.8.3\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.E25\" title=\"25 \u2023 Proof: \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">25</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.12.12.12\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.12.12.12.4\">Ball of radius centered at computed w.r.t. 
\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.12.12.12.5\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.E25\" title=\"25 \u2023 Proof: \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">25</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.15.15.15\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.13.13.13.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.15.15.15.3\">Radius of the largest ball contained in \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.15.15.15.4\">Theorem\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.Thmthm5\" title=\"Theorem III.5 (Linear-exponential stability of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">III.5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.19.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.16.16.16.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.19.19.19.4\">Ball of radius centered at computed w.r.t. \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.19.19.19.5\">Theorem\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.Thmthm5\" title=\"Theorem III.5 (Linear-exponential stability of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">III.5</span></a>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.20.20.20\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.20.20.20.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.20.20.20.2\">Exponential decay rate</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.20.20.20.3\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#A4.E47\" title='47 \u2023 D-1 Expression of \ud835\udc46_\ud835\udf00 and \ud835\udc50_\"exp\" in Theorem III.4 \u2023 Appendix D The \u2113\u2082 Logarithmic Norm of Upper Triangular Block Matrices \u2023 Positive Competitive Networks for Sparse Reconstruction'><span class=\"ltx_text ltx_ref_tag\">47</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.21.21.21\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.21.21.21.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.21.21.21.2\">Average linear decay rate</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.21.21.21.3\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.E27\" title=\"27 \u2023 Theorem III.5 (Linear-exponential stability of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text 
ltx_ref_tag\">27</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.22.22.22\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.22.22.22.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.22.22.22.2\">Linear-exponential crossing time</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S3.T1.22.22.22.3\">Equation\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.E28\" title=\"28 \u2023 Theorem III.5 (Linear-exponential stability of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">28</span></a>)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.24.24.24\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.23.23.23.1\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.24.24.24.2\">Contraction factor, \n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.24.24.24.3\">Theorem\u00a0<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.Thmthm5\" title=\"Theorem III.5 (Linear-exponential stability of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">III.5</span></a>\n</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.26.1.1\" style=\"font-size:90%;\">TABLE I</span>: </span><span class=\"ltx_text\" id=\"S3.T1.27.2\" style=\"font-size:90%;\">Symbols used in the linear-exponential bound\u00a0(<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2311.03821v3#S3.E26\" title=\"26 \u2023 Theorem III.5 (Linear-exponential stability of the PFCN): \u2023 III-B2 Convergence Analysis \u2023 III-B Analysis of the Proposed Networks \u2023 III Main Results \u2023 Positive Competitive Networks for Sparse Reconstruction\"><span class=\"ltx_text ltx_ref_tag\">26</span></a>).</span></figcaption>\n</figure>",
125
+ "capture": "TABLE I: Symbols used in the linear-exponential bound\u00a0(26)."
126
+ }
127
+ },
128
+ "image_paths": {
129
+ "1": {
130
+ "figure_path": "2311.03821v3_figure_1.png",
131
+ "caption": "Figure 1: Strongly infinitesimally contracting systems: a) the distance between any two trajectories converges exponentially to the unique equilibrium point x\u22c6superscript\ud835\udc65\u22c6x^{\\star}italic_x start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT. Illustration of the dichotomy property of weakly contracting systems: b) the system has no equilibrium and every trajectory is unbounded or c) there exists at least one equilibrium and every trajectory is bounded. Images reused with permission from [11].",
132
+ "url": "http://arxiv.org/html/2311.03821v3/extracted/5488922/fig_paper/contractivity.png"
133
+ },
134
+ "2": {
135
+ "figure_path": "2311.03821v3_figure_2.png",
136
+ "caption": "Figure 2: The visual sensory input data u\u2208\u211dm\ud835\udc62superscript\u211d\ud835\udc5au\\in\\mathbb{R}^{m}italic_u \u2208 blackboard_R start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT is encoded by the receptive fields of simple cells in the mammalian visual cortex (V1) using only a small fraction of active (sparse) neurons. Formally, b) the input u\ud835\udc62uitalic_u is reconstructed by a linear combination of an overcomplete (n\u226bmmuch-greater-than\ud835\udc5b\ud835\udc5an\\gg mitalic_n \u226b italic_m) set of features \u03a6i\u2208\u211dnsubscript\u03a6\ud835\udc56superscript\u211d\ud835\udc5b\\Phi_{i}\\in\\mathbb{R}^{n}roman_\u03a6 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT and sparse neurons y\u2208\u211dn\ud835\udc66superscript\u211d\ud835\udc5by\\in\\mathbb{R}^{n}italic_y \u2208 blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT. c) Block scheme of the proposed (positive) firing-rate competitive network. The hidden node xisubscript\ud835\udc65\ud835\udc56x_{i}italic_x start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT receives as stimulus the similarity score between the input signal u\u2208\u211dm\ud835\udc62superscript\u211d\ud835\udc5au\\in\\mathbb{R}^{m}italic_u \u2208 blackboard_R start_POSTSUPERSCRIPT italic_m end_POSTSUPERSCRIPT and the dictionary element \u03a6i\u2208\u211dnsubscript\u03a6\ud835\udc56superscript\u211d\ud835\udc5b\\Phi_{i}\\in\\mathbb{R}^{n}roman_\u03a6 start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT \u2208 blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT and collectively all hidden neurons give as output a sparse (non-negative) vector y=x\u2208\u211dn\ud835\udc66\ud835\udc65superscript\u211d\ud835\udc5by=x\\in\\mathbb{R}^{n}italic_y = italic_x \u2208 blackboard_R start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT. Images reused with permission from [11].",
137
+ "url": "http://arxiv.org/html/2311.03821v3/x1.png"
138
+ },
139
+ "3": {
140
+ "figure_path": "2311.03821v3_figure_3.png",
141
+ "caption": "Figure 3: Schematic diagram summarizing the main results and their assumptions. With Theorem III.5 we show that the PFCN (14) exhibits linear-exponential convergence towards the optimal solution of the positive SR problem (7). The result follows from: (i) establishing a link between the optimal solution of (7) and the equilibria of (14); (ii) characterizing contractivity of (14).",
142
+ "url": "http://arxiv.org/html/2311.03821v3/extracted/5488922/fig_paper/meta_theorem.png"
143
+ },
144
+ "4": {
145
+ "figure_path": "2311.03821v3_figure_4.png",
146
+ "caption": "Figure 4: Schematic representation of the linear-exponential convergence behavior exhibited by the PFCN. The distance of the trajectory from the equilibrium point is upper bounded by a function that decreases linearly with time until tcrosssubscript\ud835\udc61crosst_{\\textup{cross}}italic_t start_POSTSUBSCRIPT cross end_POSTSUBSCRIPT and then exponentially for all t>tcross\ud835\udc61subscript\ud835\udc61crosst>t_{\\textup{cross}}italic_t > italic_t start_POSTSUBSCRIPT cross end_POSTSUBSCRIPT. While the solution of the PFCN is continuous, a bounded jump in the upper bound we obtain might occur at time tcrosssubscript\ud835\udc61crosst_{\\textup{cross}}italic_t start_POSTSUBSCRIPT cross end_POSTSUBSCRIPT.",
147
+ "url": "http://arxiv.org/html/2311.03821v3/x2.png"
148
+ },
149
+ "5": {
150
+ "figure_path": "2311.03821v3_figure_5.png",
151
+ "caption": "Figure 5: Time evolution of the state/neuron variables of the proposed PFCN (14) (leftward panel) and of the LCA (13) (rightward panel) networks. The cross symbols are the non-zero elements of the sparse vector y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. Both the PFCN and the LCA converge to an equilibrium that is close to y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. Note that, in accordance with Lemma B.1, the state variables of the PFCN never become negative.",
152
+ "url": "http://arxiv.org/html/2311.03821v3/x3.png"
153
+ },
154
+ "6": {
155
+ "figure_path": "2311.03821v3_figure_6.png",
156
+ "caption": "Figure 6: Trajectories of two randomly chosen nodes of the PFCN (14) from the active (leftward panel) and inactive (rightward panel) set in the planes defined by these two nodes, respectively. In the panels, the evolution is shown from 20202020 randomly chosen initial conditions. In accordance with our results, the trajectories of the active neurons converge to positive values, while the trajectories of the inactive nodes converge to the origin.",
157
+ "url": "http://arxiv.org/html/2311.03821v3/x4.png"
158
+ },
159
+ "7": {
160
+ "figure_path": "2311.03821v3_figure_7.png",
161
+ "caption": "Figure 7: Time evolution of the state variables of the LCA (29) with ReLUReLU\\operatorname{ReLU}roman_ReLU as activation function. The cross symbols are the non-zero elements of y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. The LCA converges to an equilibrium close to y0subscript\ud835\udc660y_{0}italic_y start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT. Even using the ReLUReLU\\operatorname{ReLU}roman_ReLU as activation function, the trajectories of the LCA states still exhibit negative values over time.",
162
+ "url": "http://arxiv.org/html/2311.03821v3/x5.png"
163
+ },
164
+ "8": {
165
+ "figure_path": "2311.03821v3_figure_8.png",
166
+ "caption": "Figure 8: Illustration of the set up for the proof of Th. V.2 with G=\u2225\u22c5\u22252\\textup{G}=\\|\\cdot\\|_{2}G = \u2225 \u22c5 \u2225 start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT. Given the equilibrium point x\u22c6\u2208\ud835\udcaesuperscript\ud835\udc65\u22c6\ud835\udcaex^{\\star}\\in\\mathcal{S}italic_x start_POSTSUPERSCRIPT \u22c6 end_POSTSUPERSCRIPT \u2208 caligraphic_S, with \ud835\udcae\ud835\udcae\\mathcal{S}caligraphic_S forward invariant set, we consider a trajectory \u03d5t\u2062(x\u2062(0))subscriptitalic-\u03d5\ud835\udc61\ud835\udc650\\phi_{t}\\bigl{(}{x(0)}\\bigr{)}italic_\u03d5 start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ( italic_x ( 0 ) ) of (9) starting from x\u2062(0)\u2209\ud835\udcae\ud835\udc650\ud835\udcaex(0)\\not\\in\\mathcal{S}italic_x ( 0 ) \u2209 caligraphic_S and define the intermediate point xtmp\u2208BrGsubscript\ud835\udc65tmpsubscriptsuperscript\ud835\udc35G\ud835\udc5fx_{\\textup{tmp}}\\in B^{\\textup{G}}_{r}italic_x start_POSTSUBSCRIPT tmp end_POSTSUBSCRIPT \u2208 italic_B start_POSTSUPERSCRIPT G end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT. After a time t\u03c1subscript\ud835\udc61\ud835\udf0ct_{\\rho}italic_t start_POSTSUBSCRIPT italic_\u03c1 end_POSTSUBSCRIPT the trajectory starting at xtmpsubscript\ud835\udc65tmpx_{\\textup{tmp}}italic_x start_POSTSUBSCRIPT tmp end_POSTSUBSCRIPT (which may exit BrGsubscriptsuperscript\ud835\udc35G\ud835\udc5fB^{\\textup{G}}_{r}italic_B start_POSTSUPERSCRIPT G end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT) must enter B\u03c1\u2062rGsubscriptsuperscript\ud835\udc35G\ud835\udf0c\ud835\udc5fB^{\\textup{G}}_{\\rho r}italic_B start_POSTSUPERSCRIPT G end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_\u03c1 italic_r end_POSTSUBSCRIPT, for 0<\u03c1<10\ud835\udf0c10<\\rho<10 < italic_\u03c1 < 1. Image reused with permission from [11].",
167
+ "url": "http://arxiv.org/html/2311.03821v3/x6.png"
168
+ },
169
+ "9": {
170
+ "figure_path": "2311.03821v3_figure_9.png",
171
+ "caption": "Figure 9: Plot of the optimal contraction factor \u03c1\u00af\u2062(kL,G)\u00af\ud835\udf0csubscript\ud835\udc58LG\\bar{\\rho}(k_{\\textup{L},\\textup{G}})over\u00af start_ARG italic_\u03c1 end_ARG ( italic_k start_POSTSUBSCRIPT L , G end_POSTSUBSCRIPT ) given by equation (37).",
172
+ "url": "http://arxiv.org/html/2311.03821v3/x7.png"
173
+ }
174
+ },
175
+ "validation": true,
176
+ "references": [
177
+ {
178
+ "1": {
179
+ "title": "Studies in Linear and Nonlinear Programming.",
180
+ "author": "K. J. Arrow, L. Hurwicz, and H. Uzawa, editors.",
181
+ "venue": "Stanford University Press, 1958.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "2": {
187
+ "title": "Convergence and rate analysis of neural networks for sparse\napproximation.",
188
+ "author": "A. Balavoine, J. Romberg, and C. J. Rozell.",
189
+ "venue": "IEEE Transactions on Neural Networks and Learning Systems,\n23(9):1377\u20131389, 2012.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "3": {
195
+ "title": "Global convergence of the locally competitive algorithm.",
196
+ "author": "A. Balavoine, C. J. Rozell, and J. Romberg.",
197
+ "venue": "In 2011 Digital Signal Processing and Signal Processing\nEducation Meeting (DSP/SPE), pages 431\u2013436. IEEE, 2011.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "4": {
203
+ "title": "Convergence of a neural network for sparse approximation using the\nnonsmooth \u0141ojasiewicz inequality.",
204
+ "author": "A. Balavoine, C. J. Rozell, and J. Romberg.",
205
+ "venue": "In International Joint Conference on Neural Networks, pages\n1\u20138, 2013.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "5": {
211
+ "title": "Convergence speed of a dynamical system for sparse recovery.",
212
+ "author": "A. Balavoine, C. J. Rozell, and J. Romberg.",
213
+ "venue": "IEEE Transactions on Signal Processing, 61(17):4259\u20134269,\n2013.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "6": {
219
+ "title": "Discrete and continuous-time soft-thresholding for dynamic signal\nrecovery.",
220
+ "author": "A. Balavoine, C. J. Rozell, and J. Romberg.",
221
+ "venue": "IEEE Transactions on Signal Processing, 63(12):3165\u20133176,\n2015.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "7": {
227
+ "title": "Single units and sensation: a neuron doctrine for perceptual\npsychology?",
228
+ "author": "H. B. Barlow.",
229
+ "venue": "Perception, 1(4):371\u2013394, 1972.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "8": {
235
+ "title": "First-Order Methods in Optimization.",
236
+ "author": "A. Beck.",
237
+ "venue": "SIAM, 2017, ISBN 978-1-61197-498-0.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "9": {
243
+ "title": "Neural network for quadratic optimization with bound constraints.",
244
+ "author": "A. Bouzerdoum and T. R. Pattison.",
245
+ "venue": "IEEE Transactions on Neural Networks, 4(2):293\u2013304, 1993.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "10": {
251
+ "title": "Nonlinear Hebbian learning as a unifying principle in receptive\nfield formation.",
252
+ "author": "C. S. N. Brito and W. Gerstner.",
253
+ "venue": "PLoS Computational Biology, 12(9):e1005070, 2016.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "11": {
259
+ "title": "Contraction Theory for Dynamical Systems.",
260
+ "author": "F. Bullo.",
261
+ "venue": "Kindle Direct Publishing, 1.1 edition, 2023, ISBN 979-8836646806.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "12": {
267
+ "title": "The Dantzig selector: Statistical estimation when is much\nlarger than .",
268
+ "author": "E. J. Cand\u00e8s and T. Tao.",
269
+ "venue": "Quality Control and Applied Statistics, 54(1):83\u201384, 2009.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "13": {
275
+ "title": "An introduction to compressive sampling.",
276
+ "author": "E. J. Cand\u00e8s and M. B. Wakin.",
277
+ "venue": "IEEE Signal Processing Magazine, 25(2):21\u201330, 2008.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "14": {
283
+ "title": "Contraction analysis of Hopfield neural networks with Hebbian\nlearning.",
284
+ "author": "V. Centorrino, F. Bullo, and G. Russo.",
285
+ "venue": "In IEEE Conf. on Decision and Control, Canc\u00fan, M\u00e9xico,\nDecember 2022.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "15": {
291
+ "title": "Euclidean contractivity of neural networks with symmetric weights.",
292
+ "author": "V. Centorrino, A. Gokhale, A. Davydov, G. Russo, and F. Bullo.",
293
+ "venue": "IEEE Control Systems Letters, 7:1724\u20131729, 2023.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "16": {
299
+ "title": "A common network architecture efficiently implements a variety of\nsparsity-based inference problems.",
300
+ "author": "A. S. Charles, P. Garrigues, and C. J. Rozell.",
301
+ "venue": "Neural Computation, 24(12):3317\u20133339, 2012.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "17": {
307
+ "title": "Proximal splitting methods in signal processing.",
308
+ "author": "P. L. Combettes and J. Pesquet.",
309
+ "venue": "Fixed-point Algorithms for Inverse Problems in Science and\nEngineering, pages 185\u2013212, 2011.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "18": {
315
+ "title": "On the Lambert function.",
316
+ "author": "R. M Corless, G. H Gonnet, D. E. G. Hare, D. J. Jeffrey, and D. E. Knuth.",
317
+ "venue": "Advances in Computational Mathematics, 5(1):329\u2013359, 1996.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "19": {
323
+ "title": "Contracting dynamics for time-varying convex optimization.",
324
+ "author": "A. Davydov, V. Centorrino, A. Gokhale, G. Russo, and F. Bullo.",
325
+ "venue": "IEEE Transactions on Automatic Control, June 2023.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "20": {
331
+ "title": "Non-Euclidean contraction analysis of continuous-time neural\nnetworks.",
332
+ "author": "A. Davydov, A. V. Proskurnikov, and F. Bullo.",
333
+ "venue": "IEEE Transactions on Automatic Control, September 2022.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "21": {
339
+ "title": "Non-Euclidean contractivity of recurrent neural networks.",
340
+ "author": "A. Davydov, A. V. Proskurnikov, and F. Bullo.",
341
+ "venue": "In American Control Conference, pages 1527\u20131534,\nAtlanta, USA, May 2022.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "22": {
347
+ "title": "Theoretical Neuroscience: Computational and Mathematical\nModeling of Neural Systems.",
348
+ "author": "P. Dayan and L. F. Abbott.",
349
+ "venue": "MIT Press, 2005.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "23": {
355
+ "title": "Dynamic properties of neural networks with adapting synapses.",
356
+ "author": "D. W. Dong and J. J. Hopfield.",
357
+ "venue": "Network: Computation in Neural Systems, 3(3):267\u2013283, 1992.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "24": {
363
+ "title": "On the role of sparse and redundant representations in image\nprocessing.",
364
+ "author": "M. Elad, M. A. T. Figueiredo, and Y. Ma.",
365
+ "venue": "Proceedings of the IEEE, 98(6):972\u2013982, 2010.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "25": {
371
+ "title": "Relations between the statistics of natural images and the response\nproperties of cortical cells.",
372
+ "author": "D. J. Field.",
373
+ "venue": "Journal of the Optical Society of America A,\n4(12):2379\u20132394, 1987.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "26": {
379
+ "title": "Mathematical formulations of Hebbian learning.",
380
+ "author": "W. Gerstner and W. Kistler.",
381
+ "venue": "Biological Cybernetics, 87:404\u201315, 2003.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "27": {
387
+ "title": "Proximal gradient flow and Douglas-Rachford splitting dynamics:\nGlobal exponential stability via integral quadratic constraints.",
388
+ "author": "S. Hassan-Moghaddam and M. R. Jovanovi\u0107.",
389
+ "venue": "Automatica, 123:109311, 2021.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "28": {
395
+ "title": "The Organization of Behavior: A Neuropsychological Theory.",
396
+ "author": "D. O. Hebb.",
397
+ "venue": "John Wiley & Sons, 1949.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "29": {
403
+ "title": "Neurons with graded response have collective computational properties\nlike those of two-state neurons.",
404
+ "author": "J. J. Hopfield.",
405
+ "venue": "Proceedings of the National Academy of Sciences,\n81(10):3088\u20133092, 1984.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "30": {
411
+ "title": "\u201dNeural\u201d computation of decisions in optimization problems.",
412
+ "author": "J. J. Hopfield and D. W. Tank.",
413
+ "venue": "Biological Cybernetics, 52(3):141\u2013152, 1985.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "31": {
419
+ "title": "Non-negative sparse coding.",
420
+ "author": "P. O. Hoyer.",
421
+ "venue": "In Proceedings of the 12th IEEE Workshop on Neural Networks\nfor Signal Processing, pages 557\u2013565, 2002.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "32": {
427
+ "title": "Modeling receptive fields with non-negative sparse coding.",
428
+ "author": "P. O. Hoyer.",
429
+ "venue": "Neurocomputing, 52:547\u2013552, 2003.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "33": {
435
+ "title": "Receptive fields and functional architecture of monkey striate\ncortex.",
436
+ "author": "D. H. Hubel and T. N. Wiesel.",
437
+ "venue": "The Journal of Physiology, 195(1):215\u2013243, 1968.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "34": {
443
+ "title": "Weak and semi-contraction for network systems and diffusively-coupled\noscillators.",
444
+ "author": "S. Jafarpour, P. Cisneros-Velarde, and F. Bullo.",
445
+ "venue": "IEEE Transactions on Automatic Control, 67(3):1285\u20131300,\n2022.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "35": {
451
+ "title": "RNNs of RNNs: Recursive construction of stable assemblies of\nrecurrent neural networks.",
452
+ "author": "L. Kozachkov, M. Ennis, and J.-J. E. Slotine.",
453
+ "venue": "In Advances in Neural Information Processing Systems,\nDecember 2022.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "36": {
459
+ "title": "Achieving stable dynamics in neural circuits.",
460
+ "author": "L. Kozachkov, M. Lundqvist, J.-J. E. Slotine, and E. K. Miller.",
461
+ "venue": "PLoS Computational Biology, 16(8):1\u201315, 2020.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "37": {
467
+ "title": "Biologically plausible single-layer networks for nonnegative\nindependent component analysis.",
468
+ "author": "D. Lipshutz, C. Pehlevan, and D. B. Chklovskii.",
469
+ "venue": "Biological Cybernetics, 2022.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "38": {
475
+ "title": "On contraction analysis for non-linear systems.",
476
+ "author": "W. Lohmiller and J.-J. E. Slotine.",
477
+ "venue": "Automatica, 34(6):683\u2013696, 1998.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "39": {
483
+ "title": "Mathematical equivalence of two common forms of firing rate models of\nneural networks.",
484
+ "author": "K. D. Miller and F. Fumarola.",
485
+ "venue": "Neural Computation, 24(1):25\u201331, 2012.",
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "40": {
491
+ "title": "\u00dcber die Lage der Integralkurven gew\u00f6hnlicher\nDifferentialgleichungen.",
492
+ "author": "M. Nagumo.",
493
+ "venue": "Proceedings of the Physico-Mathematical Society of Japan. 3rd\nSeries, 24:551\u2013559, 1942.",
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "41": {
499
+ "title": "Emergence of simple-cell receptive field properties by learning a\nsparse code for natural images.",
500
+ "author": "B. A. Olshausen and D. J. Field.",
501
+ "venue": "Nature, 381(6583):607\u2013609, 1996.",
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "42": {
507
+ "title": "Sparse coding with an overcomplete basis set: A strategy employed by\nV1?",
508
+ "author": "B. A. Olshausen and D. J. Field.",
509
+ "venue": "Vision Research, 37(23):3311\u20133325, 1997.",
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "43": {
515
+ "title": "Sparse coding of sensory inputs.",
516
+ "author": "B. A. Olshausen and D. J. Field.",
517
+ "venue": "Current Opinion in Neurobiology, 14(4):481\u2013487, 2004.",
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "44": {
523
+ "title": "Proximal algorithms.",
524
+ "author": "N. Parikh and S. Boyd.",
525
+ "venue": "Foundations and Trends in Optimization, 1(3):127\u2013239, 2014.",
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "45": {
531
+ "title": "Sparse coding via thresholding and local competition in neural\ncircuits.",
532
+ "author": "C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen.",
533
+ "venue": "Neural Computation, 20(10):2526\u20132563, 2008.",
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "46": {
539
+ "title": "Global entrainment of transcriptional systems to periodic inputs.",
540
+ "author": "G. Russo, M. Di Bernardo, and E. D. Sontag.",
541
+ "venue": "PLoS Computational Biology, 6(4):e1000739, 2010.",
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "47": {
547
+ "title": "Fast sparse optimization via adaptive shrinkage.",
548
+ "author": "D. Regruto V. Cerone, S. M. Fosson.",
549
+ "venue": "In IFAC World Congress 2023, Yokohama, Japan, 2023.",
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "48": {
555
+ "title": "High-Dimensional Data Analysis with Low-Dimensional Models:\nPrinciples, Computation, and Applications.",
556
+ "author": "J. Wright and Y. Ma.",
557
+ "venue": "Cambridge University Press, 2022.",
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "49": {
563
+ "title": "Robust face recognition via sparse representation.",
564
+ "author": "J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma.",
565
+ "venue": "IEEE Transactions on Pattern Analysis & Machine\nIntelligence, 31(2):210\u2013227, 2008.",
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "50": {
571
+ "title": "Scalability in nonlinear network systems affected by delays and\ndisturbances.",
572
+ "author": "S. Xie, G. Russo, and R. H. Middleton.",
573
+ "venue": "IEEE Transactions on Control of Network Systems,\n8(3):1128\u20131138, 2021.",
574
+ "url": null
575
+ }
576
+ },
577
+ {
578
+ "51": {
579
+ "title": "A comprehensive review of stability analysis of continuous-time\nrecurrent neural networks.",
580
+ "author": "H. Zhang, Z. Wang, and D. Liu.",
581
+ "venue": "IEEE Transactions on Neural Networks and Learning Systems,\n25(7):1229\u20131262, 2014.",
582
+ "url": null
583
+ }
584
+ },
585
+ {
586
+ "52": {
587
+ "title": "Visual nonclassical receptive field effects emerge from sparse coding\nin a dynamical system.",
588
+ "author": "M. Zhu and C. J. Rozell.",
589
+ "venue": "PLoS Computational Biology, 9(8):e1003191, 2013.",
590
+ "url": null
591
+ }
592
+ }
593
+ ],
594
+ "url": "http://arxiv.org/html/2311.03821v3"
595
+ }
20240322/2311.04147v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2311.07440v2.json ADDED
@@ -0,0 +1,459 @@
1
+ {
2
+ "title": "Optimal approximation of unique continuation E.B.: supported by the EPSRC grants EP/T033126/1 and EP/V050400/1. M. N.: this work was supported by the project \u201dThe Development of Advanced and Applicative Research Competencies in the Logic of STEAM + Health\u201d /POCU/993/6/13/153310, project co-financed by the European Social Fund through The Romanian Operational Programme Human Capital 2014-2020. L. O.: Co-funded by the European Union (ERC, LoCal, 101086697). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Co-funded by the Research Council of Finland (347715, 3530969).",
3
+ "abstract": "We consider numerical approximations of ill-posed elliptic problems with conditional stability.\nThe notion of optimal error estimates is defined including both convergence with respect to discretization and perturbations in data.\nThe rate of convergence is determined by the conditional stability of the underlying continuous problem and the polynomial order of the approximation space.\nA proof is given that no approximation can converge at a better rate than that given by the definition without increasing the sensitivity to perturbations, thus justifying the concept.\nA recently introduced class of primal-dual finite element methods with weakly consistent regularisation is recalled and the associated error estimates are shown to be optimal in the sense of this definition.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Arguably one of the most fundamental results in finite element analysis is the best approximation result for the Galerkin method, known as Cea\u2019s Lemma Cea64 ###reference_b20###, which together with approximation estimates for finite element functions results in quasi-optimal error estimates for finite element methods Zlamal68 ###reference_b48###, Nit70 ###reference_b44###, Bab70 ###reference_b3###, Hel71 ###reference_b27###.\nThis result, that we will review below, essentially says that if a -order elliptic problem, , is approximated with -conforming finite elements of local polynomial order the error in -norm is of the order for a sufficiently smooth solution and that this rate is optimal compared to approximation: the best interpolant of the exact solution has similar accuracy.\nFor ill-posed elliptic problems the situation is different.\nOn the continuous level existence can only be guaranteed after regularisation of the problem.\nThe two main approaches are Tikhonov regularisation TA77 ###reference_b45### and quasi-reversibility LL69 ###reference_b31###.\nThese two approaches are strongly related (see for instance BR18 ###reference_b7###).\nThe main effort in the error analysis has been to estimate the perturbation induced by the addition of regularisation, and how to choose the associated regularisation operator or parameter Miller73 ###reference_b38###, Nat84 ###reference_b43###, Lu88 ###reference_b35###, Bour05 ###reference_b6###, IJ15 ###reference_b28###.\nThe error due to approximation in finite dimensional spaces of such regularised problems has also been analysed Nat77 ###reference_b41###, EHN88 ###reference_b24###, MP01 ###reference_b36###.\nThere is also a rich literature on projection methods for ill-posed problems where the discretisation serves as regularisation and refinement has to stop as soon as the effect of perturbations in data becomes dominant Natt77 ###reference_b42###, Engl83 ###reference_b22###, EN87 ###reference_b23###, HAG02 ###reference_b26###, Kalt00 ###reference_b30###.\nThese methods are often based on least squares methods and the convergence of the approximate solution to the exact solution for unperturbed data has been proven in several works.\nThere are also different stopping criteria for mesh refinement in order to avoid degeneration due to pollution from perturbations.\nHowever no results on rates of convergence where the discretisation errors and the perturbation errors are both included appear in these references.\nThe use of conditional stability (continuous dependence on data under the assumption of a certain a priori bound) to obtain more complete error estimates has been proposed in Bu13 ###reference_b9###, Bu14b ###reference_b10###, Bu16 ###reference_b11###, BO18 ###reference_b18### for a class of finite element methods based on weakly consistent regularisation/stabilisation in a primal-dual framework.\nHere stability is obtained through a combination of consistent stabilisation and Tikhonov regularisation, scaled with the mesh parameter to obtain weak consistency.\nThe upshot is that for this class of methods an error analysis exists, where the computational error is bounded in terms of the mesh parameter and perturbations of data, with constants depending on Sobolev norms of the exact solution.\nSimilarly to the well-posed case, the error estimates for this approach combine the stability of the physical problem with the numerical stability of the computational method and the approximability of the finite element 
space.\nContrary to the well-posed case, numerical stability can not be deduced from the physical stability, but has to be a consequence of the design of the stabilisation terms.\nThis means that the stabilisation in this framework is bespoke, and must be designed to combine optimal (weak) consistency and sufficient numerical stability.\nThere is often a tension between these two design criteria. As noted above, sometimes Tikhonov regularisation, scaled with the mesh parameter, may be used in the framework.\nAn interesting feature is that the bespoke character also allows for the integration of the dependence of the estimates on physical parameters and different problems regimes BNO19 ###reference_b16###, BNO20a ###reference_b17###.\nOther physical models that have been considered in this framework include data assimilation for fluids\nBO18 ###reference_b18###,BH18 ###reference_b13###,BBFV20 ###reference_b5###, or wave equations BFO20a ###reference_b12###.\nCommon for all these references is the fact that the error estimates reflect the stability of the continuous problem and the approximation order of the finite element space, which seems to be an optimality property of the methods. No rigorous proof, however, has been given for this optimality.\nThe objective of the present work is to show, in the model case of unique continuation for Laplace equation, that the proposed error estimates are indeed optimal.\nFor ill-posed PDEs that are conditionally stable, error estimates in terms of the modulus of continuity in the conditional stability, the consistency error and the best approximation error have also been obtained in DMS23 ###reference_b21###.\nBased on least squares with the norms and the regularisation term dictated by the conditional stability estimate, this variation of quasi-reversibility relies on working with discrete dual norms and constructing Fortin projectors.\nBy choosing the regularisation parameter in terms of the consistency error and the best approximation error, the obtained error bound reflects the conditional stability estimate (qualitatively optimal).\nConditional stability estimates have also been used to obtain some bounds on the generalization error for physics-informed neural networks solving ill-posed PDEs MM22 ###reference_b39###.\nThe question of optimality for both these kind of methods is included in our discussion.\nAnother well-known ill-posed problem is analytic continuation, which, similarly to unique continuation, possesses conditional stability under the assumption of an a priori bound.\nWe will not discuss this problem here; for its conditional stability/conditioning and numerical approximations, we refer the reader to Trefethen20 ###reference_b46###, Trefethen23 ###reference_b47### and the references therein."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Unique continuation problem",
15
+ "text": "Let and write and where is the open ball of radius , with the centre at the origin in .\nThe objective is to solve the continuation problem: given the restriction to the subset , find the restriction when satisfies in .\nFurther, letting and writing , it is classical Fritz60 ###reference_b29### that the following conditional stability estimate holds:\nwhere and the implicit constant do not depend on the harmonic function .\nEstimate (1 ###reference_###) is often called a three-ball inequality and in this case the constants can be given explicitly, see Theorem 2.1 ###reference_theorem1### below.\nWe may view the unique continuation problem as finding such that\nwith a priori knowledge on the size of the solution in as prescribed by the -norm in (1 ###reference_###)."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Motivation and outline",
21
+ "text": "The motivation of this paper comes from error estimates obtained for primal-dual finite element methods applied to this problem, with perturbed data , of the form\nwhich have been shown in BHL18 ###reference_b14###, BNO19 ###reference_b16###, BNO20a ###reference_b17### in different variations of the second order elliptic equation.\nHere is the exponent in (1 ###reference_###) and denotes the mesh parameter defining the characteristic length scale of the finite dimensional space.\nThis is in the case of piecewise affine approximation, however the estimate generalises in a natural way to higher order approximation, as we shall see below.\nOne can obtain a similar bound in the -norm over .\nIn the counterfactual case that one would then recover an estimate that is optimal compared to interpolation.\nHence a natural question is if the bound (3 ###reference_###) in the -norm can be improved upon, since it is suboptimal with one order in when compared to interpolation in the case .\nWe show in this paper that if the coefficient in (1 ###reference_###) is optimal and depends continuously on (Theorem 2.1 ###reference_theorem1###),\nthen regardless of the underlying method, no sequence of approximations to (2 ###reference_###) can converge with a rate better than that given by (3 ###reference_###) (Theorem 2.2 ###reference_theorem2###) without increasing the sensitivity to perturbations.\nWe also point out that although the discussion focuses on the finite element method, the definition of optimal convergence given and the proof of optimality hold for any method producing an approximating sequence in (or relying on such a sequence for the analysis).\nThe paper is organised as follows.\nIn Section 2 ###reference_### we will discuss the notion of optimality of finite element approximations.\nFirst we revisit the classical finite element analysis for well-posed problems.\nIn Section 2.2 ###reference_### we then discuss how the ideas of the well-posed case translate to the ill-posed case.\nThis leads us to a definition of optimal approximation for the problem (2 ###reference_###) and we prove in Section 2.3 ###reference_### that no approximation method can converge in a better way than that given by this definition.\nFinally in Section 3 ###reference_### we show that optimality can indeed be attained by presenting a finite element method with optimal error estimates which extend (3 ###reference_###) for higher order approximations."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Optimal error estimates for elliptic problems",
27
+ "text": "In this section we will first briefly recall the theory of optimal error estimates for Galerkin approximations of well-posed second order elliptic problems.\nWe then consider the ill-posed model problem (2 ###reference_###) and discuss how the construction that led to optimal approximation in the well-posed case can be adapted to this situation.\nThis leads us to a definition of optimality of approximate solutions in the ill-posed case. We let and .\nFor simplicity we consider the Poisson problem for :\nWe define the associated weak formulation by: find such that\nwhere and .\nIt is well known that is a coercive (-elliptic), continuous bilinear form on , and is bounded and continuous on .\nIt then follows from Lax-Milgram\u2019s lemma LM54 ###reference_b32### that the weak formulation admits a unique solution satisfying the stability estimate\nwhere and the dual norm is defined by\nIntroducing a finite dimensional subspace we define the Galerkin method, find such that\nwhere denotes a perturbed right hand side\n,\nwith and we assume to be known.\nThe associated linear system is invertible since is coercive.\nLet be the solution for the unperturbed right hand side, satisfying\nThen we have that , by Galerkin orthogonality,\nand that .\nSince is an isomorphic isometry, an application of the triangle-inequality for the approximation error gives that\nThis is equivalent to the classical result of Cea\u2019s lemma, but written in a form suitable for our purposes.\nIf and is the space of -conforming piecewise polynomial finite elements of order we immediately have by approximation that\nwhere stands for the seminorm.\nObserve how the Lipschitz stability of (6 ###reference_###) combines with the approximation properties of the finite element space to yield an optimal error estimate.\nPerturbations in data lead to stagnation of the error at the level of the perturbation."
28
+ },
29
+ {
30
+ "section_id": "2.1",
31
+ "parent_section_id": "2",
32
+ "section_name": "Optimal three-ball estimate",
33
+ "text": "Three-ball estimates such as (1 ###reference_###) for solutions of second-order elliptic equations are well-known in the literature, see e.g. the review ARRV09 ###reference_b1### or Bru95 ###reference_b8###.\nHowever, such results typically contain constants that depend implicitly on the geometry and the coefficients of the differential operator, and whose optimality is not clear Bru95 ###reference_b8###.\nWe aim here to give a result in the case of the Laplace operator which, barring optimality, is a variation of existing results in the literature, see (Miller62, ###reference_b37###, Theorem 1) and (kuusi2021, ###reference_b2###, Eq. (1.2)).\nWe will consider only the two and three dimensional cases, for which we prove the following three-ball estimate in -norms with optimal explicit constants.\nLet and be the open ball of radius .\nLet .\nThen for all harmonic functions there holds\nwhere\nMoreover, there does not exist such that\nFor any and there holds\nsee e.g. (kuusi2021, ###reference_b2###, Eq. (1.2)). We aim to transform this estimate into (10 ###reference_###).\nWe take the logarithm of (13 ###reference_###) and write to obtain\nNotice that , so that writing and , we obtain that\nyielding convexity of . Hence, for every and\nWe now set and . Then and , with given by (11 ###reference_###).\nTaking and there holds\nsince . With this choice, taking the exponential of (15 ###reference_###) gives (10 ###reference_###).\nSuppose now that (12 ###reference_###) holds for some .\nWe will show that .\nLet us consider first the two dimensional case.\nIdentifying with , consider the function which is harmonic for .\nThe following argument is similarly valid for its real part.\nUsing polar coordinates we have that, for ,\nNotice that we have equality in (10 ###reference_###) for .\nRecalling that , estimate (12 ###reference_###) reads as\nAs is arbitrary and we must have\n.\nIn other words,\nWe turn to the three dimensional case, and consider the function\nAs above, this is harmonic for .\nPassing to spherical coordinates there holds\nwhere the constant can be written using the Gamma function\nThe conclusion follows as in the two dimensional case.\n\u220e\nNote that the same explicit constants as in Theorem 2.1 ###reference_theorem1### appear in Hadamard\u2019s three-circle theorem (in -norms) for holomorphic functions.\nIn Theorem 2.1 ###reference_theorem1### we proved the optimality and continuous dependence of the exponent for unique continuation subject to the Laplace equation.\nIn a more general setting, a discussion of optimality of three-ball inequalities can be found in EFV06 ###reference_b25### for elliptic and parabolic problems.\nIn LNW10 ###reference_b33###; LUW10 ###reference_b34### some cases in fluid mechanics and elasticity are considered for which optimality is claimed."
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "Definition of optimal convergence for ill-posed problems with conditional stability",
39
+ "text": "In this section we will try to mimic the development in the well-posed case for the problem (2 ###reference_###) and point out where things go wrong.\nWe will do this with minimal reference to a particular approximation method to keep the discussion general.\nHowever, in Section 3 ###reference_### we introduce a method for which the programme can be carried out.\nFirst we will derive a weak formulation.\nThis time, since no boundary conditions are set on , we must consider the trial space .\nTo make form consistent with the problem, the test space must be chosen to be , as keeping would imply a homogeneous Neumann condition on the boundary.\nWe may then write a weak formulation of the problem (2 ###reference_###), find such that\n and\nWe know that the exact solution satisfies this formulation and that (1 ###reference_###) holds.\nAssume now that we have an approximation obtained using\nthe perturbed data , where .\nObserve that although for this data, most likely, no exact solution will exist, a discrete approximation of the unperturbed exact solution may still be constructed.\nSimilarly as before the error satisfies\nwhere\nObserve that even if is produced using a Galerkin procedure we can not use here the same techniques as when proving (8 ###reference_###), since the trial space in this case is bigger than the test space .\nAs before we would now like to apply a stability estimate, this time (1 ###reference_###), using the right hand side on the perturbation. However this is not possible, since there is no right hand side in (2 ###reference_###) and (1 ###reference_###).\nInstead we first decompose , where solves the well-posed problem\nand solves (2 ###reference_###) with .\nUsing the triangle inequality and then applying (6 ###reference_###) to and (1 ###reference_###) to we arrive at\nUsing once again the triangle inequality this leads to\nWe conclude that any approximation must satisfy the bound\nIf we assume that the term is bounded, then inequality (20 ###reference_###) gives an a posteriori bound for the\nerror on in the -norm.\nFor the sake of discussion, we will, for a moment, consider an approximation satisfying certain properties.\nThese properties can be thought of as design criteria for the numerical method, since as it turns out they lead to optimal convergence.\nIn Section 3 ###reference_### we construct a finite element method with these properties.\nBound on the equation residual:\nObserve that this means that the residual convergence in the ill-posed case is as good as the residual convergence in the well-posed case, see (9 ###reference_###).\nBound on the data fitting term:\nThis term is suboptimal by one order in compared to interpolation, but nothing can be gained by assuming better convergence since the\nterm always is dominated by the contribution from in the bound (20 ###reference_###).\nStrengthening the norm on on the other hand is possible provided the perturbation also has additional smoothness.\nFinally we need to assume an a priori bound on :\nThe rationale for this choice is that it is the strongest control that can be achieved through Tikhonov regularisation without affecting the convergence order when the data is unperturbed, assuming that the previous two assumptions hold.\nInjecting these three bounds in (20 ###reference_###) we get the error estimate\nA generic version of this estimate can be obtained by decoupling the rate of convergence from the sensitivity to perturbations, by considering the following bound\nfor .\nDenoting the upper 
bound here by ,\nwe see that has a unique critical point on , which is a minimum since . Hence we get that\nwith .\nNotice that when .\nConsidering convergence with respect to perturbations in this bound, one would like to have as large as possible.\nBased on the discussion above we here propose a definition of what it means that a family of approximations \nto an ill-posed problem of the form (2 ###reference_###) is optimally convergent.\nAssume that , solves the unique continuation problem (2 ###reference_###).\nLet be the largest value for which the conditional stability estimate (1 ###reference_###) holds.\nLet be a family of functions in .\nIf the family satisfies the inequality (25 ###reference_###) with , then we say that its convergence is optimal.\nAn optimal in the stability estimate (1 ###reference_###) is provided in Theorem 2.1 ###reference_theorem1###.\nWe prove below that, independently of the method used, no family of approximations to the solution of (2 ###reference_###) can satisfy (25 ###reference_###) with .\nIn particular, no method can exceed the convergence rate in (24 ###reference_###) without increasing the sensitivity to data perturbations nor can it improve this sensitivity without decreasing the convergence rate, i.e. there exist no with or , such that\nThe question of constructing such an optimal method with is currently an open question."
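Since the inline mathematics above was lost in extraction, the following LaTeX block sketches the standard optimization step behind the generic bound and its unique minimum. All symbols here are assumptions: the upper bound is taken as $G(h) = h^{a} + \varepsilon\,h^{-b}$ with rate exponent $a > 0$, sensitivity exponent $b > 0$, and perturbation size $\varepsilon$, matching the verbal description of two competing terms.

```latex
% Hedged reconstruction (a, b, epsilon are assumed placeholders):
% minimise G(h) = h^a + eps * h^{-b} over h > 0.
\[
  G'(h) = a\,h^{a-1} - b\,\varepsilon\,h^{-b-1} = 0
  \quad\Longrightarrow\quad
  h_* = \left(\frac{b\,\varepsilon}{a}\right)^{\frac{1}{a+b}},
\]
\[
  G(h_*) \;\lesssim\; \varepsilon^{\frac{a}{a+b}} .
\]
```

The trade-off discussed in the text is then visible directly: raising the approximation rate $a$ or lowering the sensitivity exponent $b$ both increase the exponent $a/(a+b)$, which is precisely what the optimality definition and the impossibility statement quantify.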
40
+ },
41
+ {
42
+ "section_id": "2.3",
43
+ "parent_section_id": "2",
44
+ "section_name": "Proof of optimality",
45
+ "text": "The following Caccioppoli-type inequality is known but we give a short proof for the convenience of the reader.\nLet and . Then for all satisfying in there holds\nDivide the interval in subintervals of equal length with\n, , . For an index \nchoose a such that\n in and write , where and .\nThen, if denotes the commutator ,\nand therefore by (6 ###reference_###) we have that\nHere in the last step we used that\nLet where denotes an arbitrary partial derivative of order , .\nThen . It follows from equation (26 ###reference_###) that\nBy applying this to all partial derivatives of order , we see that\nHence by applying this inequality sequentially for we see\nthat\nThis concludes the proof.\n\u220e\nLet and let , , .\nLet be the optimal exponent in Theorem 2.1 ###reference_theorem1###.\nLet satisfy in and let .\nConsider a family of mappings , for all .\nThen there exist no with , such that\nIn particular, there exist no with or such that (27 ###reference_###) holds.\nWe give a proof by contradiction.\nAssume that there exist with\nsuch that (27 ###reference_###) holds.\nTaking and for satisfying in , the estimate (27 ###reference_###) reduces to\nUsing (27 ###reference_###) again with and , we get\nHence\nWe will write from now on, and recall that is an arbitrary solution to in .\nFor a nonzero , we define\nand choose such that\nthat is,\nWith this choice, inequality (28 ###reference_###) reduces to\nwhich trivially holds for the zero solution also.\nObserve that (29 ###reference_###) would right away contradict the optimality of in Theorem 2.1 ###reference_theorem1### if the -norm on its right-hand side was an -norm.\nTo weaken this norm, we can use Lemma 1 ###reference_ma1### to get that for .\nHence, using this bound in (29 ###reference_###) leads to\nWe now denote by the optimal exponent corresponding to in the three-ball estimate in Theorem 2.1 ###reference_theorem1###, for which\nfor any harmonic function .\nThis means that such an inequality cannot hold with an exponent larger than .\nHowever, since depends continuously on , by considering sufficiently close to we can get arbitrarily close to , i.e. .\nThus inequality (30 ###reference_###) holds with , which contradicts the optimality of in (31 ###reference_###).\nLet us finally show that if with or , then .\nConsider first the case .\nAs , , there holds and\nFor the case , we have that for some ,\nand\nwhich concludes the proof.\n\u220e\nThe proof of Theorem 2.2 ###reference_theorem2### is still valid if we assume (27 ###reference_###) to hold with a weaker norm instead of .\nThe approximation in Theorem 2.2 ###reference_theorem2### depends only on and .\nThis result does not exclude the possibility of a regularisation method that uses more information, for example the size of the perturbation .\nThe optimal method that we present in the following section can also use this information, see Remark 6 ###reference_ark6###."
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Primal-dual finite element methods with weakly consistent regularisation",
51
+ "text": "In this section we will use a finite element method with weakly consistent stabilisation to construct a sequence of approximate solutions for unique continuation (2 ###reference_###) that satisfy the error estimate (24 ###reference_###), showing that the optimal convergence for this ill-posed problem can be attained by a discrete approximation method.\nThis discussion is based on ideas from Bu16 ###reference_b11###; BHL18 ###reference_b14###, modified to match the assumptions of the theoretical developments above.\nLet be a quasi-uniform family of triangulations of , where triangles with curved boundaries are allowed so that the the covering of is exact Zlamal70 ###reference_b49###; Bern89 ###reference_b4###.\nOn these meshes we define a finite element space , consisting of piecewise polynomials of order (after mapping of the triangles to a reference element).\nWe also let .\nIt then follows that there exist interpolants (Bern89, ###reference_b4###, Corollary 4.1) and\n (Bern89, ###reference_b4###, Corollary 5.2) for which the following interpolation estimates hold\nwhere and is the Hessian of , and\nWe will also use the broken norm defined by\nTo set up the numerical method, we formulate the continuation problem (2 ###reference_###) as pde-constrained optimization and consider the Lagrangian ,\nBy taking its saddle points, we define the finite element method as follows: find such that\nfor all , with\nwhere denotes a face of a triangle and the jump of the gradient over a face is defined by\n\nfor , with the outward pointing unit normal of the triangle .\nFor a more compact formulation we introduce the global form ,\nto write: find such that\nfor all .\nObserve that this form satisfies the consistency property\nTo show that this method satisfies the error bound (24 ###reference_###),\nwe only need to verify that it satisfies (21 ###reference_###), (22 ###reference_###) and (23 ###reference_###) (which represent the design criteria for the method).\nTo this end we introduce the norm\nand we observe that the formulation satisfies the positivity property\nwhich ensures the existence of a discrete solution for all .\nWe proceed by first proving convergence in the -norm, which immediately gives (22 ###reference_###).\nThe proof of the other two bounds and satisfaction of (24 ###reference_###) then follow as a corollary.\nFirst we establish an approximation result for the norm.\nLet , then there holds\nBy the definition of the -norm we see that\nBy the approximation property (32 ###reference_###) we have that\nFor the term measuring the jump of over element faces we note that\nwhere we used the regularity of and the trace inequality MS99 ###reference_b40###\nWe conclude by applying (32 ###reference_###) once again and summing over all the faces.\n\u220e\nLet denote the solution to (36 ###reference_###) and let be the solution to (2 ###reference_###), then there holds\nFirst we decompose the error in the continuous and discrete parts.\nBy the triangle inequality and Lemma 2 ###reference_ma2### it is enough to bound .\nUsing (38 ###reference_###) and (37 ###reference_###) we have\nFor the last two terms on the right hand side we have\nFinally, the following continuity holds\nwhere\nTo prove the continuity (39 ###reference_###) recall that by definition\nUsing the Cauchy-Schwarz inequality we have that\nand\nWe end the proof by observing that by equation (32 ###reference_###) and Lemma 2 ###reference_ma2### there holds\n\u220e\nUnder the same hypothesis as for Proposition 1 
###reference_position1### there holds\nand\nFinally satisfies the error bound (24 ###reference_###).\nFirst we observe that the third claim is an immediate consequence of the first two and Proposition 1 ###reference_position1###.\nIndeed, this follows from the discussion of Section 2.2 ###reference_###, using the error bound (20 ###reference_###) and equations (21 ###reference_###) - (23 ###reference_###).\nThe first inequality is immediate by Proposition 1 ###reference_position1### observing that\nand, for the first term in the right hand side,\nFor the second inequality, by definition\nUsing (37 ###reference_###), followed by integration by parts, we see that for all\nChoosing and using the Cauchy-Schwarz inequality in the first term of the right hand side\nand the continuity of in the second, followed by (33 ###reference_###), we see that\nThe conclusion now follows using Proposition 1 ###reference_position1### to obtain the desired bound\n\u220e\nBoth for the well-posed problem (4 ###reference_###) and the ill-posed problem (2 ###reference_###) there is a lower bound for how well the exact solution can be approximated if the data are perturbed.\nIn the well-posed case the limit is trivially given by in (9 ###reference_###), whereas in the ill-posed case the lower bound occurs when the approximation error term and the perturbation term are equal in (24 ###reference_###), that is,\nThis gives a theoretical lower bound on for which refining the mesh decreases the error bound,\nIf is known the numerical scheme can be designed to stagnate at the level of the best approximation, by modifying the last term in the definition of the stabilisation (35 ###reference_###) to read\n. This shows the connection between this stabilising term and classical Tikhonov regularisation and similar tools as for the latter can be applied here to optimise the parameter compared to perturbations in data.\nIt is straighforward to show that this leads to stagnation at\nHere the implicit constant may depend on . A similar kind of bound was obtained in (DMS23, ###reference_b21###, Theorem 2.2)."
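The remark above locates the stagnation level by equating the approximation and perturbation terms. Since the displayed formulas were lost in extraction, the short Python sketch below illustrates the mechanism with an assumed generic bound err(h) ≈ h^a + δ·h^(−b); the exponents a, b and the perturbation size δ are placeholders, not the paper's values.

```python
# Minimal sketch (assumed generic bound, not the paper's exact constants):
# refining the mesh h stops paying off once the perturbation term
# delta * h**(-b) overtakes the approximation term h**a.
import numpy as np

a, b = 1.0, 1.0          # assumed rate / sensitivity exponents
delta = 1e-4             # assumed data-perturbation size

h = np.logspace(-4, 0, 200)           # mesh sizes from 1e-4 to 1
bound = h**a + delta * h**(-b)        # the two competing terms

i_star = np.argmin(bound)
print(f"best mesh size   h* ~ {h[i_star]:.2e}")
print(f"stagnation level    ~ {bound[i_star]:.2e}")
# Equating the two terms, h**a = delta * h**(-b), gives
# h* = delta**(1/(a+b)) and a stagnation level ~ delta**(a/(a+b)).
print(f"theory: h* = {delta**(1/(a+b)):.2e}, "
      f"level = {delta**(a/(a+b)):.2e}")
```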
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion",
57
+ "text": "In this paper we have shown that the convergence order of the approximation error for unique continuation problems, obtained by combining the approximation orders of the data fitting and the pde-residual with the conditional stability, can not be improved without increasing the sensitivity to perturbations.\nThis shows that the asymptotic accuracy of the methods for unique continuation discussed in Bu14b ###reference_b10###; Bu16 ###reference_b11###; BHL18 ###reference_b14###; BLO18 ###reference_b15###; BNO19 ###reference_b16###; BNO20a ###reference_b17###; DMS23 ###reference_b21### is optimal, in the sense that it is impossible to design a method with better convergence properties.\nThe only remaining possibilities to enhance the accuracy of approximation methods is either to resort to adaptivity, or to introduce some additional a priori assumption to make the continuous problem more stable, such as finite dimensionality of target quantities (see burman2023finite ###reference_b19###)."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {},
62
+ "image_paths": {},
63
+ "validation": true,
64
+ "references": [
65
+ {
66
+ "1": {
67
+ "title": "Inverse Problems 25(12), 123004, 47 (2009).",
68
+ "author": "Alessandrini, G., Rondi, L., Rosset, E., Vessella, S.: The stability for the\nCauchy problem for elliptic equations.",
69
+ "venue": "DOI 10.1088/0266-5611/25/12/123004.",
70
+ "url": null
71
+ }
72
+ },
73
+ {
74
+ "2": {
75
+ "title": "Preprint arXiv 2107.14248",
76
+ "author": "Armstrong, S., Kuusi, T., Smart, C.: Optimal unique continuation for periodic\nelliptic equations on large scales (2021).",
77
+ "venue": null,
78
+ "url": null
79
+ }
80
+ },
81
+ {
82
+ "3": {
83
+ "title": "Numer. Math. 16, 322\u2013333 (1970/71).",
84
+ "author": "Babu\u0161ka, I.: Error-bounds for finite element method.",
85
+ "venue": "DOI 10.1007/BF02165003.",
86
+ "url": null
87
+ }
88
+ },
89
+ {
90
+ "4": {
91
+ "title": "SIAM J. Numer. Anal. 26(5), 1212\u20131240 (1989).",
92
+ "author": "Bernardi, C.: Optimal finite-element interpolation on curved domains.",
93
+ "venue": "DOI 10.1137/0726068.",
94
+ "url": null
95
+ }
96
+ },
97
+ {
98
+ "5": {
99
+ "title": "Inverse Problems 36(8), 085003\u201385024 (2020).",
100
+ "author": "Boulakia, M., Burman, E., Fern\u00e1ndez, M.A., Voisembert, C.: Data\nassimilation finite element method for the linearized Navie-Stokes\nequations in the low Reynolds regime.",
101
+ "venue": "DOI 10.1088/1361-6420/ab9161.",
102
+ "url": null
103
+ }
104
+ },
105
+ {
106
+ "6": {
107
+ "title": "Inverse Problems 21(3), 1087\u20131104 (2005).",
108
+ "author": "Bourgeois, L.: A mixed formulation of quasi-reversibility to solve the Cauchy\nproblem for Laplace\u2019s equation.",
109
+ "venue": "DOI 10.1088/0266-5611/21/3/018.",
110
+ "url": null
111
+ }
112
+ },
113
+ {
114
+ "7": {
115
+ "title": "ESAIM Math. Model. Numer. Anal. 52(1), 123\u2013145 (2018).",
116
+ "author": "Bourgeois, L., Recoquillay, A.: A mixed formulation of the Tikhonov\nregularization and its application to inverse PDE problems.",
117
+ "venue": "DOI 10.1051/m2an/2018008.",
118
+ "url": null
119
+ }
120
+ },
121
+ {
122
+ "8": {
123
+ "title": "Journal d\u2019Analyse Mathematique 65(1), 179\u2013206 (1995)",
124
+ "author": "Brummelhuis, R.: Three-spheres theorem for second order elliptic equations.",
125
+ "venue": null,
126
+ "url": null
127
+ }
128
+ },
129
+ {
130
+ "9": {
131
+ "title": "SIAM J. Sci. Comput. 35(6), A2752\u2013A2780 (2013).",
132
+ "author": "Burman, E.: Stabilized finite element methods for nonsymmetric, noncoercive,\nand ill-posed problems. Part I: Elliptic equations.",
133
+ "venue": "DOI 10.1137/130916862.",
134
+ "url": null
135
+ }
136
+ },
137
+ {
138
+ "10": {
139
+ "title": "C. R. Math. Acad. Sci. Paris 352(7-8), 655\u2013659 (2014).",
140
+ "author": "Burman, E.: Error estimates for stabilized finite element methods applied to\nill-posed problems.",
141
+ "venue": "DOI 10.1016/j.crma.2014.06.008.",
142
+ "url": null
143
+ }
144
+ },
145
+ {
146
+ "11": {
147
+ "title": "In: Building bridges: connections and challenges in modern approaches\nto numerical partial differential equations, Lect. Notes Comput. Sci.\nEng., vol. 114, pp. 93\u2013127. Springer, [Cham] (2016)",
148
+ "author": "Burman, E.: Stabilised finite element methods for ill-posed problems with\nconditional stability.",
149
+ "venue": null,
150
+ "url": null
151
+ }
152
+ },
153
+ {
154
+ "12": {
155
+ "title": "Math. Comp. 89(324), 1681\u20131709 (2020).",
156
+ "author": "Burman, E., Feizmohammadi, A., Oksanen, L.: A finite element data assimilation\nmethod for the wave equation.",
157
+ "venue": "DOI 10.1090/mcom/3508.",
158
+ "url": null
159
+ }
160
+ },
161
+ {
162
+ "13": {
163
+ "title": "Math. Comp. 87(311), 1029\u20131050 (2018).",
164
+ "author": "Burman, E., Hansbo, P.: Stabilized nonconforming finite element methods for\ndata assimilation in incompressible flows.",
165
+ "venue": "DOI 10.1090/mcom/3255.",
166
+ "url": null
167
+ }
168
+ },
169
+ {
170
+ "14": {
171
+ "title": "Inverse Problems 34(3), 035004, 36 (2018).",
172
+ "author": "Burman, E., Hansbo, P., Larson, M.G.: Solving ill-posed control problems by\nstabilized finite element methods: an alternative to Tikhonov\nregularization.",
173
+ "venue": "DOI 10.1088/1361-6420/aaa32b.",
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "15": {
179
+ "title": "SIAM J. Numer. Anal. 56(6), 3480\u20133509 (2018).",
180
+ "author": "Burman, E., Larson, M.G., Oksanen, L.: Primal-dual mixed finite element methods\nfor the elliptic Cauchy problem.",
181
+ "venue": "DOI 10.1137/17M1163335.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "16": {
187
+ "title": "J. Math. Pures Appl. (9) 129, 1\u201322 (2019).",
188
+ "author": "Burman, E., Nechita, M., Oksanen, L.: Unique continuation for the Helmholtz\nequation using stabilized finite element methods.",
189
+ "venue": "DOI 10.1016/j.matpur.2018.10.003.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "17": {
195
+ "title": "Numer. Math. 144(3), 451\u2013477 (2020).",
196
+ "author": "Burman, E., Nechita, M., Oksanen, L.: A stabilized finite element method for\ninverse problems subject to the convection-diffusion equation. I:\ndiffusion-dominated regime.",
197
+ "venue": "DOI 10.1007/s00211-019-01087-x.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "18": {
203
+ "title": "Numer. Math. 139(3), 505\u2013528 (2018).",
204
+ "author": "Burman, E., Oksanen, L.: Data assimilation for the heat equation using\nstabilized finite element methods.",
205
+ "venue": "DOI 10.1007/s00211-018-0949-3.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "19": {
211
+ "title": "Preprint arXiv 2305.06800",
212
+ "author": "Burman, E., Oksanen, L.: Finite element approximation of unique continuation of\nfunctions with finite dimensional trace (2023).",
213
+ "venue": null,
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "20": {
219
+ "title": "Ann. Inst. Fourier (Grenoble) 14(fasc. 2), 345\u2013444 (1964)",
220
+ "author": "C\u00e9a, J.: Approximation variationnelle des probl\u00e8mes aux limites.",
221
+ "venue": null,
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "21": {
227
+ "title": "ESAIM Math. Model. Numer. Anal. 57(4), 2227\u20132255 (2023).",
228
+ "author": "Dahmen, W., Monsuur, H., Stevenson, R.: Least squares solvers for ill-posed\nPDEs that are conditionally stable.",
229
+ "venue": "DOI 10.1051/m2an/2023050.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "22": {
235
+ "title": "In: Numerical treatment of inverse problems in differential and\nintegral equations (Heidelberg, 1982), Progr. Sci. Comput., vol. 2,\npp. 345\u2013354. Birkh\u00e4user Boston, Boston, MA (1983)",
236
+ "author": "Engl, H.W.: Regularization by least-squares collocation.",
237
+ "venue": null,
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "23": {
243
+ "title": "In: Model optimization in exploration geophysics (Berlin, 1986),\nTheory Practice Appl. Geophys., vol. 1, pp. 73\u201392. Friedr. Vieweg,\nBraunschweig (1987)",
244
+ "author": "Engl, H.W., Neubauer, A.: On projection methods for solving linear ill-posed\nproblems.",
245
+ "venue": null,
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "24": {
251
+ "title": "Proc. Amer. Math. Soc. 102(3), 587\u2013592 (1988).",
252
+ "author": "Engl, H.W., Neubauer, A.: Convergence rates for Tikhonov regularization in\nfinite-dimensional subspaces of Hilbert scales.",
253
+ "venue": "DOI 10.2307/2047228.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "25": {
259
+ "title": "Appl. Anal. 85(1-3), 205\u2013223 (2006).",
260
+ "author": "Escauriaza, L., Fern\u00e1ndez, F.J., Vessella, S.: Doubling properties of\ncaloric functions.",
261
+ "venue": "DOI 10.1080/00036810500277082.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "26": {
267
+ "title": "Math. Model. Anal. 7(2), 241\u2013252 (2002)",
268
+ "author": "H\u00e4marik, U., Avi, E., Ganina, A.: On the solution of ill-posed problems by\nprojection methods with a posteriori choice of the discretization level.",
269
+ "venue": null,
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "27": {
275
+ "title": "J. Approximation Theory 4, 165\u2013182 (1971).",
276
+ "author": "Helfrich, H.P.: Optimale lineare Approximation beschr\u00e4nkter Mengen in\nnormierten R\u00e4umen.",
277
+ "venue": "DOI 10.1016/0021-9045(71)90027-x.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "28": {
283
+ "title": "World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ (2015).",
284
+ "author": "Ito, K., Jin, B.: Inverse problems, Series on Applied Mathematics,\nvol. 22.",
285
+ "venue": "Tikhonov theory and algorithms",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "29": {
291
+ "title": "Comm. Pure Appl. Math. 13, 551\u2013585 (1960)",
292
+ "author": "John, F.: Continuous dependence on data for solutions of partial differential\nequations with a prescribed bound.",
293
+ "venue": null,
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "30": {
299
+ "title": "Inverse Problems 16(5), 1523\u20131539 (2000).",
300
+ "author": "Kaltenbacher, B.: Regularization by projection with a posteriori discretization\nlevel choice for linear and nonlinear ill-posed problems.",
301
+ "venue": "DOI 10.1088/0266-5611/16/5/322.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "31": {
307
+ "title": "Translated from the French edition and edited by Richard Bellman.\nModern Analytic and Computational Methods in Science and Mathematics, No. 18.\nAmerican Elsevier Publishing Co., Inc., New York (1969)",
308
+ "author": "Latt\u00e8s, R., Lions, J.L.: The method of quasi-reversibility. Applications to\npartial differential equations.",
309
+ "venue": null,
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "32": {
315
+ "title": "In: Contributions to the theory of partial differential equations,\nAnnals of Mathematics Studies, no. 33, pp. 167\u2013190. Princeton University\nPress, Princeton, N. J. (1954)",
316
+ "author": "Lax, P.D., Milgram, A.N.: Parabolic equations.",
317
+ "venue": null,
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "33": {
323
+ "title": "Duke Math. J. 155(1), 189\u2013204 (2010).",
324
+ "author": "Lin, C.L., Nakamura, G., Wang, J.N.: Optimal three-ball inequalities and\nquantitative uniqueness for the Lam\u00e9 system with Lipschitz\ncoefficients.",
325
+ "venue": "DOI 10.1215/00127094-2010-054.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "34": {
331
+ "title": "Discrete Contin. Dyn. Syst. 28(3), 1273\u20131290 (2010).",
332
+ "author": "Lin, C.L., Uhlmann, G., Wang, J.N.: Optimal three-ball inequalities and\nquantitative uniqueness for the Stokes system.",
333
+ "venue": "DOI 10.3934/dcds.2010.28.1273.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "35": {
339
+ "title": "Math. Comp. 51(183), 107\u2013131 (1988).",
340
+ "author": "Lukas, M.A.: Convergence rates for regularized solutions.",
341
+ "venue": "DOI 10.2307/2008582.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "36": {
347
+ "title": "SIAM J. Numer. Anal. 38(6), 1999\u20132021 (2001).",
348
+ "author": "Math\u00e9, P., Pereverzev, S.V.: Optimal discretization of inverse problems in\nHilbert scales. Regularization and self-regularization of projection\nmethods.",
349
+ "venue": "DOI 10.1137/S003614299936175X.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "37": {
355
+ "title": "Ph.D. thesis, Rice University (1962)",
356
+ "author": "Miller, K.: Three circle theorems in partial differential equations and\napplications to improperly posed problems.",
357
+ "venue": null,
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "38": {
363
+ "title": "In: Symposium on Non-Well-Posed Problems and Logarithmic\nConvexity (Heriot-Watt Univ., Edinburgh, 1972), pp. 161\u2013176.\nLecture Notes in Math., Vol. 316 (1973)",
364
+ "author": "Miller, K.: Stabilized quasi-reversibility and other nearly-best-possible\nmethods for non-well-posed problems.",
365
+ "venue": null,
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "39": {
371
+ "title": "IMA Journal of Numerical Analysis 42(2), 981\u20131022 (2022).",
372
+ "author": "Mishra, S., Molinaro, R.: Estimates on the generalization error of\nphysics-informed neural networks for approximating a class of inverse\nproblems for PDEs.",
373
+ "venue": "DOI 10.1093/imanum/drab032.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "40": {
379
+ "title": "SIAM J. Numer. Anal. 36(1), 251\u2013274 (1999).",
380
+ "author": "Monk, P., S\u00fcli, E.: The adaptive computation of far-field patterns by a\nposteriori error estimation of linear functionals.",
381
+ "venue": "DOI 10.1137/S0036142997315172.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "41": {
387
+ "title": "RAIRO Anal. Num\u00e9r. 11(3), 271\u2013278 (1977).",
388
+ "author": "Natterer, F.: The finite element method for ill-posed problems.",
389
+ "venue": "DOI 10.1051/m2an/1977110302711.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "42": {
395
+ "title": "Numer. Math. 28(3), 329\u2013341 (1977).",
396
+ "author": "Natterer, F.: Regularisierung schlecht gestellter Probleme durch\nProjektionsverfahren.",
397
+ "venue": "DOI 10.1007/BF01389972.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "43": {
403
+ "title": "Applicable Anal. 18(1-2), 29\u201337 (1984).",
404
+ "author": "Natterer, F.: Error bounds for Tikhonov regularization in Hilbert scales.",
405
+ "venue": "DOI 10.1080/00036818408839508.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "44": {
411
+ "title": "Arch. Rational Mech. Anal. 36, 348\u2013355 (1970).",
412
+ "author": "Nitsche, J.: Lineare Spline-Funktionen und die Methoden von Ritz\nf\u00fcr elliptische Randwertprobleme.",
413
+ "venue": "DOI 10.1007/BF00282271.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "45": {
419
+ "title": "V. H. Winston & Sons, Washington, D.C.: John Wiley & Sons, New\nYork-Toronto, Ont.-London (1977).",
420
+ "author": "Tikhonov, A.N., Arsenin, V.Y.: Solutions of ill-posed problems.",
421
+ "venue": "Translated from the Russian, Preface by translation editor Fritz\nJohn, Scripta Series in Mathematics",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "46": {
427
+ "title": "BIT Numerical Mathematics 60(4), 901\u2013915 (2020).",
428
+ "author": "Trefethen, L.N.: Quantifying the ill-conditioning of analytic continuation.",
429
+ "venue": "DOI 10.1007/s10543-020-00802-7.",
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "47": {
435
+ "title": "Japan Journal of Industrial and Applied Mathematics 40(3),\n1587\u20131636 (2023).",
436
+ "author": "Trefethen, L.N.: Numerical analytic continuation.",
437
+ "venue": "DOI 10.1007/s13160-023-00599-2.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "48": {
443
+ "title": "Numer. Math. 12, 394\u2013409 (1968).",
444
+ "author": "Zl\u00e1mal, M.: On the finite element method.",
445
+ "venue": "DOI 10.1007/BF02161362.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "49": {
451
+ "title": "SIAM J. Numer. Anal. 10, 229\u2013240 (1973).",
452
+ "author": "Zl\u00e1mal, M.: Curved elements in the finite element method. I.",
453
+ "venue": "DOI 10.1137/0710022.",
454
+ "url": null
455
+ }
456
+ }
457
+ ],
458
+ "url": "http://arxiv.org/html/2311.07440v2"
459
+ }
20240322/2311.10278v2.json ADDED
@@ -0,0 +1,477 @@
1
+ {
2
+ "title": "Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint",
3
+ "abstract": "Human fingerprints serve as one unique and powerful characteristic for each person, from which policemen can recognize the identity. Similar to humans, many natural bodies and intrinsic mechanical qualities can also be uniquely identified from surface characteristics. To measure the elasto-plastic properties of one material, one formally sharp indenter is pushed into the measured body under constant force and retracted, leaving a unique residual imprint of the minute size from several micrometers to nanometers. However, one great challenge is how to map the optical image of this residual imprint into the real wanted mechanical properties, i.e.,, the tensile force curve. In this paper, we propose a novel method to use multi-fidelity neural networks (MFNN) to solve this inverse problem. We first build up the NN model via pure simulation data, and then bridge the sim-to-real gap via transfer learning. Considering the difficulty of collecting real experimental data, we use NN to dig out the unknown physics and also implant the known physics into the transfer learning framework, thus highly improving the model stability and decreasing the data requirement. The final constructed model only needs three-shot calibration of real materials. We tested the final model across 20 real materials and achieved satisfying accuracy. This work serves as one great example of applying machine learning into scientific research, especially under the constraints of data limitation and fidelity variance.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Over the past century, humans have been searching for the optimal natural or artificial materials with most suitable mechanical properties. To accelerate this searching process, persistent research has focused on developing platforms for high-throughout (HT) synthesis and characterization of materials (Ament et al., 2021 ###reference_b1###; Erps et al., 2021 ###reference_b10###). Benefiting from its intrinsic experimental simplicity and broad applicability, indentation has been considered as a paradigm for HT probe of mechanical properties of materials (Chen et al., 2021 ###reference_b4###; Lu et al., 2020 ###reference_b23###). Indentation requires minimal specimen preparation and mounting, rewards hundreds of data from a single specimen, and can probe materials across from nano to macro scales via varied loads (Doerner & Nix, 1986 ###reference_b9###; Chen et al., 2022 ###reference_b5###). Using one sharp indenter to push into the material surface under the constant load, the material surface will be crashed to form a crown-like pattern which contains plentiful information to reveal elasto-plastic properties of materials. However, since the inverse mapping from the crashed pattern into formal material parameters is quite complex, many former researchers regard the pattern information as a rough criterion (Jeong et al., 2021 ###reference_b19###). On the other hand, compared to the traditional characterization methods like tensile testing, direct measurement of optical profile is more convenient. Optical measurement has been widely applied in many areas, such as biological systems (Cooke et al., 2021 ###reference_b6###; Smith et al., 2018 ###reference_b29###), particle tracking (Zalevsky et al., 2009 ###reference_b34###) and vibrations (Cuomo et al., 2022 ###reference_b7###; Zhong et al., 2021 ###reference_b35###).\nThe rapid development in artificial intelligence has aroused one great opportunity of AI for science (Taddeo & Floridi, 2018 ###reference_b31###; Degrave et al., 2022 ###reference_b8###; Fawzi et al., 2022 ###reference_b12###). One big limitation is that the data in many scientific areas are not plentiful, especially for high-fidelity experimental data. To overcome this problem, many new machine learning frameworks incorporating physical constraints have been proposed, i.e.,, physics-informed neural networks (PINN) (Cuomo et al., 2022 ###reference_b7###; Raissi et al., 2019 ###reference_b25###). To decrease the requirements of real experimental data, some pre-training and transfer learning frameworks have been proposed. One rough model will be trained with many simulation data and then fine-tuned by some experimental data. In this way, the requirements of high-fidelity data will be decreased. Another big issue in scientific AI is that many problems in scientific areas are inverse problems, i.e.,, the input-output relation may be ill-conditioned (Tarantola, 2005 ###reference_b32###; Meng et al., 2017 ###reference_b24###; Song et al., 2023 ###reference_b30###; Hu et al., 2023b ###reference_b18###). Sometimes, different input parameters correspond to the similar or even the same output results, thus making the NN prediction to be inaccurate when trying to infer the results back to the inputs. This is referred as the non-unique problem.\nHerein, we attempt to optimize the inverse mapping from the residual pattern formed by indentation into the formal stress-strain curve of materials through MFNN. 
To make sure our problem is not ill-conditioned so that the model can predict well, we first apply NN to explore the forward problem and then combine the optimization method to search the possibility of non-uniqueness. This method assists to determine what suitable features to choose. Our scientific discovery is that instead of choosing the load-displacement curve, the residual pattern is already informative enough for predicting stress-strain relation. This conclusion turns out to be consistent with former mechanical theories of strain fields.\nAs for the NN architecture, the whole process is achieved by first building up an initial NN model based on a large amount of 2D simulation data through finite element methods (FEM), and then transferring it into the 3D model using some 3D FEM simulation data. The transferred model gets further fine-tuned by incorporating some real experimental data and corresponding physical constraints into the model. To make the transfer learning process more efficient and stabilized, two physical parameters (friction coefficient and Poisson\u2019s ratio ) are tuned in the simulation set, resulting into an ensemble of three parallel NNs. The final model is constructed from these NNs. It turns out that this step of tuning physical parameters is quite critical to the prediction accuracy. The underlying principle is owing to the closer sim-to-real gap after tuning the friction coefficient and Poisson\u2019s ratio . The final result owns satisfying accuracy when predicting the real stress-strain relation of the testing materials. We further notice that the initial pre-assumed material model to describe material behaviors will greatly influence the final prediction result. Hence, a method without pre-assuming material model is proposed and achieves relatively great accuracy. In summary, the main contributions of the paper include:\nTo the best of our knowledge, the first attempt to apply NN framework to inverting the optical profile of the indentation imprint to the real material elasto-plastic properties. The models with and without pre-defined mechanical laws are respectively trained and discussed.\nDesigning the MFNN framework to combine multi-fidelity data from 2D and 3D FEM simulation, and real experiments. Incorporating the physical intuitions into the MFNN model to decrease the data requirements and stabilize the model. The constructed model only needs real experimental data of 3 types of materials. We also contributed experimental results of other 20 types of materials. It turns out that our model achieved an average 3.4% relative error across 20 testing real materials.\nApplying forward NN and BFGS (Yuan, 1991 ###reference_b33###) optimization to explore the non-unique issue in the inverse problem and dig out the required features for training. Approaches based on NN to approximate forward models are a powerful tool that have recently seen more use in computational imaging. Our work supplement that using such an approach to speed up and differentiate FEM is an effective application.\nRelease of the dataset and code for indentation and corresponding residual imprint features. Our MFNN framework can serve as a benchmark in this problem.\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Backgrounds and methods",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Indentation",
21
+ "text": "As shown in Figure 1 ###reference_###A, one formal-shaped indenter (here is four-fold) is pushed into the material surface to create one minute crater (also called pile-up) (Figure 1 ###reference_###C). There are two types of indentation techniques. The first technique is to apply the constant load () and only use the division of load () and projection area () of the crater to describe material\u2019s property. That is called hardness ().\nWhile this method is convenient, the final measured result is not directly related to material parameters, i.e.,, the stress-strain relation. To acquire more information, the second method records the load evolution() v.s.,indenting depth() (Figure 2 ###reference_###A right) and tries to map from the load-depth curve into material stress-strain relation ((Figure 2 ###reference_###A left). However, many former research reveal that the load-depth curve may not be informative enough (Chen et al., 2007 ###reference_b3###; Campbell et al., 2018 ###reference_b2###). Here we show that the first technique (hardness measuring) plus optical profile can already make the inverse problem well-conditioned, even without the need for load-depth curve."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Optical profilometer",
27
+ "text": "Our goal is to utilize the optical profilometer (Fainman et al., 1982 ###reference_b11###) to measure the surface height map of the residual imprint. Specifically, as shown in Figure 1 ###reference_###B, a beam of light from a single source is split by the interferometer into two separate beams. Each of these beams travel separate paths, one onto a reference surface and the other onto the surface to be measured. The beams are then recombined resulting in an interference pattern. An imaging device, usually a CCD array, is used to collect this information. By moving the interferometer vertically away from the measurement surface, the point at which this interference occurs can be found for each pixel of the CCD. By tracking the position of the interferometer during this process a 3D map (Figure 1 ###reference_###D) of the surface can be formed.\n###figure_2###"
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Forward and inverse problem",
33
+ "text": "Figure 2 ###reference_### schematically illustrates the problem set of our study. Figure 2 ###reference_###A (left side) is a schematic diagram showing a typical stress-strain response of a power-law strain-hardening material which can be used for many engineering metallic materials. The elastic behavior follows Hook\u2019s law, whereas the plastic response is approximated by different constitutive models (Hertel\u00e9 et al., 2011 ###reference_b16###). One assumption is the three-parameter Hollomon model (fitting parameters: , , ), in which true stress and true strain are related as:\n, while another assuming model is the four-parameter Ludwik model (fitting parameters: , , , ), displayed as:\n, where E is the elastic modulus, is the yield stress, is the work hardening coefficient, n is the work hardening exponent, and is the equivalent plastic strain determined as . We find that the pre-assumed model sometimes fails to accurately fit the stress-strain curves measured in the real experiments, as shown in Appendix A ###reference_###. Later we will also turn into the study without pre-assuming the constitutive model and represent stress-strain curves via point-to-point linear connection.\nKnowing the stress-strain relation of one material, theoretically we can uniquely determine the residual imprint and load-depth curve. This is referred as the forward problem. However, how to inversely determine the stress-strain curve from the residual imprint remains challenging."
34
+ },
35
+ {
36
+ "section_id": "2.4",
37
+ "parent_section_id": "2",
38
+ "section_name": "FEM simulation",
39
+ "text": "To simulate the elasto-plastic behaviors of the system, the ABAQUS (Dassault Syst\u00e8mes Simulia Corp.) software package is employed to conduct 2D and 3D FEM analysis (Reddy, 2019 ###reference_b26###). For single 16-core CPU, each 2D job costs around 10 minutes, while each 3D job costs 6 hours."
40
+ },
41
+ {
42
+ "section_id": "2.5",
43
+ "parent_section_id": "2",
44
+ "section_name": "Network design and dataset construction",
45
+ "text": "Specific NN structure is displayed in Figure 2 ###reference_###B,comprising several independent NNs connected by extra parameters. Each NN owns 6 hidden layers capable of learning the variations of pile-up features and hardness with respect to different stress-strain properties. Each hidden layer owns 32 neurons. ReLU is used as the activation function. The Adam optimizer (Kingma & Ba, 2014 ###reference_b21###) is applied in training by setting the learning rate as 0.0001. Appendix B ###reference_### shows the dataset statistics, in which the parameters of FEM materials are wide enough to nearly cover all the metallic materials. We mainly use the dataset of Ludwik model to train the NNs in the following sections."
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Related work",
51
+ "text": "With the great progress of computer vision (He et al., 2016 ###reference_b15###; Shorten & Khoshgoftaar, 2019 ###reference_b28###; Hu et al., 2023a ###reference_b17###), combining optical microscope with NN to infer underlying properties is becoming more and more prevalent (Sheinin et al., 2022 ###reference_b27###; Cooke et al., 2021 ###reference_b6###). As for inferring elasto-plastic properties from residual imprint, the traditional methods mainly focused on doing FEM iteration to match the simulation results with experiments (Campbell et al., 2018 ###reference_b2###; Meng et al., 2017 ###reference_b24###; Jeong et al., 2021 ###reference_b19###). These methods encounter two issues that 1) The iterations of FEM to acquire the best set of parameters to fit experimental results will consume much time; 2) The sim-to-real gap is not fixed, thus the predicted parameters may encounter great errors in some cases. In recent years, some researchers have tried to apply NN to solve this inverse problem (Lu et al., 2020 ###reference_b23###; Haj-Ali et al., 2008 ###reference_b14###; Jeong et al., 2021 ###reference_b19###). However, they all directly utilized the features of load-depth curves without the features of residual imprints, which may encounter unique problem. Meanwhile, the relation between the input features and the predicted elastic and plastic properties are not well revealed or explained by machine learning. In other words, more efforts are needed to utilize machine learning to help explore the underlying physics and give us inspirations, and to instill existing physical constraints into the model to make the training more efficient (Karniadakis et al., 2021 ###reference_b20###; Liu et al., 2021 ###reference_b22###).\n###figure_3### ###figure_4###"
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments and results",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Unique problem and feature selection",
63
+ "text": "One fundamental question is whether the features we choose can guarantee a unique problem. Here we artificially define the feature extraction method based on curve characteristics, as illustrated in Appendix D ###reference_###. Specifically, we can extract three features from force curves based on the the loading curvature, initial unloading slope, and the ratio of residual unloading depth to maximum loading depth. We extract nine features from pile-up curves by finding the maximum height and calculating the volumes and weighted centres in varied parts. We also have tried using encoder-decoder structure to automatically output features, while the testing results show that both types of features perform similarly.\nAs shown in Figure 4 ###reference_###A, we use 2D FEM data (Figure 3 ###reference_###A) to build up an accurate forward prediction model, i.e.,, predicting the force and pile-up features based on input constitutive model parameters. The data number is actively augmented to ensure that the Mean Absolute Percentage Error (), defined as follows,\n, for predicting each feature is below 2%. and are the true and prediction values of the data point, respectively. Each time 50 data points are added into the parameter range with large errors, and the total data number for forward prediction is 2450. Then we use this trained NN as a surrogate model to explore the informativeness of the indented features. To check whether one material owns siblings with quite similar features, we fix its parameters and iterate the parameters of the candidate material to continually decrease their feature differences. The iteration process uses typical BFGS optimization algorithm. The iteration stops when the of all features are below the set limit or the iteration number exceeds the upper limit. Finally, we carry out FEM simulations to verify whether the hypothesized material siblings own similar features. Figure 4 ###reference_###B1 and Figure 4 ###reference_###B2 display two typical material siblings acquired from the above-mentioned workflow, in which the load-displacement curves are almost the same and the pile-ups reveal some difference at the highest parts.\nTheoretically, we can distinguish two materials if the maximum difference among their features exceeds the possible variance, e.g.,, the experimental errors. Hence, we define the concept of distinguishing ratio as,\n, where and are the features from two material siblings, respectively. depends on how many features considered. We then take a uniform grid (grid number = 5) on the parameter space of Ludiwk model, and test the uniqueness of 625 materials. Among these 625 materials, of them own material siblings with all the relative feature differences lower than the distinguishing ratio. Then we define the non-unique ratio at this specific distinguishing ratio as , displaying the possibility to encounter unique problems. Figure 4 ###reference_###C shows the evolution of the non-unique ratio with the distinguishing ratio employing different features. It reveals that the unique problem is quite severe if only three force features and hardness are input as features. However, the non-uniqueness is largely mitigated if we also include the information of pile-up. Even if we choose only three features from pile-up combined with hardness, the performance is still much better than the pure force case when the distinguishing ratio is lower than 6%. 
Meanwhile, we find that the non-unique ratios with nine pile-up features and three force features are close to the case with nine pile-up features and hardness, both observing slight increases when the distinguishing ratio is higher than 8%. This hints that when including the information of pile-up, maybe we can represent the information of load-displacement relation with only hardness. We then verify this hypothesis by doing the inverse training with nine pile-up features and hardness. The of each predicted parameter (, , , ) for the testing set decreases to lower than 5% when the total data number increases to 4000, as shown in Figure 4 ###reference_###D.\n###figure_5###"
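The two stripped formulas above (MAPE and the sibling-search objective) and the BFGS iteration can be sketched as follows; the MAPE form is reconstructed from its verbal definition, and the toy linear surrogate stands in for the trained forward NN.

```python
# Sketch of the sibling search described above: MAPE as reconstructed from
# the text's verbal definition, plus a BFGS search over candidate material
# parameters that drives surrogate features toward those of a fixed target.
import numpy as np
from scipy.optimize import minimize

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error (assumed standard form)."""
    return float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)) * 100)

def find_sibling(surrogate, target_params, x0):
    """surrogate: params -> feature vector (e.g. the trained forward NN).
    Minimise the feature mismatch starting from candidate x0."""
    f_target = surrogate(np.asarray(target_params))

    def mismatch(p):
        return float(np.sum((surrogate(p) - f_target) ** 2))

    return minimize(mismatch, x0, method="BFGS")

# Toy usage with a stand-in linear surrogate (the real one is the NN):
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))                   # 10 features, 4 parameters
surrogate = lambda p: W @ np.asarray(p)
res = find_sibling(surrogate, [1.0, 2.0, 0.5, 0.3], x0=rng.normal(size=4))
print(mape(np.array([1.0, 2.0, 0.5, 0.3]), res.x))   # parameter mismatch
```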
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "2D to 3D simulation model transfer",
69
+ "text": "We then discuss the transfer learning framework of our models. To decrease the burden of expensive data like 3D FEM simulations and experiments, we first build up a baseline model with many pure 2D FEM data, and then ameliorate the gap between 2D and 3D FEM models with some 3D FEM data. Finally, we calibrate the gap between 3D FEM simulations and experiments with several experimental data. The features we use in all the following conditions are based on nine pile-up features and hardness.\nThe numerical indenter simulated in the 2D axisymmetric FEM modeling is in the conical shape, in which the pile-up morphology can be represented by a one-dimensional (1D) curve, as shown in Figure 4 ###reference_###B2. However, the indenters used in most hardness testing experiments are fourfold Vickers indenters without the axisymmetric property. Figure 5 ###reference_###A and Figure 5 ###reference_###B show two typical 2D images of pile-up morphologies in the experiment and 3D FEM simulation, respectively. Due to surface roughness and grain variations in real metals, the raw pile-up morphologies acquired from experiments are not perfectly smooth in heights (Figure 5 ###reference_###A). To mitigate the height variations and transform the 2D image into the 1D curve, we divide the 2D image into many slim square strips and calculate the average height in each divided region. Then we acquire a plot of the averaged height v.s.,the distance to the imprint center and calculate related features, as shown in Figure 5 ###reference_###C.\n###figure_6### Since different shaped indenters will result to different pile-up morphologies and corresponding features, using the ML model trained by the 2D FEM data to predict the 3D FEM result will contain large errors. We first use 4000 2D FEM data to build up a 2D ML model and directly employ it for the prediction of 3D FEM results, denoted as in Figure 2 ###reference_###. The green points in Fig. 4A display the prediction of in this case, and the errors are quite large. However, these predicted Y1 still incorporate the information of 2D FEM data, and they own a rough trend with the true values. To correct the wrong correspondence and transfer the model from 2D to 3D, here we set the predicted as the extra feature and put it together with the original pile-up features and hardness extracted from 3D FEM model to predict the target parameters (, , , ). Figure 6 ###reference_###B shows the evolution of MAPE v.s.,the 3D data number for the case with and without the extra feature (). We find the MAPE of the case incorporating 2D prediction will decrease to lower than 5% when the data number increases to 50, much lower than the case using pure 3D data. Figure 6 ###reference_###A shows the correspondence of the predicted and targeted values based on this transferred model. The transfer learning framework decreases the requirement of expensive data."
70
+ },
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Sim-to-real model transfer (Physical-boosting)",
75
+ "text": "After acquiring a 3D ML model, the next step is to transfer it into the real experimental ML model. However, as shown in Figure 5 ###reference_###C and Figure 5 ###reference_###F, the pile-up curves of 3D simulations and experiments are not perfectly consistent, even if the input stress-strain curves for simulations are acquired from the real materials. This inconsistency may be caused by many systematic errors, e.g., the indenter tip is not perfectly sharp, the over-simplified assumption of Poisson\u2019s ratio to be 0.3 and friction coefficient to be 0.15, and the grain effects. The feature value differences between experiments and simulations are larger than 20% in some features, while the evolution trends with different materials are consistent. The destination target in this section is to use some types of materials for calibrating sim-to-real gap, and then employ this calibrated model to predict the stress-strain relation of other types of materials.\nUnder the constraints of the limited number of metal types in the real experiments or industries, one intuitive way to mitigate the sim-to-real gap is by incorporating some experimental data points with the 3D FEM data points together, and apply these merged data for the transfer learning process from 2D FEM model to 3D model, the same as the process mentioned in the section 2D to 3D simulation model transfer. This time the 3D model is not based on pure 2D/3D FEM data, since some experimental data are also used. Here in our experiments, each type of material has been measured 8 times repetitively. The number of data points chosen out of these 8 points for the transfer learning should be determined. We then plot the evolution of MAPE with the experiment data number in Figure 6 ###reference_###C. The number of materials used for the training ranges from 1 to 3 (one-shot to three-shot), respectively. The MAPE are calculated from the other 20 testing materials. The specific choice of material types and experimental data points are randomly repeated for 20 times. With the experiment data number increasing, the MAPE will first decrease and then increase when the experiment data number exceeds 4. This phenomenon is reasonable since an overlarge number of data points of the same material tends to induce overfitting.\nAccording to the above discussion, the transfer learning will be inefficient if the experimental data is rare and the sim-to-real gap is too large. In the above FEM simulations, the Poisson\u2019s ratio and the friction coefficient are always assumed to be fixed values ( = 0.3, = 0.15), which are common settings in most indentation models (Chen et al., 2007 ###reference_b3###; Goto et al., 2019 ###reference_b13###; Haj-Ali et al., 2008 ###reference_b14###). However, our further study reveals that these two physical parameters will greatly impact the acquired features, as illustrated in Appendix E ###reference_###. We vary the to be (0.2, 0.3, 0.4) and the to be (0.05, 0.15, 0.25), and then calculate the evolution of pile-ups in these cases. For each type of features, the maximum difference among these 9 cases will range from 20% to 40%. The changing ratios will also vary among different materials. We then calculate the corresponding features of the materials (Al7075, SS430) and compare them to the experimental features. Figure 6 ###reference_###E displays the total feature differences between simulations and experiments under varied and . 
Among the nine combinations, the sim-to-real gap will be lower than others if the two physical parameters (, ) = [(0.3, 0.15), (0.3, 0.05), (0.2, 0.15)]. Next, we build up three independent DNNs in which the input 3D FEM data are calculated by setting two physical parameters to be the above three values, respectively. Specifically, the parameters (, ) are set as fixed values (0.3, 0.15) for 4000 2D FEM simulations, while changed to [(0.3, 0.15), (0.3, 0.05), (0.2, 0.15)] for each subset of 50 3D FEM data. The total number of 3D FEM data used increases to 150.\nThen for each DNN, using the merged data of 3D FEM and experiments, one candidate value will be predicted, denoted as , , in three independent subsets. These three DNNs form into a committee and finally determine what the optimal prediction value is. The specific weights for each DNN are tuned as,\nHere is the final predicted value in our model and (, ) are two parameters to be determined by experimental data. To stabilize the training, we use function to constrain the values of (, ) between 0 and 1. We use the experimental data of SS430 and Al7075 as the training data and find that the training error reaches minimum when , . Through forming a committee under varied physical parameters and fine-tuning the relative weights, the and represented in simulations can be closer to the experimental condition and other systematic biases such as tip radius effects of nominally sharp indenters can also be actively mitigated. More specifically, the reason why the settings [(0.3, 0.15), (0.3, 0.05), (0.2, 0.15)] are closer to experiments is not necessarily owing to the closer and values as that in experiments, but that these settings possibly offset the systematic biases caused by other factors. This whole process is named as \u2018physical-boosting\u2019 since the intuition is based on tuning two physical parameters in the simulations. In summary, the experimental training data has been used three times throughout the ML model, i.e., the choice of physical parameters with three least sim-to-real gaps, combining with 3D simulation data to train DNNs, fine-tuning the relation among three independent DNNs. We use this final model to predict the stress-strain relation of the left 20 other types of materials, serving as the testing set (SS304 (Figure 6 ###reference_###F1) and Al6061 (Figure 6 ###reference_###F2) as the examples shown). The predicted stress-strain curves are satisfyingly close to the real curves acquired by tensile testing (the mean relative errors of predicted stresses are 3.4% across all testing materials). The full accuracy table is in Appendix C ###reference_###.\nDuring forming the committee of three (, ) pairs, the initial grid search for choosing three closest ones is indispensable, since the member with quite large sim-to-real gap will even undermine the model performance."
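The exact committee-weighting formula was lost with the inline math; the sketch below assumes a nested convex combination with two free scalars squashed into (0, 1), which is one form consistent with the description of two tunable parameters constrained between 0 and 1.

```python
# Hedged sketch of the three-member committee; the assumed combination is
#   Y = w1*Y1 + (1 - w1) * (w2*Y2 + (1 - w2)*Y3),  w1, w2 = sigmoid(a1, a2),
# which is NOT necessarily the paper's exact weighting formula.
import torch

class Committee(torch.nn.Module):
    def __init__(self, nets):
        super().__init__()
        self.nets = torch.nn.ModuleList(nets)       # three member DNNs
        for p in self.nets.parameters():
            p.requires_grad_(False)                 # members kept frozen
        self.a = torch.nn.Parameter(torch.zeros(2)) # only these are tuned

    def forward(self, x):
        w1, w2 = torch.sigmoid(self.a)              # each in (0, 1)
        y1, y2, y3 = (net(x) for net in self.nets)
        return w1 * y1 + (1 - w1) * (w2 * y2 + (1 - w2) * y3)

# Toy usage with linear stand-ins for the three trained DNNs:
nets = [torch.nn.Linear(10, 4) for _ in range(3)]
model = Committee(nets)
print(model(torch.randn(5, 10)).shape)              # -> torch.Size([5, 4])
```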
76
+ },
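The committee construction above is amenable to a compact sketch. Since the paper's weighting equation did not survive extraction, the snippet below is only a plausible reconstruction under stated assumptions: a convex combination of the three candidates whose two free parameters are squashed into (0, 1) by a sigmoid, fitted here by finite-difference gradient descent. Every name and hyperparameter (committee_predict, fit_committee, lr, steps) is illustrative rather than taken from the paper.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def committee_predict(y1, y2, y3, a, b):
    """Combine the candidates of the three DNNs (one per (nu, mu) setting).
    A convex combination is an assumption; the paper's exact weighting
    equation is not recoverable from this extraction."""
    wa, wb = sigmoid(a), sigmoid(b)                  # both constrained to (0, 1)
    w1, w2, w3 = wa, (1 - wa) * wb, (1 - wa) * (1 - wb)
    return w1 * y1 + w2 * y2 + w3 * y3

def fit_committee(y1, y2, y3, y_true, steps=2000, lr=0.1, eps=1e-4):
    """Tune (a, b) on the experimental training materials (SS430, Al7075)
    by finite-difference gradient descent on the mean-squared error."""
    a = b = 0.0
    loss = lambda u, v: np.mean((committee_predict(y1, y2, y3, u, v) - y_true) ** 2)
    for _ in range(steps):
        ga = (loss(a + eps, b) - loss(a - eps, b)) / (2 * eps)
        gb = (loss(a, b + eps) - loss(a, b - eps)) / (2 * eps)
        a, b = a - lr * ga, b - lr * gb
    return a, b
```

Here y1, y2 and y3 would be the stress predictions of the three DNNs trained under the (ν, μ) settings (0.3, 0.15), (0.3, 0.05) and (0.2, 0.15).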
77
+ {
78
+ "section_id": "4.4",
79
+ "parent_section_id": "4",
80
+ "section_name": "Pointwise stress-strain prediction",
81
+ "text": "###figure_7### To make the prediction more tenable and represent a much wider material space, here we add another kind of material model into the training and testing dataset, called \u2018point-stress model\u2019. In this model, the linear elastic part is determined by elastic modulus and yield stress , while the nonlinear plastic part is represented by nine strain-stress points, i.e., [], as shown in Figure 7 ###reference_###A. The whole tensile curve is estimated between the points by linear interpolation. The yield strain and the maximum strain are then limits of the total interval of the strain values to be estimated on the tensile curve. From this interval, the intermediate strain values, representing the positions of the to points to estimate, are calculated by adopting a geometric progression. This choice of progression serves to obtain a higher density of points at lower strains, aiming to capture more features near the yield stress. The specific equation is defined as,\nHere the common ratio ranges from 1.1 to 1.5 in different materials. A larger corresponds to denser aggregation of points near yield stress. After determining strain points , the corresponding stress values are randomly generated under the constraint of softened-hardening behavior, displayed as,\nIn summary, the variables to be determined in the point-stress model are [].\nThen the dataset for the pointwise stress-strain prediction of 2D FEM simulations comprises 2000 trials of Ludwik model, 1000 trials of Hollomon model, and 1000 trials of the point-stress model. As for the inverse prediction, we also try to predict the discrete strain-stress points and linearly interpolate them into the whole tensile curve, in which the objective parameters are []. Here the total number of strain-stress points are artificially determined by humans, even can be numerous if the computing resource is enough. Meanwhile, the specific positions of strains can also be flexible, different from the points in the input point-stress model. We set 6 strain positions uniformly gridded on [], and 10 strain positions uniformly gridded on []. Figure 7 ###reference_###B shows the evolution of the average mean absolute errors of all the objective parameters with the iterations (epochs). Both the training error (blue) and the testing error (red) will decrease to less than 5% when epochs arrive 2000. The inner plot shows the corresponding comparison between the true stress and the predicted stress. The predicting accuracy is magically good, hinting that each part of the stress-strain tensile curve may have its unique influences on the final pile-up morphologies.\nThe remaining process of transferring the 2D model into the 3D model, and then into the experimental model is the same as that stated in the section: Sim-to-real model transfer (Physical-boosting). For the 3D simulation data, apart from the original 150 trials of Ludwik model and 60 trials of Hollomon model, 60 trials of the point-stress model are added. The final predicted strain-stress points for the two training materials and two testing materials in experiments are displayed in Figure 7 ###reference_###C and Figure 7 ###reference_###D, respectively. The overall prediction accuracy is satisfying."
82
+ },
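The point-stress sampling procedure can be illustrated with a short generator. The exact progression formula and the precise softened-hardening constraint are not preserved in this extraction, so the sketch below makes its assumptions explicit: strain gaps grow geometrically with common ratio q (so points cluster near the yield strain), and stresses increase monotonically with non-increasing increments as a stand-in for softened hardening; all default values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def point_stress_sample(E=100.0, sigma_y=0.3, eps_max=0.5, n_pts=9, q=1.3):
    """Draw one random point-stress material (stresses in GPa).

    Assumed details (not recoverable from the extraction): geometric strain
    spacing with ratio q > 1, and concave hardening obtained by damping the
    stress increments."""
    eps_y = sigma_y / E                                # yield strain (elastic)
    gaps = q ** np.arange(n_pts)
    gaps = gaps / gaps.sum() * (eps_max - eps_y)       # partition [eps_y, eps_max]
    eps = eps_y + np.cumsum(gaps)                      # denser near the yield point
    d0 = rng.uniform(0.01, 0.1) * sigma_y              # first stress increment
    d = d0 * np.cumprod(rng.uniform(0.6, 1.0, n_pts))  # non-increasing increments
    sigma = sigma_y + np.cumsum(d)
    strain = np.concatenate(([0.0, eps_y], eps))
    stress = np.concatenate(([0.0, sigma_y], sigma))
    return strain, stress        # the full curve is their linear interpolation

strain, stress = point_stress_sample()
```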
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Discussion",
87
+ "text": "Herein, we attempt to bridge the gap between optical residual profiles and material elasto-plastic properties via MFNN. How to use machine learning to explore the inverse problem and how to supplement the physical constraints into the model are discussed. Considering the high cost to get real experiment data, we try to build up a method with only one-shot or few-shot calibrations. The testings results show excellent acuracy. Combining novel imaging and vision techniques with object disturbances is a powerful modality that deserves more attention. This work is a nice demonstration that adding information that can be easily captured by an interferometer significantly improves reconstructions."
88
+ }
89
+ ],
90
+ "appendix": [
91
+ {
92
+ "section_id": "Appendix 1",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix A Pre-assumed constitutive models",
95
+ "text": "###figure_8### ###figure_9###"
96
+ },
97
+ {
98
+ "section_id": "Appendix 2",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix B Dataset construction",
101
+ "text": ""
102
+ },
103
+ {
104
+ "section_id": "Appendix 3",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix C Dataset construction",
107
+ "text": ""
108
+ },
109
+ {
110
+ "section_id": "Appendix 4",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix D Feature extraction",
113
+ "text": "###figure_10###"
114
+ },
115
+ {
116
+ "section_id": "Appendix 5",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix E Impacts of Poisson\u2019s ratio and friction coefficient",
119
+ "text": "###figure_11###"
120
+ }
121
+ ],
122
+ "tables": {
123
+ "1": {
124
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Parameter ranges of Ludwik/Hollomon model for the dataset of 2D/3D FEM. Here the chosen range of parameters is wide enough to nearly cover all the metallic materials.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T1.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A2.T1.4.4.5\">Models</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T1.1.1.1\">\n (GPa)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T1.2.2.2\">\n (GPa)</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T1.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T1.4.5.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A2.T1.4.5.1.1\">Ludwik</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T1.4.5.1.2\">30\u2013300</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T1.4.5.1.3\">0.05\u20131</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T1.4.5.1.4\">0.1\u20130.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T1.4.5.1.5\">0.1\u20132</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T1.4.6.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A2.T1.4.6.2.1\">Hollomon</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T1.4.6.2.2\">30\u2013300</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T1.4.6.2.3\">0.05\u20133</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T1.4.6.2.4\">0.05\u20130.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T1.4.6.2.5\">N/A</td>\n</tr>\n</tbody>\n</table>\n</figure>",
125
+ "capture": "Table 1: Parameter ranges of Ludwik/Hollomon model for the dataset of 2D/3D FEM. Here the chosen range of parameters is wide enough to nearly cover all the metallic materials."
126
+ },
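Table 1 translates directly into a dataset sampler. A minimal sketch follows; uniform sampling within the ranges is an assumption (the paper does not state the sampling distribution), and treating the Hollomon row's second column as a yield stress with no independent K follows the table's N/A entry rather than an explicit statement.

```python
import numpy as np

rng = np.random.default_rng(42)

# Parameter ranges transcribed from Table 1 (E, sigma_y, K in GPa; n unitless).
LUDWIK = {"E": (30, 300), "sigma_y": (0.05, 1.0), "n": (0.1, 0.9), "K": (0.1, 2.0)}
HOLLOMON = {"E": (30, 300), "sigma_y": (0.05, 3.0), "n": (0.05, 0.5)}

def sample_trials(ranges, n_trials):
    """Uniformly sample constitutive parameters for the FEM input decks
    (the uniform distribution is an assumption of this sketch)."""
    return [{name: float(rng.uniform(lo, hi)) for name, (lo, hi) in ranges.items()}
            for _ in range(n_trials)]

ludwik_2d = sample_trials(LUDWIK, 4000)      # dataset sizes from Table 2
hollomon_2d = sample_trials(HOLLOMON, 1000)
```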
127
+ "2": {
128
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The datasets and sizes used in this study.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A2.T2.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A2.T2.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T2.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A2.T2.1.1.1.2\">Size</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A2.T2.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T2.1.2.1.1\">2D FEM, Ludwik</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T2.1.2.1.2\">4000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.3.2.1\">2D FEM, Hollomon</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.3.2.2\">1000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.4.3.1\">2D FEM, Point-stress</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.4.3.2\">1000</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.5.4.1\">3D FEM, Ludwik</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.5.4.2\">150</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.6.5.1\">3D FEM, Hollomon</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.6.5.2\">60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.7.6.1\">3D FEM, Point-stress</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.7.6.2\">60</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.8.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.8.7.1\">Al6061, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.8.7.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.9.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.9.8.1\">Al7075, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.9.8.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.10.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.10.9.1\">Al2011, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.10.9.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.11.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.11.10.1\">Al3003, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.11.10.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.12.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.12.11.1\">Al2024, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.12.11.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.13.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.13.12.1\">Al5052, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.13.12.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.14.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.14.13.1\">Al6063, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.14.13.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.15.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.15.14.1\">SS430, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.15.14.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.16.15\">\n<td class=\"ltx_td 
ltx_align_left\" id=\"A2.T2.1.16.15.1\">SS316, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.16.15.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.17.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.17.16.1\">SS303, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.17.16.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.18.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.18.17.1\">SS410, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.18.17.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.19.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.19.18.1\">SS304, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.19.18.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.20.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.20.19.1\">C26000, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.20.19.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.21.20\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.21.20.1\">C22000, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.21.20.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.22.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.22.21.1\">C71500, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.22.21.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.23.22\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.23.22.1\">C10100, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.23.22.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.24.23\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.24.23.1\">C11000, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.24.23.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.25.24\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.25.24.1\">T1, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.25.24.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.26.25\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.26.25.1\">Ti-3Al-2.5V, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.26.25.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.27.26\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.27.26.1\">Ti-6Al-7Nb, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.27.26.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.28.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T2.1.28.27.1\">AZ31B, experiment</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T2.1.28.27.2\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T2.1.29.28\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A2.T2.1.29.28.1\">AZ91D, experiment</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T2.1.29.28.2\">8</td>\n</tr>\n</tbody>\n</table>\n</figure>",
129
+ "capture": "Table 2: The datasets and sizes used in this study."
130
+ },
131
+ "3": {
132
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Real testing materials and corresponding average relative errors of predicted stresses.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A3.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A3.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"A3.T3.1.1.1.1\">Testing material</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T3.1.1.1.2\">Relative error (%)</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A3.T3.1.2.1.1\">Al6061</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T3.1.2.1.2\">3.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.3.2.1\">Al2011</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.3.2.2\">3.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.4.3.1\">Al3003</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.4.3.2\">2.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.5.4.1\">Al2024</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.5.4.2\">4.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.6.5.1\">Al5052</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.6.5.2\">3.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.7.6.1\">Al6063</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.7.6.2\">2.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.8.7.1\">SS316</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.8.7.2\">4.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.9.8.1\">SS303</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.9.8.2\">2.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.10.9.1\">SS410</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.10.9.2\">3.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.11.10.1\">SS304</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.11.10.2\">2.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.12.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.12.11.1\">C26000</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.12.11.2\">3.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.13.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.13.12.1\">C22000</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.13.12.2\">3.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.14.13.1\">C71500</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.14.13.2\">4.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.15.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" 
id=\"A3.T3.1.15.14.1\">C10100</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.15.14.2\">4.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.16.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.16.15.1\">C11000</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.16.15.2\">2.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.17.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.17.16.1\">T1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.17.16.2\">3.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.18.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.18.17.1\">Ti-3Al-2.5V</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.18.17.2\">4.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.19.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.19.18.1\">Ti-6Al-7Nb</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.19.18.2\">2.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.20.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.20.19.1\">AZ31B</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.20.19.2\">3.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.21.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A3.T3.1.21.20.1\">AZ91D</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T3.1.21.20.2\">5.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T3.1.22.21\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A3.T3.1.22.21.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T3.1.22.21.1.1\">Average</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T3.1.22.21.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A3.T3.1.22.21.2.1\">3.4</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
133
+ "capture": "Table 3: Real testing materials and corresponding average relative errors of predicted stresses."
134
+ }
135
+ },
136
+ "image_paths": {
137
+ "1": {
138
+ "figure_path": "2311.10278v2_figure_1.png",
139
+ "caption": "Figure 1: \nExperimental methods. (A and B) Schematic illustrations of indentation and optical profilometer, respectively. (C) Typical pile-up image taken from scanning electron microscope. (D) Typical height distribution image measured by optical profilometer.",
140
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture1.png"
141
+ },
142
+ "2": {
143
+ "figure_path": "2311.10278v2_figure_2.png",
144
+ "caption": "Figure 2: Transfer learning to solve the indentation inverse problem via residual imprint (pile-up). (A) Schematic illustration of indentation forward and inverse problems. Materials conforming to typical hardening behaviors (left) will form pile-up on sample surfaces after indentation and response typical load-displacement curves (right). (B) Flowcharts of the transfer learning DNN employed in this study.",
145
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture2.png"
146
+ },
147
+ "3": {
148
+ "figure_path": "2311.10278v2_figure_3.png",
149
+ "caption": "Figure 3: 2D and 3D FEM models in our study. The total element number is 3449 in 2D FEM (A) and 223292 in 3D FEM (B).",
150
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture3.png"
151
+ },
152
+ "4": {
153
+ "figure_path": "2311.10278v2_figure_4.png",
154
+ "caption": "Figure 4: Forward prediction with the unique problem and inverse prediction with the feature selection process. (A) Forward prediction combining BFGS optimization to find the mystical material siblings with the same indentation features. (B1-B2) Two typical material siblings ((E,\u03c3y,n,K\ud835\udc38subscript\ud835\udf0e\ud835\udc66\ud835\udc5b\ud835\udc3eE,\\sigma_{y},n,Kitalic_E , italic_\u03c3 start_POSTSUBSCRIPT italic_y end_POSTSUBSCRIPT , italic_n , italic_K) = (200, 0.28, 0.65, 1.365), (203, 0.254, 0.485, 1.020)) corresponding to almost the same load-displacement curves. (C) Plot of Non-unique ratio v.s.,Distinguishing ratio. (D) Value correlation of the predicted n\ud835\udc5bnitalic_n and the target n\ud835\udc5bnitalic_n. The results are based on 4000 2D FEM data with 3500 training data (blue) and 500 testing data (orange).",
155
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture4.png"
156
+ },
157
+ "5": {
158
+ "figure_path": "2311.10278v2_figure_5.png",
159
+ "caption": "Figure 5: Pile-up profiles acquired from experiments and simulations. (A-C) and (D-F) are pile-up profiles of SS304, and Al7075, respectively. (A) SS304 pile-up profile measured from experiments. The vertical heights of four-fold profiles are divided into slim square strips and averaged v.s.,the horizontal distance (X, green color) from the origin. Both the heights and distances are normalized by the indentation lateral length (a, white color). All the 3D pile-up profiles (A-B, D-E) are dissolved through this method to form into 2D pile-up curves (C) and (F).",
160
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture5.png"
161
+ },
162
+ "6": {
163
+ "figure_path": "2311.10278v2_figure_6.png",
164
+ "caption": "Figure 6: A transfer learning framework for the real experimental prediction. (A) 2D to 3D FEM model transfer with total 50 3D FEM data. The green and blue points are respectively the prediction of n\ud835\udc5bnitalic_n for 3D FEM with pure 2D model and 3D transferred model. (B) MAPE of n\ud835\udc5bnitalic_n v.s.,3D FEM data number with (red) or without (blue) 2D baseline model. (C) MAPE of n\ud835\udc5bnitalic_n v.s.,experiment data number via the method of direct transfer. Here the experiment data number refers to the data points of the same material. The number of types refers to the types of materials used for transfer learning. (D) Comparison of predicting accuracy for three transfer learning methods employed in this study. (E) The choice of physical parameters, i.e.,, friction coefficient \u03bc\ud835\udf07\\muitalic_\u03bc and Poisson\u2019s ratio \u03bd\ud835\udf08\\nuitalic_\u03bd, for physical-boosting. Here the MAPE measures the feature difference between simulations and experiments of Al7075 (red) and SS430 (blue). (F1) and (F2) The ground truth and predicted stress-strain curves of SS304 and Al6061, respectively.",
165
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture6.png"
166
+ },
167
+ "7": {
168
+ "figure_path": "2311.10278v2_figure_7.png",
169
+ "caption": "Figure 7: Direct pointwise prediction without the advanced assumption of constitutive models. (A) The created random stress-strain curves conforming to the point-stress model. (B) The red and blue shaded lines represent the MAE over the training epochs for testing and training datasets, respectively. The DNNs are trained 40 times with different initial weights. The inset shows the pointwise comparison between predicted stresses and target stresses. The green and yellow points are training and testing datasets. (C) and (D) The final pointwise prediction from transferred model for training and testing materials, respectively.",
170
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Picture7.png"
171
+ },
172
+ "8": {
173
+ "figure_path": "2311.10278v2_figure_8.png",
174
+ "caption": "Figure 8: Fitting of experimental tensile curves via four-parameter Ludwik model. The red dots show the experimental results, while the green curves present the fitted curves. Ludwik model can well fit the stress-strain behaviors in all the four metals.",
175
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Appendix_figure1.png"
176
+ },
177
+ "9": {
178
+ "figure_path": "2311.10278v2_figure_9.png",
179
+ "caption": "Figure 9: Fitting of experimental tensile curves via three-parameter Hollomon model. The red dots show the experimental results, while the green curves present the fitted curves. Hollomon model cannot well fit the stress-strain behaviors of high-hardening metals, such as the SS304 in this study.",
180
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Appendix_figure2.png"
181
+ },
182
+ "10": {
183
+ "figure_path": "2311.10278v2_figure_10.png",
184
+ "caption": "Figure 10: Schematic illustration of chosen features in the load-displacement relation and the pile-up. (A) A typical load-displacement curve from which the loading curvature C\ud835\udc36Citalic_C, initial unloading slope S\ud835\udc46Sitalic_S, and the ratio of residual unloading depth to maximum loading depth hr/hmsubscript\u210e\ud835\udc5fsubscript\u210e\ud835\udc5ah_{r}/h_{m}italic_h start_POSTSUBSCRIPT italic_r end_POSTSUBSCRIPT / italic_h start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT are extracted. These three features are mostly used in previous research. (B) A typical 2D pile-up morphology. Here we focus on the part higher than the unindented flattened surface, denoted by the dashed rectangle. (C) Illustration of the features in pile-up. For each pile-up curve, we characterize its maximum height Hmaxsubscript\ud835\udc3bmaxH_{\\text{max}}italic_H start_POSTSUBSCRIPT max end_POSTSUBSCRIPT, total pile-up volume V1subscript\ud835\udc491V_{1}italic_V start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, and lateral center coordinate of pile-up O1subscript\ud835\udc421O_{1}italic_O start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT. We further find that much information is comprised in the curve part near the position with maximum height. To further extract more information, we only count in the part whose height is higher than 12\u00d7Hmax12subscript\ud835\udc3bmax\\frac{1}{2}\\times H_{\\text{max}}divide start_ARG 1 end_ARG start_ARG 2 end_ARG \u00d7 italic_H start_POSTSUBSCRIPT max end_POSTSUBSCRIPT and calculate its volume and center coordinate, as shown in the green region. We name these two features half-volume V1/2subscript\ud835\udc4912V_{1/2}italic_V start_POSTSUBSCRIPT 1 / 2 end_POSTSUBSCRIPT and half-center O1/2subscript\ud835\udc4212O_{1/2}italic_O start_POSTSUBSCRIPT 1 / 2 end_POSTSUBSCRIPT. In the same way, as for the part with the height higher than 34\u00d7Hmax34subscript\ud835\udc3bmax\\frac{3}{4}\\times H_{\\text{max}}divide start_ARG 3 end_ARG start_ARG 4 end_ARG \u00d7 italic_H start_POSTSUBSCRIPT max end_POSTSUBSCRIPT and 78\u00d7Hmax78subscript\ud835\udc3bmax\\frac{7}{8}\\times H_{\\text{max}}divide start_ARG 7 end_ARG start_ARG 8 end_ARG \u00d7 italic_H start_POSTSUBSCRIPT max end_POSTSUBSCRIPT, respectively, we extract features fourth-volume V1/4subscript\ud835\udc4914V_{1/4}italic_V start_POSTSUBSCRIPT 1 / 4 end_POSTSUBSCRIPT, fourth-center O1/4subscript\ud835\udc4214O_{1/4}italic_O start_POSTSUBSCRIPT 1 / 4 end_POSTSUBSCRIPT, eighth-volume V1/8subscript\ud835\udc4918V_{1/8}italic_V start_POSTSUBSCRIPT 1 / 8 end_POSTSUBSCRIPT, and eighth-center O1/8subscript\ud835\udc4218O_{1/8}italic_O start_POSTSUBSCRIPT 1 / 8 end_POSTSUBSCRIPT. In total, we use 9 features to represent the information of the pile-up.",
185
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Appendix_figure3.png"
186
+ },
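The feature definitions in the Figure 10 caption map directly onto code. The sketch below is an illustrative implementation, not the authors' script: it assumes height-weighted centers and trapezoidal integration, which the caption does not specify.

```python
import numpy as np

def pileup_features(x, h):
    """Nine pile-up features as described in Figure 10: maximum height, then
    volume and lateral center of the whole positive part and of the parts
    higher than 1/2, 3/4 and 7/8 of the maximum height.  Height-weighted
    centers and trapezoidal integration are assumptions of this sketch."""
    h = np.clip(h, 0.0, None)        # keep only the part above the flat surface
    feats = {"H_max": float(h.max())}
    for name, frac in [("1", 0.0), ("1/2", 0.5), ("1/4", 0.75), ("1/8", 0.875)]:
        hm = np.where(h >= frac * h.max(), h, 0.0)
        v = np.trapz(hm, x)
        feats["V_" + name] = v
        feats["O_" + name] = np.trapz(x * hm, x) / max(v, 1e-12)
    return feats
```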
187
+ "11": {
188
+ "figure_path": "2311.10278v2_figure_11.png",
189
+ "caption": "Figure 11: Influence of Poisson\u2019s ratio \u03bd\ud835\udf08\\nuitalic_\u03bd and friction coefficient \u03bc\ud835\udf07\\muitalic_\u03bc on pile-up profiles. (A) Pile-ups with \u03bd\ud835\udf08\\nuitalic_\u03bd fixed to be 0.3, and \u03bc\ud835\udf07\\muitalic_\u03bc varied to be 0.05, 0.15, and 0.25. The pile-up height decreases with the increasing \u03bc\ud835\udf07\\muitalic_\u03bc. (B) Pile-ups with \u03bc\ud835\udf07\\muitalic_\u03bc fixed to be 0.15, and \u03bd\ud835\udf08\\nuitalic_\u03bd varied to be 0.2, 0.3, and 0.4. The pile-up height increases with the increasing \u03bd\ud835\udf08\\nuitalic_\u03bd. (C-F) Evolution of the three representative features used in this study (centre O1subscript\ud835\udc421O_{1}italic_O start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT, hardness H\ud835\udc3bHitalic_H, and max height Hmaxsubscript\ud835\udc3bmaxH_{\\text{max}}italic_H start_POSTSUBSCRIPT max end_POSTSUBSCRIPT) with \u03bd\ud835\udf08\\nuitalic_\u03bd and \u03bc\ud835\udf07\\muitalic_\u03bc. We show the 3D FEM results of SS304 and Al6061 as a comparison.",
190
+ "url": "http://arxiv.org/html/2311.10278v2/extracted/5487818/Images/Appendix_figure4.png"
191
+ }
192
+ },
193
+ "validation": true,
194
+ "references": [
195
+ {
196
+ "1": {
197
+ "title": "Autonomous materials synthesis via hierarchical active learning of nonequilibrium phase diagrams.",
198
+ "author": "Ament, S., Amsler, M., Sutherland, D. R., Chang, M.-C., Guevarra, D., Connolly, A. B., Gregoire, J. M., Thompson, M. O., Gomes, C. P., and van Dover, R. B.",
199
+ "venue": "Science Advances, 7(51):eabg4930, 2021.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "2": {
205
+ "title": "Experimental and computational issues for automated extraction of plasticity parameters from spherical indentation.",
206
+ "author": "Campbell, J., Thompson, R., Dean, J., and Clyne, T.",
207
+ "venue": "Mechanics of Materials, 124:118\u2013131, 2018.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "3": {
213
+ "title": "On the uniqueness of measuring elastoplastic properties from indentation: the indistinguishable mystical materials.",
214
+ "author": "Chen, X., Ogasawara, N., Zhao, M., and Chiba, N.",
215
+ "venue": "Journal of the Mechanics and Physics of Solids, 55(8):1618\u20131660, 2007.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "4": {
221
+ "title": "Tuning nanoscale adhesive contact behavior to a near ideal hertzian state via graphene coverage.",
222
+ "author": "Chen, Y., Guan, Z., Yang, W., Yao, Y., and Wang, H.",
223
+ "venue": "Computational Materials Science, 194:110427, 2021.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "5": {
229
+ "title": "Anomalous layer-dependent lubrication on graphene-covered-substrate: Competition between adhesion and plasticity.",
230
+ "author": "Chen, Y., Guan, Z., Liu, J., Yang, W., and Wang, H.",
231
+ "venue": "Applied Surface Science, pp. 153762, 2022.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "6": {
237
+ "title": "Physics-enhanced machine learning for virtual fluorescence microscopy.",
238
+ "author": "Cooke, C. L., Kong, F., Chaware, A., Zhou, K. C., Kim, K., Xu, R., Ando, D. M., Yang, S. J., Konda, P. C., and Horstmeyer, R.",
239
+ "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 3803\u20133813, 2021.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "7": {
245
+ "title": "Scientific machine learning through physics-informed neural networks: Where we are and what\u2019s next.",
246
+ "author": "Cuomo, S., Di Cola, V. S., Giampaolo, F., Rozza, G., Raissi, M., and Piccialli, F.",
247
+ "venue": "arXiv preprint arXiv:2201.05624, 2022.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "8": {
253
+ "title": "Magnetic control of tokamak plasmas through deep reinforcement learning.",
254
+ "author": "Degrave, J., Felici, F., Buchli, J., Neunert, M., Tracey, B., Carpanese, F., Ewalds, T., Hafner, R., Abdolmaleki, A., de Las Casas, D., et al.",
255
+ "venue": "Nature, 602(7897):414\u2013419, 2022.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "9": {
261
+ "title": "A method for interpreting the data from depth-sensing indentation instruments.",
262
+ "author": "Doerner, M. F. and Nix, W. D.",
263
+ "venue": "Journal of Materials research, 1(4):601\u2013609, 1986.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "10": {
269
+ "title": "Accelerated discovery of 3d printing materials using data-driven multiobjective optimization.",
270
+ "author": "Erps, T., Foshey, M., Lukovi\u0107, M. K., Shou, W., Goetzke, H. H., Dietsch, H., Stoll, K., von Vacano, B., and Matusik, W.",
271
+ "venue": "Science advances, 7(42):eabf7435, 2021.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "11": {
277
+ "title": "Optical profilometer: a new method for high sensitivity and wide dynamic range.",
278
+ "author": "Fainman, Y., Lenz, E., and Shamir, J.",
279
+ "venue": "Applied optics, 21(17):3200\u20133208, 1982.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "12": {
285
+ "title": "Discovering faster matrix multiplication algorithms with reinforcement learning.",
286
+ "author": "Fawzi, A., Balog, M., Huang, A., Hubert, T., Romera-Paredes, B., Barekatain, M., Novikov, A., R Ruiz, F. J., Schrittwieser, J., Swirszcz, G., et al.",
287
+ "venue": "Nature, 610(7930):47\u201353, 2022.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "13": {
293
+ "title": "Determining suitable parameters for inverse estimation of plastic properties based on indentation marks.",
294
+ "author": "Goto, K., Watanabe, I., and Ohmura, T.",
295
+ "venue": "International Journal of Plasticity, 116:81\u201390, 2019.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "14": {
301
+ "title": "Nonlinear constitutive models from nanoindentation tests using artificial neural networks.",
302
+ "author": "Haj-Ali, R., Kim, H.-K., Koh, S. W., Saxena, A., and Tummala, R.",
303
+ "venue": "International Journal of Plasticity, 24(3):371\u2013396, 2008.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "15": {
309
+ "title": "Deep residual learning for image recognition.",
310
+ "author": "He, K., Zhang, X., Ren, S., and Sun, J.",
311
+ "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770\u2013778, 2016.",
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "16": {
317
+ "title": "A generic stress\u2013strain model for metallic materials with two-stage strain hardening behaviour.",
318
+ "author": "Hertel\u00e9, S., De Waele, W., and Denys, R.",
319
+ "venue": "International Journal of Non-Linear Mechanics, 46(3):519\u2013531, 2011.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "17": {
325
+ "title": "Fedssc: Shared supervised-contrastive federated learning.",
326
+ "author": "Hu, S., Feng, L., Yang, X., and Chen, Y.",
327
+ "venue": "arXiv preprint arXiv:2301.05797, 2023a.",
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "18": {
333
+ "title": "A deep learning-enhanced framework for multiphysics joint inversion.",
334
+ "author": "Hu, Y., Wei, X., Wu, X., Sun, J., Chen, J., Huang, Y., and Chen, J.",
335
+ "venue": "Geophysics, 88(1):K13\u2013K26, 2023b.",
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "19": {
341
+ "title": "Evaluation of equi-biaxial residual stress from spherical indentation imprint.",
342
+ "author": "Jeong, C., Hwang, Y., Kim, N., Lee, C., and Lee, H.",
343
+ "venue": "International Journal of Mechanical Sciences, 211:106773, 2021.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "20": {
349
+ "title": "Physics-informed machine learning.",
350
+ "author": "Karniadakis, G. E., Kevrekidis, I. G., Lu, L., Perdikaris, P., Wang, S., and Yang, L.",
351
+ "venue": "Nature Reviews Physics, 3(6):422\u2013440, 2021.",
352
+ "url": null
353
+ }
354
+ },
355
+ {
356
+ "21": {
357
+ "title": "Adam: A method for stochastic optimization.",
358
+ "author": "Kingma, D. P. and Ba, J.",
359
+ "venue": "arXiv preprint arXiv:1412.6980, 2014.",
360
+ "url": null
361
+ }
362
+ },
363
+ {
364
+ "22": {
365
+ "title": "Knowledge extraction and transfer in data-driven fracture mechanics.",
366
+ "author": "Liu, X., Athanasiou, C. E., Padture, N. P., Sheldon, B. W., and Gao, H.",
367
+ "venue": "Proceedings of the National Academy of Sciences, 118(23):e2104765118, 2021.",
368
+ "url": null
369
+ }
370
+ },
371
+ {
372
+ "23": {
373
+ "title": "Extraction of mechanical properties of materials through deep learning from instrumented indentation.",
374
+ "author": "Lu, L., Dao, M., Kumar, P., Ramamurty, U., Karniadakis, G. E., and Suresh, S.",
375
+ "venue": "Proceedings of the National Academy of Sciences, 117(13):7052\u20137062, 2020.",
376
+ "url": null
377
+ }
378
+ },
379
+ {
380
+ "24": {
381
+ "title": "An insight into the identifiability of material properties by instrumented indentation test using manifold approach based on ph curve and imprint shape.",
382
+ "author": "Meng, L., Breitkopf, P., and Le Quilliec, G.",
383
+ "venue": "International Journal of Solids and Structures, 106:13\u201326, 2017.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "25": {
389
+ "title": "Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations.",
390
+ "author": "Raissi, M., Perdikaris, P., and Karniadakis, G. E.",
391
+ "venue": "Journal of Computational physics, 378:686\u2013707, 2019.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "26": {
397
+ "title": "Introduction to the finite element method.",
398
+ "author": "Reddy, J. N.",
399
+ "venue": "McGraw-Hill Education, 2019.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "27": {
405
+ "title": "Dual-shutter optical vibration sensing.",
406
+ "author": "Sheinin, M., Chan, D., O\u2019Toole, M., and Narasimhan, S. G.",
407
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16324\u201316333, 2022.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "28": {
413
+ "title": "A survey on image data augmentation for deep learning.",
414
+ "author": "Shorten, C. and Khoshgoftaar, T. M.",
415
+ "venue": "Journal of big data, 6(1):1\u201348, 2019.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "29": {
421
+ "title": "Tracking multiple objects outside the line of sight using speckle imaging.",
422
+ "author": "Smith, B. M., O\u2019Toole, M., and Gupta, M.",
423
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6258\u20136266, 2018.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "30": {
429
+ "title": "Piner: Prior-informed implicit neural representation learning for test-time adaptation in sparse-view ct reconstruction.",
430
+ "author": "Song, B., Shen, L., and Xing, L.",
431
+ "venue": "In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 1928\u20131938, January 2023.",
432
+ "url": null
433
+ }
434
+ },
435
+ {
436
+ "31": {
437
+ "title": "How ai can be a force for good.",
438
+ "author": "Taddeo, M. and Floridi, L.",
439
+ "venue": "Science, 361(6404):751\u2013752, 2018.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "32": {
445
+ "title": "Inverse problem theory and methods for model parameter estimation.",
446
+ "author": "Tarantola, A.",
447
+ "venue": "SIAM, 2005.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "33": {
453
+ "title": "A modified bfgs algorithm for unconstrained optimization.",
454
+ "author": "Yuan, Y.-x.",
455
+ "venue": "IMA Journal of Numerical Analysis, 11(3):325\u2013332, 1991.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "34": {
461
+ "title": "Simultaneous remote extraction of multiple speech sources and heart beats from secondary speckles pattern.",
462
+ "author": "Zalevsky, Z., Beiderman, Y., Margalit, I., Gingold, S., Teicher, M., Mico, V., and Garcia, J.",
463
+ "venue": "Optics express, 17(24):21566\u201321580, 2009.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "35": {
469
+ "title": "Towards rolling shutter correction and deblurring in dynamic scenes.",
470
+ "author": "Zhong, Z., Zheng, Y., and Sato, I.",
471
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9219\u20139228, 2021.",
472
+ "url": null
473
+ }
474
+ }
475
+ ],
476
+ "url": "http://arxiv.org/html/2311.10278v2"
477
+ }
20240322/2311.14033v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2312.01697v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2312.03408v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2312.04964v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2312.09016v2.json ADDED
@@ -0,0 +1,281 @@
1
+ {
2
+ "title": "Symmetry Breaking and Equivariant Neural Networks",
3
+ "abstract": "Using symmetry as an inductive bias in deep learning has been proven to be a principled approach for sample-efficient model design. However, the relationship between symmetry and the imperative for equivariance in neural networks is not always obvious. Here, we analyze a key limitation that arises in equivariant functions: their incapacity to break symmetry at the level of individual data samples. In response, we introduce a novel notion of \u2019relaxed equivariance\u2019 that circumvents this limitation. We further demonstrate how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs), offering an alternative to the noise-injection method. The relevance of symmetry breaking is then discussed in various application domains: physics, graph representation learning, combinatorial optimization and equivariant decoding.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The notion of symmetry is of fundamental importance across the sciences, mathematics, and more recently in machine learning. It captures the idea that an object is essentially the same after some transformation is applied to it (Weyl, 1952 ###reference_b21###). Using symmetry as an inductive bias in machine learning has emerged as a powerful idea, with important conceptual and practical breakthroughs (Bronstein et al., 2021 ###reference_b2###).\nThe common intuition is that symmetry in the data distribution should naturally lead to equivariance constraints on learned functions. However, even in symmetric domains, it appears that equivariant functions have an important limitation: the inability to break symmetry at the level of data samples. The classical example of symmetry breaking appears in physical phase transitions. From an initially symmetric state, an asymmetric state is observed (see Section 1 ###reference_###). As we will see and as discussed by Smidt et al. (2021 ###reference_b19###), equivariant neural networks are unable to model these phenomena. Getting rid of equivariance altogether would be an unsatisfactory solution, as it is still necessary to account for the symmetry of physical laws.\nIn this theory-oriented extended abstract, we give a precise characterization of this problem and argue that it is not limited to applications in physics. We show that a wide range of learning tasks require symmetry breaking and that equivariance is therefore fundamentally too constraining. We introduce a relaxation of equivariance that allows to deal with this issue. We then show how to build equivariant multilayer perceptrons (Shawe-Taylor, 1989 ###reference_b18###; Ravanbakhsh et al., 2017 ###reference_b15###; Finzi et al., 2021 ###reference_b5###) that can break symmetry. Finally, we propose avenues for future works and practical applications of our framework.\nWe introduce some mathematical background and notations used in the rest of the paper in Appendix A ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Equivariance Preserves Symmetry",
15
+ "text": "It is known that equivariant functions preserve the symmetry of their input. One of the earliest versions of this statement is due to Curie (1894 ###reference_b4###): \u201cthe symmetries of the causes are to be found in the effects\u201d. Chalmers (1970 ###reference_b3###) provided a more mathematical version of this statement, with effects (observed state) being the result of equivariant physical laws acting on causes (initial state). The general idea is captured by the following proposition.\nLet be an equivariant function and denote the stabilizer subgroup of . Then,\nThe proof follows in Section C.1 ###reference_###.\nThis can also be said differently in terms of orbit types (see definition in Appendix A ###reference_###). When the equivariant function is seen as acting on orbits, we must have .\nAn equivariant function therefore cannot map an orbit of type to an orbit of type that is not coarser than (see Figure 1 ###reference_###).\nFor continuous functions, a version of this result holds when inputs are approximately symmetric. In this case, the inability to break the symmetry for symmetric inputs translates to more difficulty in breaking it for approximately symmetric inputs.\nLet be equivariant and Lipschitz, with constant and denote the induced norm. Then,\nThe proof follows in Section C.2 ###reference_###.\nIf an input is close to its transformed version, the images under a continuous equivariant function also have to be close.\nFinally, we highlight an important fact regarding symmetric inputs of finite groups.\nLet and be any non-trivial linear group action of a finite group with faithful representation. Then, the set of symmetric inputs is of measure zero with respect to the Lebesgue measure.\nThe proof follows in Section C.3 ###reference_###. This captures many groups of interest in machine learning. Symmetric inputs are therefore in some sense rare. At first glance, this could suggest that the Curie principle (Proposition 2.1 ###reference_theorem1###) is hardly relevant since the cases in which it would apply are improbable. Things are however not so simple. First, in many domains, such as graphs, the set of actual inputs is discrete. In this case, Proposition 2.3 ###reference_theorem3### does not apply. Second, there could be a significant bias towards symmetric inputs in the data, as these data points often have special properties that make them more common. This is for example often the case in physics. Third, non-injective activation functions, like ReLUs (Nair and Hinton, 2010 ###reference_b13###), can make symmetric activations much more likely in the intermediary layers of a neural network by zeroing out entries. It is therefore important to handle symmetric inputs beyond the constraints imposed by equivariance, as we explain in the next section.\n[Following Curie\u2019s principle, an input cannot be mapped to an output of lower symmetry. In this example, a symmetric digit (orbit type ) cannot be mapped to a 1 (orbit type ). Likewise a 1 cannot be mapped to a 2.][b]\n\n\u2003{subfigure}[Relaxed equivariance solves the symmetry breaking problem by allowing any of the admissible outputs.][b]\n###figure_1### ###figure_2###"
16
+ },
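Proposition 2.1 can be checked numerically with a toy model. The snippet below is an illustration of ours, not code from the paper: a Deep-Sets-style permutation-equivariant layer applied to a fully symmetric input necessarily returns a fully symmetric output, whatever its randomly drawn weights.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=3)       # arbitrary weights

def equivariant_layer(x):
    """Permutation-equivariant map on R^n: a shared affine function of each
    entry and of the mean, followed by a point-wise non-linearity."""
    return np.tanh(a * x + b * x.mean() + c)

x = np.full(5, 0.7)                          # stabilizer of x is all of S_5
y = equivariant_layer(x)
print(np.allclose(y, y[0]))                  # True: the output is constant too

z = rng.normal(size=5)                       # generic input, sanity check of
perm = rng.permutation(5)                    # the equivariance property itself
print(np.allclose(equivariant_layer(z[perm]), equivariant_layer(z)[perm]))
```

No choice of weights can make this layer (or a stack of such layers) send the constant vector to, say, a one-hot vector, exactly as the proposition predicts.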
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Relaxed Equivariance",
21
+ "text": "A version of equivariance that allows breaking the symmetry of inputs and mapping to arbitrary orbit types is necessary. Some applications are detailed in Section 5 ###reference_###. We note that the appropriate notion was introduced by (Kaba et al., 2023 ###reference_b9###) for canonicalization, a problem requiring symmetry breaking. However, their definition applies more generally.\nGiven group actions on and , satisfies relaxed equivariance if , there exists such that\nThe motivation for relaxed equivariance being the correct way to account for symmetry breaking is as follows. First, it captures the idea of symmetry in the task, meaning that the output of the function is predictable under transformation of the input, up to meaningless stabilizing transformations since , with . Second, the output does not need to maintain all the symmetries of the input (see Figure 1 ###reference_###). To see this, notice that for , one possibility allowed by relaxed equivariance is . In this case, we obtain , which by contrast to what we have with equivariance (see Section C.1 ###reference_###), does not impose any constraints on the stabilizer of the output.\nIn Appendix B ###reference_###, we further justify how relaxed equivariance naturally appears in machine learning from first principles."
22
+ },
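As a concrete sanity check of the definition (our sketch, not from the paper), consider the tie-breaking map f that outputs a one-hot vector at the first argmax of its input. It strictly reduces the symmetry of tied inputs, yet the exhaustive check over S_3 below confirms relaxed equivariance; the left-action convention (g · x)_{g(i)} = x_i is our choice of encoding.

```python
import numpy as np
from itertools import permutations

def f(x):
    """Tie-breaking one-hot at the first argmax: breaks input symmetry."""
    out = np.zeros_like(x)
    out[int(np.argmax(x))] = 1.0
    return out

def act(perm, x):
    """Left permutation action: entry i is sent to position perm[i]."""
    y = np.empty_like(x)
    y[list(perm)] = x
    return y

x = np.array([1.0, 2.0, 2.0])       # its stabilizer swaps the two tied entries
stab = [p for p in permutations(range(3)) if np.allclose(act(p, x), x)]

# Relaxed equivariance: for every g there is some h in the stabilizer of x
# with f(g.x) == (g h).f(x).
ok = all(any(np.allclose(f(act(g, x)), act(g, act(h, f(x)))) for h in stab)
         for g in permutations(range(3)))
print(ok)  # True, although f(x) is strictly less symmetric than x
```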
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Breaking Symmetry in Equivariant Multilayer Perceptrons",
27
+ "text": "We now investigate how to build relaxed equivariance into neural networks instead of equivariance. One seemingly ad-hoc solution is sometimes adopted to deal with symmetry breaking, for example by Liu et al. (2019 ###reference_b11###) and Locatello et al. (2020 ###reference_b12###) for graph and set generation. It simply consists of adding noise to the input to break the symmetry and then using an equivariant neural network. Proposition 2.3 ###reference_theorem3### confirms that this procedure has some justification. The input is almost surely mapped to a regular orbit. Then, the equivariant neural network can map the noisy input to an orbit of arbitrary type. However, there are at least two downsides to this approach. First, relaxed equivariance is only respected in expectation, similarly to equivariance when adding noise to data. Second, if the subsequent equivariant neural network is continuous, Proposition 2.2 ###reference_theorem2### indicates that a significant amount of noise will be required to properly break the symmetry, which might hurt generalization.\nTo circumvent these issues, we provide an adaptation of equivariant multilayer perceptrons (E-MLPs) that can handle symmetry breaking (Shawe-Taylor, 1989 ###reference_b18###; Ravanbakhsh et al., 2017 ###reference_b15###; Finzi et al., 2021 ###reference_b5###). E-MLPs provide a standard method to build equivariant neural networks (Bronstein et al., 2021 ###reference_b2###) and consist of stacking linear equivariant layers with point-wise non-linear functions.\nLinear layers with relaxed equivariance can be constructed using the following result:\nLet have representations and on and respectively. Define as the invariant subspace of under and as the projection matrix onto the subspace . Additionally, define be the conjugacy class of some subgroup , and to be the set of inputs stabilized by a group in , e.g. inputs of type .\nThen, for a weight matrix , if there exists a such that for all left cosets\nwhere is an arbitrary coset representative, then the map satisfies relaxed equivariance.\nThe proof and some discussion follow in Section C.5 ###reference_###. Additionally, for permutation groups standard point-wise activation functions can be used, thanks to the fact that they satisfy relaxed equivariance (Section D.1 ###reference_###), and that relaxed equivariance is compatible with composition (Section D.2 ###reference_###)."
28
+ },
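For contrast with the E-MLP route, here is a minimal sketch of the noise-injection baseline discussed above (toy weights of our choosing, not from the cited works): the bare equivariant map cannot move a constant input off the diagonal, while the noisy forward pass does break the tie, but only stochastically, and a continuous network can only break it by as much as the noise allows (Proposition 2.2).

```python
import numpy as np

rng = np.random.default_rng(0)

def equivariant_net(x, w_self=1.5, w_mean=-0.5):
    """Toy permutation-equivariant linear map (shared weight + mean pooling)."""
    return w_self * x + w_mean * x.mean()

def noisy_forward(x, scale=0.1):
    """Noise injection: perturb the input so that it almost surely lands on
    a regular orbit (Proposition 2.3), then apply the equivariant network."""
    return equivariant_net(x + scale * rng.normal(size=x.shape))

x = np.zeros(4)                  # fully symmetric input
print(equivariant_net(x))        # [0. 0. 0. 0.]: the symmetry cannot break
print(noisy_forward(x))          # distinct entries: symmetry broken, but any
                                 # equivariance now holds only in expectation
```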
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Applications",
33
+ "text": "Our analysis provides a general framework for symmetry breaking in deep learning and applies to multiple domains. We give a few examples thereafter of domains for which we think symmetry breaking analysis could be an exciting future direction (see Figure 2 ###reference_###)."
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Conclusion",
39
+ "text": "In this paper, we have analyzed a fundamental limitation of equivariant functions in handling symmetry breaking. We have shown that it is important to account for it in multiple applications in machine learning by relaxing the equivariance constraint. We have finally provided a way to adapt E-MLPs to satisfy the relaxed version equivariance instead of the standard one. We hope this constitutes a first effort to better understand symmetry breaking in machine learning. Many avenues are still left to explore for the extension of this work. First, experimental testing of our claims in different domains is necessary. Second, the constraint stated in Theorem 4.1 ###reference_theorem1### could be costly to solve for large groups; making it scale sublinearly in group size would be desirable. Finally, alternative ways to achieve relaxed equivariance could be explored, notably a probabilistic approach where the symmetry equivalent images are sampled instead of being deterministically computed by a network.\nWe wish to thank Tara Akhound-Sadegh, Yan Zhang and the reviewers for their valuable comments. This work is in part supported by the CIFAR AI chairs program and NSERC Discovery. S.-O. K.\u2019s research is also supported by IVADO, FRQNT and the DeepMind Scholarship."
40
+ }
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Background",
47
+ "text": "In the following section, we introduce some useful notions on group actions and equivariant functions. The results we refer to can be found in elementary textbook on group theory, for example Pinter (2010 ###reference_b14###)."
48
+ },
49
+ {
50
+ "section_id": "Appendix 2",
51
+ "parent_section_id": null,
52
+ "section_name": "Appendix B First-principle Derivation of Relaxed Equivariance",
53
+ "text": "We are in general interested in learning tasks for which the underlying distribution possesses some symmetry. For predictive modelling, given some group actions on and , that means that underlying conditional distribution satisfies . This is similar when modelling data conditioned on a latent variable with . When we wish for the model to approximate the full distribution on (typically when is finite and small), equivariance with the action defined on functions follows straightforwardly. In that case, we assume , where is the set of probability distributions on and obtain\nHowever, in many situations, we wish to obtain a deterministic model giving an output that maximizes the probability, rather than modelling the full distribution; e.g., Maximum a Posteriori instead of the full posterior Hastie et al. (2009 ###reference_b7###) (note that a similar argument applies when trying to approximate the distribution by a simpler one).\nIn this case, we define as\nwhere the is a set since the maximum may not be unique and is a choice function that selects a unique element.\nWe show in Section C.4 ###reference_###, that if the distribution is symmetric under some group action, then must be a union of orbits of the stabilizer of when acting on . This is simply because, then, some probabilities are the same by symmetry.\nWe now assume that is a unique orbit. In a sense, this amounts to the idea that all the symmetry of the model is completely captured by the transformation group . We can then prove the following theorem:\nLet be defined by Equation 8 ###reference_###. If is symmetric under some action of and the set is a unique orbit, then satisfies the relaxed equivariance condition.\nThe proof is given in Section C.4 ###reference_###.\nRelaxed equivariance therefore naturally arises as a requirement for deterministic models under symmetric distributions. The same applies when is a function that generates samples from a latent variable when the underlying conditional distribution is symmetric."
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "Appendix C Proofs",
+ "text": "For any and , we have\nFrom equivariance of , we also have\nThus,\nThe stabilizer of is therefore at least , which completes the proof.\nIf is Lipschitz with constant , we have\nFrom equivariance of , we find\nwhich completes the proof.\nThe set is equal to . We will show that for each , the set of elements of stabilized by is of measure zero. Since the union is over a finite set, will therefore also be of measure zero.\nThe set of elements stabilized by is given by the solutions of the equation . The stabilizer is therefore the eigenspace of with eigenvalues 1. If is a faithful representation, then for any , . However, for any linear operator other than , the dimension of eigenspaces with eigenvalue 1, if they exist, must be . But, any subspace of of dimension has measure zero with respect to the Lebesgue measure. Therefore, the set of elements stabilized by any is of measure zero.\nThis completes the proof.\nWe introduce the following lemmas\nLet for all . Then,\nFrom the symmetry of , and based on the definition of the stabilizer, we have for all\nTherefore,\nwhich concludes the proof.\nLet for all . Then,\nFrom the symmetry , we have for all\nTherefore,\nwhich concludes the proof.\nWe now provide the proof of Theorem B.1 ###reference_theorem1###.\nWe have\nUsing Lemma C.4 ###reference_theorem4###, we therefore have\nUsing Lemma C.6 ###reference_theorem6###, we obtain\nUsing the assumption that the is only one orbit, we have\nThis is equivalent to saying that there exists a such that\nwhich is the relaxed equivariance condition.\nFirst, we show that if the condition Equation 2 ###reference_### is satisfied, then for all and for all , there exists a such that the constraint\nis satisfied.\nFor some , consider the set of elements that belong to the same coset of the stabilizer of , e.g. the set . For all these group members, the constraint Equation 28 ###reference_### can be satisfied with the same . We can therefore have for all these elements,\nwhere and a unique chosen arbitrarily in .\nBy definition of the stabilizer, we have\nThen, we know that by definition the projection maps onto . Thus, for any , we have\nTherefore, if for all cosets in , Equation 34 ###reference_### is satisfied with an arbitrary representative, Equation 28 ###reference_### is satisfied for all and .\nSecond, we prove that for all orbits, and for any , there must be a .\nFor any , consider an arbitrary representative . It must be that for some . Since and are conjugate, there exists a such that . Since stabilizers of elements in the same orbit are conjugate, we have . Therefore, .\nFinally, we invoke the orbit consistency property (Section D.3 ###reference_###) to show that for any orbit , since there is an , Equation 34 ###reference_### must be statisfied for any . Since this is true for any , Equation 34 ###reference_### also holds for any . Therefore, the map satisfies relaxed equivariance.\nFor the coset containing the identity element, the representative can selected as the identity itself, such that there is no constraint. This therefore results in constraints.\nNote that contrarily to standard equivariance constraints like in (Finzi et al., 2021 ###reference_b5###), it does not follow from these constraints that if\na similar constraint is also satisfied for . It is therefore not possible to straightforwardly reduce the constraints to a set of generators."
+ },
+ {
+ "section_id": "Appendix 4",
+ "parent_section_id": null,
+ "section_name": "Appendix D Properties of Relaxed Equivariance",
+ "text": "This property is trivially satisfied, but it is still useful to formulate it explicitly.\nLet be equivariant. Then, satisfies relaxed equivariance.\nIf is equivariant:\nSince , satisfies the relaxed equivariance condition.\nLet and satisfy relaxed equivariance. Then satisfies relaxed equivariance.\nWe have\nwhere .\nThen,\nwhere . Since , we have and this completes the proof.\nLet act on and . Assume that acts transitively on , such that is a single orbit. For any and , if there exists a such that\nthen satisfies the relaxed equivariance condition.\nAny can be written as for some .\nWe therefore have\nFrom Equation 40 ###reference_###, we have\nfor some .\nFrom Equation 40 ###reference_###, we also know that\nfor some . Therefore,\nReplacing in 42 ###reference_###, we obtain\nSince we have and , we have\nwhere . This completes the proof."
+ }
+ ],
+ "tables": {},
+ "image_paths": {
+ "1(a)": {
+ "figure_path": "2312.09016v2_figure_1(a).png",
+ "caption": "Figure 1: Illustration of the symmetry breaking problem with a function equivariant to C_{4}=\\langle c\\rangle.",
+ "url": "http://arxiv.org/html/2312.09016v2/x1.png"
+ },
+ "1(b)": {
+ "figure_path": "2312.09016v2_figure_1(b).png",
+ "caption": "Figure 1: Illustration of the symmetry breaking problem with a function equivariant to C_{4}=\\langle c\\rangle.",
+ "url": "http://arxiv.org/html/2312.09016v2/x2.png"
+ },
+ "2(a)": {
+ "figure_path": "2312.09016v2_figure_2(a).png",
+ "caption": "Figure 2: Some applications for which symmetry breaking is relevant.",
+ "url": "http://arxiv.org/html/2312.09016v2/x3.png"
+ },
+ "2(b)": {
+ "figure_path": "2312.09016v2_figure_2(b).png",
+ "caption": "Figure 2: Some applications for which symmetry breaking is relevant.",
+ "url": "http://arxiv.org/html/2312.09016v2/x4.png"
+ },
+ "2(c)": {
+ "figure_path": "2312.09016v2_figure_2(c).png",
+ "caption": "Figure 2: Some applications for which symmetry breaking is relevant.",
+ "url": "http://arxiv.org/html/2312.09016v2/x5.png"
+ },
+ "2(d)": {
+ "figure_path": "2312.09016v2_figure_2(d).png",
+ "caption": "Figure 2: Some applications for which symmetry breaking is relevant.",
+ "url": "http://arxiv.org/html/2312.09016v2/x6.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Machine learning for combinatorial optimization: a methodological tour d\u2019horizon.",
+ "author": "Yoshua Bengio, Andrea Lodi, and Antoine Prouvost.",
+ "venue": "European Journal of Operational Research, 290(2):405\u2013421, 2021.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Geometric deep learning: Grids, groups, graphs, geodesics, and gauges.",
+ "author": "Michael M Bronstein, Joan Bruna, Taco Cohen, and Petar Veli\u010dkovi\u0107.",
+ "venue": "arXiv preprint arXiv:2104.13478, 2021.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Curie\u2019s principle.",
+ "author": "Alan F Chalmers.",
+ "venue": "The British Journal for the Philosophy of Science, 21(2):133\u2013148, 1970.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Sur la sym\u00e9trie dans les ph\u00e9nom\u00e8nes physiques, sym\u00e9trie d\u2019un champ \u00e9lectrique et d\u2019un champ magn\u00e9tique.",
+ "author": "Pierre Curie.",
+ "venue": "Journal de physique th\u00e9orique et appliqu\u00e9e, 3(1):393\u2013415, 1894.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups.",
+ "author": "Marc Finzi, Max Welling, and Andrew Gordon Wilson.",
+ "venue": "In International Conference on Machine Learning, pages 3318\u20133328. PMLR, 2021.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "The symmetry perspective: from equilibrium to chaos in phase space and physical space, volume 200.",
+ "author": "Martin Golubitsky and Ian Stewart.",
+ "venue": "Springer Science & Business Media, 2002.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "The elements of statistical learning: data mining, inference, and prediction, volume 2.",
+ "author": "Trevor Hastie, Robert Tibshirani, and Jerome H Friedman.",
+ "venue": "Springer, 2009.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Equivariant networks for crystal structures.",
+ "author": "S\u00e9kou-Oumar Kaba and Siamak Ravanbakhsh.",
+ "venue": "In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "Equivariance with learned canonicalization functions.",
+ "author": "S\u00e9kou-Oumar Kaba, Arnab Kumar Mondal, Yan Zhang, Yoshua Bengio, and Siamak Ravanbakhsh.",
+ "venue": "In International Conference on Machine Learning, pages 15546\u201315566. PMLR, 2023.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Expressive sign equivariant networks for spectral geometric learning.",
+ "author": "Derek Lim, Joshua Robinson, Stefanie Jegelka, Yaron Lipman, and Haggai Maron.",
+ "venue": "In ICLR 2023 Workshop on Physics for Machine Learning, 2023.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Graph normalizing flows.",
+ "author": "Jenny Liu, Aviral Kumar, Jimmy Ba, Jamie Kiros, and Kevin Swersky.",
+ "venue": "Advances in Neural Information Processing Systems, 32, 2019.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Object-centric learning with slot attention.",
+ "author": "Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf.",
+ "venue": "Advances in Neural Information Processing Systems, 33:11525\u201311538, 2020.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Rectified linear units improve restricted Boltzmann machines.",
+ "author": "Vinod Nair and Geoffrey E Hinton.",
+ "venue": "In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807\u2013814, 2010.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "A book of abstract algebra.",
+ "author": "Charles C Pinter.",
+ "venue": "Courier Corporation, 2010.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Equivariance through parameter-sharing.",
+ "author": "Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos.",
+ "venue": "In International Conference on Machine Learning, pages 2892\u20132901. PMLR, 2017.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "E(n) equivariant graph neural networks.",
+ "author": "Victor Garcia Satorras, Emiel Hoogeboom, and Max Welling.",
+ "venue": "In International Conference on Machine Learning, pages 9323\u20139332. PMLR, 2021.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Your dataset is a multiset and you should compress it like one.",
+ "author": "Daniel Severo, James Townsend, Ashish J Khisti, Alireza Makhzani, and Karen Ullrich.",
+ "venue": "In NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Building symmetries into feedforward networks.",
+ "author": "J. Shawe-Taylor.",
+ "venue": "In 1989 First IEE International Conference on Artificial Neural Networks (Conf. Publ. No. 313), pages 158\u2013162, 1989.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Finding symmetry breaking order parameters with Euclidean neural networks.",
+ "author": "Tess E. Smidt, Mario Geiger, and Benjamin Kurt Miller.",
+ "venue": "Phys. Rev. Research, 3:L012002, Jan 2021.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Top-N: Equivariant set and graph generation without exchangeability.",
+ "author": "Clement Vignac and Pascal Frossard.",
+ "venue": "In International Conference on Learning Representations, 2022.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Symmetry.",
+ "author": "Hermann Weyl.",
+ "venue": "In Symmetry. Princeton University Press, 1952.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Multiset-equivariant set prediction with approximate implicit differentiation.",
+ "author": "Yan Zhang, David W Zhang, Simon Lacoste-Julien, Gertjan J. Burghouts, and Cees G. M. Snoek.",
+ "venue": "In International Conference on Learning Representations, 2022.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2312.09016v2"
+ }
20240322/2312.10070v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2312.12973v2.json ADDED
@@ -0,0 +1,570 @@
+ {
+ "title": "Sparse Mean Field Load Balancing in Large Localized Queueing Systems",
+ "abstract": "Scalable load balancing algorithms are of great interest in cloud networks and data centers, necessitating the use of tractable techniques to compute optimal load balancing policies for good performance. However, most existing scalable techniques, especially asymptotically scaling methods based on mean field theory, have not been able to model large queueing networks with strong locality. Meanwhile, general multi-agent reinforcement learning techniques can be hard to scale and usually lack a theoretical foundation. In this work, we address this challenge by leveraging recent advances in sparse mean field theory to learn a near-optimal load balancing policy in sparsely connected queueing networks in a tractable manner, which may be preferable to global approaches in terms of wireless communication overhead. Importantly, we obtain a general load balancing framework for a large class of sparse bounded-degree wireless topologies. By formulating a novel mean field control problem in the context of graphs with bounded degree, we reduce the otherwise difficult multi-agent problem to a single-agent problem. Theoretically, the approach is justified by approximation guarantees. Empirically, the proposed methodology performs well on several realistic and scalable wireless network topologies as compared to a number of well-known load balancing heuristics and existing scalable multi-agent reinforcement learning methods.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "1. Introduction",
+ "text": "The increasing demand for computational resources has led to increased research interest in parallel and distributed systems such as data centres and large-scale cloud networks, which can be accurately modelled by large-scale queueing networks due to their stochastic nature (Rodriguez and Guillemin, 2018 ###reference_b33###; Walker et al., 2022 ###reference_b41###). As a result, there is also a renewed interest in studying and designing scalable load balancing algorithms to allow these parallel systems to operate efficiently by reducing queue lengths, minimizing job waiting times, increasing system throughput, etc. (Mishra et al., 2020 ###reference_b27###).\nThis work also focuses on load balancing policies in large networks with the goal of reducing overall job drops in the system.\nHere, we will consider sparsely connected queueing systems,\nrather than assuming that all agents can access to all the queues.\nMany successful centralized and decentralized algorithms have already been proposed where the load balancer (agent) sends jobs to the available parallel servers (queues). These include (i) join-the-shortest-queue (JSQ) to reduce expected job delay when servers are homogeneous, have infinite buffer space, and independent, identical (i.i.d.), and exponentially distributed job service times (Winston, 1977 ###reference_b42###), (ii) shortest-expected-delay (SED) to minimize the mean response time of jobs when service times are exponential, but with different service rates (Selen et al., 2016 ###reference_b36###), (iii) join-idle-queue (JIQ) to balance the overall system load while reducing communication overhead between load balancers and servers (Lu et al., 2011 ###reference_b26###), and more, see e.g. (der Boor et al., 2022 ###reference_b9###; Mukherjee et al., 2018 ###reference_b29###).\nHowever, the first two aforementioned algorithms are asynchronous and assume instantaneous knowledge of queue states, which is not true in practice, especially in large systems. And for JIQ, queues also need to be maintained at the scheduler end which we do not consider in our system model.\nOften, large queueing systems\nare modelled as a decentralized, multi-agent system with an underlying graph topology.\nDifferent types of graphs can be used to represent different types of (wireless) network topologies and information structures, for example a cube-connected cycle was used to represent a versatile network of agents in parallel computing (Habibian and Patooghy, 2017 ###reference_b16###).\nThe vertices in the graph represent agents (load balancers) and the edges represent the (possibly wirelessly connected) neighbourhood of each agent, resulting in local states, actions and information exchange.\nIn this work, all agents with access to the same queues form a neighbourhood, and each agent is assumed to only allocate incoming load to queues within their own neighbourhood."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "2. Queueing System",
+ "text": "We consider a system with a set of schedulers/agents and servers, where each server has its own queue with finite buffer capacity, .\nThe queues work in a first-in-first-out (FIFO) manner, and servers take the jobs one at a time from their queue, processing them at an exponential rate . Once a job has been processed, it leaves the system forever.\nSomewhat to the successful power-of- policies (Mitzenmacher, 2001 ###reference_b28###), we assume that each scheduler accesses only a limited number (e.g. out of the ) of available queues and can only allocate its arriving jobs to these accessible queues, with and fixed.\nWe assume and associate each server (queue) with one agent, though it is possible to extend the model to varying or even random numbers of queues per agent.\nNote that all connections between agents are assumed to be wireless.\nJobs arrive to the system accordingly to a Markov modulated Poisson process (Fischer and Meier-Hellstern, 1993 ###reference_b12###) with total rate , and then are divided uniformly amongst all agents, which is also equivalent to independent Poisson processes at each agent given the shared arrival rate by Poisson thinning (Harremo\u00ebs et al., 2007 ###reference_b17###).\nThe agent takes the allocation action based on a learned or predetermined, memory-less policy , which considers current queue state information of its accessible queues.\nThis information is periodically sent by servers to neighbouring agents, such that agents only obtain information on queues, reducing the amount of messages to be sent.\nIf the job allocation is done to an already full queue, it is dropped and results in a penalty.\nSimilarly, jobs depart from a queue at rate .\nThe goal of the agent is to minimize the overall jobs dropped."
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "2.1. Locality and Scalability",
+ "text": "Note that in contrast to many other analyzed settings, in this work the out of available queues are not sampled randomly for each package, but instead fixed for each agent according to some predefined topology (see also Section 3.1 ###reference_###). In other words, we assume a strong concept of locality where agents have access only to a very limited subset of queues, implying also a strong sense of partial observability in the system.\nThe value for therefore depends on the type of graph topology being considered.\nNote that an agent always has access to its own queue.\nAccordingly, our queueing model contains an associated static underlying\nundirected graph , where is the set of wireless edges between vertices (agents) based on the queues accessible by the agent.\nAn agent will have an edge to another agent whenever they have access to the queues associated to that agent, and vice versa. Therefore, an agent can have a maximum of edges (neighbours) to other agents.\nWe will denote the set of neighbours of agent as .\nThe motivation to use a graph structure with bounded degree arises from the fact that the corresponding model allows us to find more tractable local solutions for large systems. We need large systems that are structured in some sense, otherwise the system is too heterogeneous to find a tractable and sufficiently structured solution. Therefore, we look at systems where the structure can be expressed graphically. To avoid confusion, note that the framework is not limited to regular graphs. The framework already includes basic regular graphs such as grids and toruses, but also allows many other irregular random graphs such as the configuration model, see also Section 3.1 ###reference_###. Here we then apply RL and MFC to find otherwise hard-to-compute near-optimal queueing strategies in this highly local setting. The simplicity of the queueing strategy \u2013 instantaneously assigning a packet to one of the neighbouring queues based on periodic and local information \u2013 not only allows for fast execution and high scalability, since information does not need to be exchanged for each incoming packet, but also allows for easy addition of more nodes to scale the system to arbitrary sizes.\nSo in the following, we will obtain tractable solutions by first formulating the finite queueing system, then formulating its limiting mean field equivalent as the system grows, and lastly applying (partially-observed) RL and MFC."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "2.2. Finite Queueing System",
+ "text": "To begin, consider the following system. Each agent is associated with a local state, and local action . The state will be the current queue filling of the queues from accessible to agent . The set of actions could be the set of these accessible queues, to which new packets are assigned. Hence, we have a finite set of action and state space.\nThe global state of the system is given by . Similarly, the global action is defined as ."
+ },
+ {
+ "section_id": "2.2.1",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.1. Synchronous model",
+ "text": "We want the agents to work in a synchronized manner, e.g. to model wireless communication delays, and also for the servers to send their queue state information to the respective agents once every fixed time interval. To achieve this, we model our system at discrete decision epochs , where is the synchronization delay, the time passing between each decision epoch. The interval may be understood as a type of synchronization or update delay, assuming that it takes amount of time to obtain updated local information from the servers and update the queueing strategy (e.g. routing table). Note that we may easily adjust to approximate continuous time. Using this delay, we can also model our system as a MFC-based RL problem and learn the optimal policy using state-of-the-art RL-based algorithms (Sutton and Barto, 2018 ###reference_b38###)."
+ },
+ {
+ "section_id": "2.2.2",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.2. Localized policies",
+ "text": "For the moment, let each agent be associated with a localized policy parameterized by which defines a distribution of the local action conditioned on the states of its neighbourhood .\nThe size of the neighbourhood depends on and the type of graph used.\nGiven the global state , each agent takes an action which is independently drawn from .\nIn other words, agents do not coordinate between each other, and decide independently where to send arriving packets.\nThe parameters parametrize the tuple of localized policies , with the joint policy as its product , as agents act independently.\nNote: We do not include the actions of other agents, because all agents may take an action at the same time, so we will not have this information and each agent will act independently."
+ },
+ {
+ "section_id": "2.2.3",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.3. Symmetrized packet assignments",
+ "text": "For tractable analysis, it is necessary to handle neighbors anonymously, i.e., providing the same probability of assigning to any particular neighbors with the same queue states.\nIndeed, to some extent such symmetrization is necessary to obtain a useful mean field limit. Otherwise, behaviour depends strongly on the ordering of neighbours, which we cannot model to obtain the mean field limit.\nConsider a simple one-dimensional line graph where nodes are connected in a straight path. If we allow actions that are not symmetric, e.g. if all agents send all their packets to the first of their two neighbours under some ordering of their neighbours, then we obtain different behaviour depending on this ordering. Cutting the graph into sets of three nodes, we could define the center node of each set to be the first neighbour for the other nodes, leading to a packet arrival rate of at the center node. On the other hand, if we define first neighbours such that there are no overlapping first neighbours between any agents, then any node will have a packet arrival rate of at most . Hence, a certain symmetrization is natural for the scaling limit.\nAt most, we could consider a solution that anonymizes but still differentiates between neighbors whenever one is fuller than the other. This may be important especially if e.g. queue serving rates are heterogeneous, and the according framework is straightforward. However, in our experiments we found that such an assumption complicates training of a RL policy due to the significant addition of action space complexity. Therefore, trading off between policy expressiveness and RL training stability, we assume that any agent may choose to either send to their own queue, or offload uniformly randomly to any of their neighbours. In other words, for simplicity of learning we consider the actions for all agents , of either sending to its own queue (), or randomly sending to a neighbour (), see Figure 1 ###reference_### for an example visualization. This assumption symmetrizes the model and obtains better performance, see Appendix.\n###figure_1###"
+ },
+ {
+ "section_id": "2.2.4",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.4. Dynamical system model",
+ "text": "The state of the agent is the state of its associated queue, and is affected by the actions of itself and neighbouring agents. By Poisson thinning, at every epoch , given the current global state and action , the next local state of agent can be calculated independently only depending on the current state and actions , i.e. ,\nwhere each can be computed by the Kolmogorov equation for continuous-time Markov chains, given that the rate of an arrival at a queue is given by the sum of arrival rates assigned by the agents, such that the total arrival rate is , and the package departure rate of each queue is fixed to the serving rate .\nEach agent is associated with a local stage reward and according global stage reward . The reward is given in terms of a penalty for job drops due to each action .\nThe objective is then to find the localized policy tuple such that the global reward is maximized, starting from initial state distribution ."
+ },
+ {
+ "section_id": "2.3",
+ "parent_section_id": "2",
+ "section_name": "2.3. Sparse Graph Mean Field System",
+ "text": "In this section, we will consider limits of large graphs and the behaviour in such systems, by using mean field theory from (Lacker et al., 2023 ###reference_b19###) which focuses on an abstract theoretical framework, that is (i) unrelated to particular applications and (ii) does not explicitly consider control, which we integrate into the states of their discrete-time model.\nIn order to obtain a limiting mean field system, we assume a shared policy for all agents . This assumption is natural, as it often gives state-of-the-art performance in various RL benchmarks (Yu et al., 2022 ###reference_b43###; Christianos et al., 2020 ###reference_b4###; Papoudakis et al., 2021 ###reference_b31###) and also allows immediate application to arbitrary sizes of the finite system, as all schedulers can use the same policy. Furthermore, it will allow to scale up the system at any time by simply adding more schedulers and queues, without retraining a policy.\nIn contrast to typical mean field games (Lasry and Lions, 2007 ###reference_b20###) and mean field control (Bensoussan et al., 2013 ###reference_b2###), we cannot reduce the entire system exactly to a single representative agent and its probability law. This is because in a sparse setting, the neighbourhood state distribution of any agent remains an empirical distribution of finitely many neighbours and hence cannot follow a law of large numbers into a deterministic mean field distribution. Therefore, the neighbourhood and its state distribution remain stochastic and it is necessary to model the probability law of entire neighbourhoods. The modelling of such graphical neighbourhoods is formalized in the following."
+ },
+ {
+ "section_id": "2.3.1",
+ "parent_section_id": "2.3",
+ "section_name": "2.3.1. Topological structure",
+ "text": "To make sense of the limiting topology of our system formally, we introduce technical details, letting finite systems be given by sequences of (potentially random) finite graphs and initial states converging in probability in the local weak sense to some limiting random graph . Here, we assume that graphs are of bounded degree, i.e. there exists a finite degree such that all nodes have at most neighbours. In other words, we define a sequence of systems of increasing size according to a certain topology, which formalizes the scalable architectural choice of a network structure, such as a ring topology or torus. In the following paragraph, we give the formal definition, which may be skimmed by the reader.\nFor convergence in the local weak sense, define first the space of marked rooted graphs , the elements of which essentially constitute a snapshot of the entire system at any point in time. Such a marked rooted graph consists of a tuple , where is a graph, is a particular node of (the so-called root node), and defines states (\u201dmarks\u201d) for each node in , i.e. the current queue filling of queues associated to any agent (node). Denote by the marked rooted subgraph of vertices in the -hop neighbourhood of the root node . The space is metrized such that sequences whenever for any , there exists such that for all there exists a mark-preserving isomorphism , i.e. with for all nodes (local convergence).\nWe will abbreviate elements as , and their node sets as whenever it is clear from the context. Then, finally, convergence in the local weak sense is formally defined by\n\nin probability for every continuous and bounded , where denotes the connected component of in .\nIn other words, wherever we randomly look in the graph, there will be little difference between the distribution of the random local system state (including its topology), and of the limiting . This holds true, e.g. if we initialize all queues as empty or i.i.d., and consider certain types of topologies such as grids, see Section 3.1 ###reference_###. More details also in (Lacker et al., 2023 ###reference_b19###)."
+ },
+ {
+ "section_id": "2.3.2",
+ "parent_section_id": "2.3",
+ "section_name": "2.3.2. System model",
+ "text": "As discussed in the prequel, consider sequences of possibly random rooted marked graphs (i.e. finite graphs and initial states) with agents , converging in the local weak sense to the potentially infinite-sized system . For a moment, consider a decentralized, stochastic control policy such that an agent chooses to offload to a random neighbour (action ) with probability , depending on its own queue state only. We can then consider the probability law of the limiting system as the state of a single-agent MDP, similar to the MFC MDP formalism in standard MFC (Pham and Wei, 2018 ###reference_b32###). This law is in essence the probability that the graph around any randomly chosen queue is in a certain state. At least formally, we do so by identifying the choice of at any time as an action, and letting the state of the MFC MDP be given by the law of the time-variant system resulting from the application of when starting at some initial law.\nDeferring for a moment (until Section 2.4 ###reference_###) (i) the question of how to simulate, represent or compute the probability law of a possibly infinite-sized rooted marked graph, and (ii) the detailed partial information structure (i.e. observation inputs) of the following policy, we consider a hierarchical upper-level MFC policy that, for any current MFC MDP state, assigns to all agents at once such a policy at time . To reiterate, the decentralized control policy at any time now becomes the action of the MFC MDP, where the dynamics are formally given by the usually infinite-dimensional states , i.e. the probabilities of the limiting rooted marked graph being in a particular state after applying a sequence of policies . The cost function for any such upper-level policy is then given by the number of expected packet drops per agent in the limiting system,\nUsing analogous definitions for the finite system based on the topologies and initial states , we can apply the upper-level MFC policy to each agent and use the resulting number of average packet drops at time . Using our newly introduced graphical formulation, we thus write the cost function in the finite system as"
+ },
+ {
+ "section_id": "2.3.3",
+ "parent_section_id": "2.3",
+ "section_name": "2.3.3. Optimality guarantees",
+ "text": "One can now show that the performance of the finite system is approximately guaranteed by the performance in the limiting mean field system. Informally, this means that for any two policies, if the performance of one policy is better in the mean field system, it will also be better in large finite systems.\nConsider a sequence of finite graphs and initial states converging in probability in the local weak sense to some limiting . For any policy , as , we have convergence of the expected packet drop objective\nAs a result, we have obtained a limiting mean field system for large systems, which may be more tractable for finding improved load balancing schemes. In particular, if we have a number of policies between which to choose, then the policy performing best in the MFC system will also perform best in sufficiently large systems. Here, the set of MFC policies can include well-known algorithms such as JSQ, which is known to be optimal in many special cases.\nConsider a finite set of MFC policies with differing MFC objective values to . Let be the policy with maximal MFC objective value . Then, there exists such that for all , we also have optimality in the finite system\nThe proofs are provided in Appendix and use the theoretical framework of (Lacker et al., 2023 ###reference_b19###) for general dynamical systems.\nIn our experiments, we will also allow for randomized assignments per packet to further improve performance empirically, such that all packets arriving at a particular scheduler during the entirety of any epoch are independently randomly either allocated to the local queue or a random neighbour, i.e. formally we replace offloading choices by probabilities for offloading each packet independently, .\nThe MFC MDP formulation and theoretical guarantees give us the opportunity to use MFC together with single-agent control such as RL in order to find good scalable solutions while circumventing hard exact analysis and improving over powerful techniques such as JSQ in large queueing systems. All that remains is to solve the limiting MDP, as MFC has formally converted MARL into single-agent RL."
+ },
+ {
+ "section_id": "2.4",
+ "parent_section_id": "2",
+ "section_name": "2.4. Reinforcement Learning",
+ "text": "Building upon the preceding MFC formulation, all that remains to find optimal load balancing in large sparse graphs, is to solve the MFC problem. Due to its complexity, the limiting MFC problem will be solved by considering it as a variant of an MDP, i.e. a standard formulation for single-agent centralized RL, which will also allow a model-free design methodology. Nonetheless, the training is still performed on the preceding finite graph multi-agent system, as we cannot evaluate the typically infinite limiting system .\nMore precisely, we will consider a partially-observed MDP (POMDP) variant of the problem, since at any time , we cannot evaluate the potentially infinite system exactly to obtain action . Instead, we will use the empirical distribution as an observation that is only correlated with the state of the entire system, but of significantly lower dimensionality (-dimensional vector, instead of plus additional topological information). This also means that we need not consider the limiting system of potentially infinite size, or include the information of root nodes when considering network topologies in the following, which is intuitive as there is no notion of global root in local queueing systems.\nHere, the centralized RL controller could have estimated or exact global information on the statistics of the queue states of all nodes, or alternatively we can understand the approach as an optimal open-loop solution for any given known starting state, since the limiting MFC dynamics on are deterministic, and therefore the centralized RL controller can be used to compute an optimal deterministic sequence of control, which can then be applied locally.\nIn our experiments we also allow to simply insert the empirical distribution of the locally observed neighbour queue states at each node (a simple estimate of the true empirical distribution),\nto instead sample decision rules for each agent according to the local empirical state distribution, which is verified to be successful. Thus, our approach leans into the centralized training decentralized execution scheme (Zhang et al., 2021 ###reference_b44###) and learns a centralized policy, which can then be executed in a decentralized manner among all schedulers. As desired, our approach is applicable to localized queueing systems.\nIn order to solve the POMDP, we apply the established proximal policy optimization (PPO) RL method (Schulman et al., 2017 ###reference_b35###; Yu et al., 2022 ###reference_b43###) with and without recurrent policies, as commonly and successfully used in POMDP problems (Ni et al., 2022 ###reference_b30###). PPO is a policy gradient method with a clipping term in the loss function, such that the policy does not take gradient steps that are too large while learning (Schulman et al., 2017 ###reference_b35###; Yu et al., 2022 ###reference_b43###). The overall training algorithm is given in Appendix, which also shows how to analogously apply a trained MFC policy to a finite system."
+ },
+ {
+ "section_id": "2.4.1",
+ "parent_section_id": "2.4",
+ "section_name": "2.4.1. Training on a finite queueing system",
+ "text": "Note that the considered observation and any other variables such as the number of dropped packets at the root node can indeed be computed without evaluating an infinite system until any finite time , since at any time , at most any node less than steps away from the root node may have had an effect on the root node state. Therefore, the computation of root node marginals can be performed exactly until any finite time , even if the limiting system consists of an infinitely large graph . However, the cost of such an approach would still be exponential in the number of time steps, as a -hop neighbourhood would typically include exponentially many nodes, except for very simple graphs with degree .\nWe therefore consider alternatives: For one, we could apply a sequential Monte Carlo approach to the problem by simulating instances of a system from times to some terminal time that consists of all nodes less than away from the root node. However, this means that we would have to simulate many finite systems in parallel. Instead, using the fact that the empirical distribution of agent states in the finite system converges to as seen in Theorem 2.1 ###reference_theorem1###, it should be sufficient to evaluate via the empirical distribution of a sufficiently large system.\nThus, we simulate only a single instance of a large system with many nodes by using it for the limiting MFC, which is equivalent to learning directly on a large finite system. In other words, our approach learns load balancing strategies on a finite system by using the MFC formalism for tractability of state and action representations, with theoretical guarantees."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "3. Experiment Setup",
+ "text": "In this section, we give an explanation of the different types of graph topologies we have used to verify our aforementioned theoretical analysis. We also give a description of the different used load balancing algorithms."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "3.1. Topologies",
+ "text": "A brief description of the practical topologies of interest, most of which fulfill the convergence in the local weak sense defined earlier is given here, where the agents are numbered from to .\n(i) First, we consider the simple CYC-1D graph, which has extensively been used in the study of queueing networks and is highly local (Shortle et al., 2018 ###reference_b37###). Each agent has access to other queues/servers, , while the edge nodes, and , form a connection. (ii) Next, we define the cube-connected cycle (CCC) graph. This undirected cubic graph has been considered as a network topology for parallel computing architectures (Habibian and Patooghy, 2017 ###reference_b16###). It is characterized by the cycle order, which is the degree of each node and defines the total number of nodes in the graph. (iii) We also apply the torus (TORUS) grid graph that has been repeatedly used to represent distributed systems for parallel computing (Deveci et al., 2019 ###reference_b10###), as a higher-dimensional extension of the CYC-1D graph. We here consider a -D torus which is a rectangular lattice having rows and columns. Every node connects to its nearest neighbours and the corresponding edge nodes are also connected. The total nodes in a -D torus are .\n(iv) Another highly general topology is the configuration model (CM). This sophisticated generalized random graph is one of the most important theoretical models in the study of large networks (Fosdick et al., 2018 ###reference_b13###). In contrast to the prior highly local topologies, the CM can capture realistic degree distributions under little clustering. Here every agent is assigned a certain degree, making the graph heterogeneous as compared to every agent having the same degree as in previous mentioned graphs. The degree sequence we have used is in the set with equal likelihood.\n(v) And lastly, for an ablation study on graphical convergence assumptions, we use the Bethe (BETHE) lattice. This cycle-free regular tree graph is used for analysis of many statistical physics, mathematics related models and potential games (Szab\u00f3 and Borsos, 2016 ###reference_b39###). It is characterized by a pre-defined lattice depth , with all nodes in the lattice having the same fixed number of neighbours, . The number of nodes at a depth away from the root node are given by: and the total nodes in the Bethe lattice are calculated as: . We use for our experiments.\nNote that it has already been formally shown in (Lacker et al., 2023 ###reference_b19###, Sections 3.6 and 7.3) that for a sequence of increasing regular trees, the empirical measure does not converge to the same limit law as the root particle, even in the weak sense.\nThis is because even as , a large proportion of the nodes are leaves, which greatly influences the behaviour of the empirical measure, as particles at different heights behave differently, and the root particle is the only particle at height zero. We have also verified this mismatch using our experiments in Section 4 ###reference_###, though we nevertheless obtain improved performance in certain regimes."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "3.2. Load Balancing Algorithms",
+ "text": "We now explain first the mean field solver that we have designed, based on our mathematical modelling from Section 2 ###reference_###, namely the MF-Random solver. Then we mention all the existing state-of-the-art finite-agent solvers which we have used for performance comparison. The different kinds of load balancing policies which we have verified on the above-mentioned topologies are given in the following:\nMF-Random (MF-R): The agent is only aware of the state of its own queue. The upper-level learned policy is a vector that gives the probability of sending jobs to the own queue or to a random other accessible queue.\nMARL-PS (NA-PS): The policy is trained, using PPO with parameter sharing (Christianos et al., 2020 ###reference_b4###), as it often gives state-of-the-art performance in various RL benchmarks (Yu et al., 2022 ###reference_b43###; Papoudakis et al., 2021 ###reference_b31###), on a smaller number of agents. The learned policy can be used for any arbitrary number of agents. It generates a continuous policy which gives the probability of either sending to your queue or randomly to one of the neighbours queues while only observing the state of your own queue, similar to MF-R.\nJoin-the-shortest-queue (JSQ): A discrete policy that sends jobs to the shortest out of all accessible queues.\nRandom (RND): A policy where the probability of sending the job to any accessible queues is equally likely.\nSend-to-own-queue (OWN): A discrete policy where the agent only sends jobs to its own queue.\nNote that the action from the above-mentioned load balancing policies is obtained at the beginning of each and then used for that entire timestep. Also note that for all the algorithms, the value for the number of accessible queues is dependent on the type of graph topology being used, and we refer to Section 3.1 ###reference_### for more details on this. In the next section, we discuss all the required parameters and chosen values for our experiments.\n###figure_2###"
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "3.3. Training",
+ "text": "For all our experiments, we consider that each agent is associated with only one queue, so . The servers work at an exponential rate of . At every decision epoch we simulate a Markov modulated arrival rate , with with transition law for switching between rates: and .\nNote that these values were chosen to depict the switching of the system between high and low traffic regimes. In principle, any reasonable values can be considered.\nWe use the well-established policy gradient based RL algorithm, proximal policy optimization (PPO) with a localized reward function which penalizes for the drops in each queue.\nThe rest of the system parameters and the PPO hyperparameters are given in Appendix.\nLastly, the number of agents and degrees in different graph topologies are fixed during training of the MF-R policies as:\n-D cyclic (CYC-1D): , ,\nCube-connected cycle (CCC): , , ,\n-D Torus grid (TORUS): , , ,\nConfiguration model (CM): , ,\nBethe lattice (BETHE): , , .\nOnce trained on these parameters, the learned policy can be evaluated on varying graph sizes without the need of retraining, as done in our experiments."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "4. Experiment Results",
+ "text": "We now present the performance comparison of the load balancing policies of Section 3.2 ###reference_### on graph topologies from Section 3.1 ###reference_###.\nFor the exact simulation of associated continuous-time Markov chains , we sample exponential waiting times of all events using the Gillespie algorithm.\nFor training, we use the same simulation horizon for each episode consisting of discrete decision epochs, and for comparability the performance is evaluated in terms of the average packet drops which is calculated as the sum over decision epochs of the average number of total packets dropped in all queues. However, note that simulating the same time span with different does provide slightly different results, due to the switching between high and low traffic regimes after each epoch. Each evaluation was repeated for simulated episodes, and error bars depict the confidence interval."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "5. Conclusion",
+ "text": "Overall, we have learned efficient algorithms for load balancing in highly local queuing systems with some degree of theoretical guarantees through sparse mean-field approximations. The approach has been positively evaluated in comparison to well-known baselines, and can scale to queueing systems of arbitrary size with arbitrary wireless communication delays.\nFuture work could attempt to quantify rates of convergence for approximate optimality guarantees of MFC policies under the framework of (Lacker et al., 2023 ###reference_b19###), or consider dynamic programming principles.\nOne could also look into the analysis of graph structures such as Bethe and regular trees, for which the current\nsparse MFC modelling does not suffice."
+ }
+ ],
+ "appendix": [
+ {
+ "section_id": "Appendix 1",
+ "parent_section_id": null,
+ "section_name": "Appendix A Appendix",
+ "text": "This Appendix contains the proofs, algorithm, hyperparameter values and additional experiments connected to the paper titled: \u201dSparse Mean Field Load Balancing in Large Localized Queueing Systems\u201d."
+ },
+ {
+ "section_id": "Appendix 2",
+ "parent_section_id": null,
+ "section_name": "Appendix B Proof",
+ "text": "We apply the framework of (Lacker et al., 2023 ###reference_b19###), which does not include actions or rewards, by including actions and rewards via separate time steps. Specifically, each time step is split in three, and we define the agent state space for each of the agents, indicating at any third decision epoch the state of its queue. After each such epoch, the agent states will contain the state of its queue together with its choice of action at times , and lastly also the number of packet drops at times . Here, is the maximum expected number of packet drops and is given by the times the maximum per-scheduler arrival rate .\nWe formally rewrite the system defined earlier, using the symbol for the state of the rewritten system instead of . As a result, each agent is endowed with a random local -valued state variable at each time . For the dynamics, we let denote the space of unordered terminating sequences (up to some maximum degree) with the discrete topology as in (Lacker et al., 2023 ###reference_b19###), and define in the following a system dynamics function returning a new state for any current local state , any current -valued neighbourhood queue fillings and i.i.d. sampled -valued noise . More precisely, we use\nwhere we define and the random noise variable as a contingency over all transitions, through a finite tuple of random variables consisting of components for (i) randomly sampling the next action according to , (ii) computing the expected lost packets , and (iii) sampling the next queue state\n.\nFor the first step, fixing any , we let\nfor all , to sample action from .\nDeferring for a moment the second step, for the new state in the third step, we use the Kolmogorov forward equation for the queue state during the epoch for any initial :\nwith unit vector ,\nwhich results from Poisson thinning, as the equivalent arrival rate at a queue of an agent currently in state with neighbourhood will be given by . Thus, we have the transition rate matrix with , for , and where for , and zero otherwise. Therefore, the next queue filling is sampled as\nLastly, for the second step we consider the non-random conditional expectation of packet losses during any epoch:\nunder the executed actions (allocations), analogously to the new state, by adding another row to the transition matrix , giving where , i.e. counting all the expected packet arrivals whenever the queue is already full ().\nWe use the same definitions for finite systems . We can verify (Lacker et al., 2023 ###reference_b19###, Assumption A), since is continuous e.g. by discreteness of the relevant spaces, and all are sampled independently and identically over agents and times. By (Lacker et al., 2023 ###reference_b19###, Theorem 3.6), the empirical distribution converges in probability in to its mean field limit , and in particular\nat any time , where denotes the space of probability measures equipped with the topology of weak convergence. Hence, the above describes the original objective by\nsince we split any time step into three by the prequel, the continuous mapping theorem, and dominated convergence, where we use the continuous function\nwhere ,\nto sum up the expected packet drops in the system, since the integrand is continuous and bounded (under the sum topology for the union in , and the product topology for products in ).\n\u220e\nDefine the non-zero (by assumption) optimality gap:\nThen, by Theorem 2.1 ###reference_theorem1###, there exists such that:\nTherefore,\nis the desired conclusion.\n\u220e"
+ },
+ {
+ "section_id": "Appendix 3",
+ "parent_section_id": null,
+ "section_name": "Appendix C Algorithm",
+ "text": "In order to solve the POMDP, we apply the established proximal policy optimization (PPO) RL method (Schulman et al., 2017 ###reference_b35###; Yu et al., 2022 ###reference_b43###) with and without recurrent policies, as commonly and successfully used in POMDP problems (Ni et al., 2022 ###reference_b30###). PPO is a policy gradient method with a clipping term in the loss function, such that the policy does not take gradient steps that are too large while learning (Schulman et al., 2017 ###reference_b35###; Yu et al., 2022 ###reference_b43###). For our experiments, we have worked with the stable and easy-to-use RLlib 1.10 implementation (Liang et al., 2018 ###reference_b22###) of PPO. The overall training code is given in Algorithm 1 ###reference_### given in , which also shows how to analogously apply a trained MFC policy to a finite system. We use diagonal Gaussian neural network policies with -activations and two hidden layers, parametrizing MFC MDP actions by values in for each entry , normalized by dividing by the sums .\nThe rest of the system parameters and the PPO hyperparameter are given here in Tables 1 ###reference_### and 2 ###reference_###, respectively."
144
+ },
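For illustration, a minimal sketch of such a PPO setup in RLlib 1.x, using the configuration keys that correspond to the hyperparameter names in Table 2; the environment id and all values are placeholders, not the paper's exact configuration:

    import ray
    from ray.rllib.agents.ppo import PPOTrainer  # RLlib 1.x import path

    ray.init()
    trainer = PPOTrainer(
        env="mfc_env",  # hypothetical registered id for the MFC MDP environment
        config={
            "gamma": 0.99,             # discount factor
            "lambda": 0.95,            # GAE lambda
            "kl_coeff": 0.2,           # KL coefficient
            "kl_target": 0.01,         # KL target
            "clip_param": 0.3,         # clip parameter
            "lr": 5e-5,                # learning rate
            "train_batch_size": 4000,
            "sgd_minibatch_size": 128,
            "num_sgd_iter": 30,
            # two hidden layers as described above; sizes and activation are placeholder choices
            "model": {"fcnet_hiddens": [256, 256], "fcnet_activation": "tanh"},
        },
    )
    for _ in range(100):  # number of training epochs (placeholder)
        trainer.train()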
145
+ {
146
+ "section_id": "Appendix 4",
147
+ "parent_section_id": null,
148
+ "section_name": "Appendix D Additional Experiments",
149
+ "text": "###figure_3### Firstly, we performed a small experiment in order to ensure that our simulator works as expected. Performance of the JSQ algorithm was evaluated for TORUS graph with , and for the JSQ algorithm. It can be seen in Fig. 6 ###reference_###(a) that on increasing there comes a point when the performance of JSQ starts to converge, which is the expected behaviour for finite systems.\nWe also tried different sizes of neural network while keeping all other parameters and environment. The training was performed for the CCC graph with and . The tested network sizes are shown in the legend of Fig. 6 ###reference_###(b), where the first value is the size of the layer and the second value is the number of layers used. The performance was quite similar for all of them, and we used the default network size for all our experiments.\nFinding more parameters could further improve the performance and will be investigated in the future.\n###figure_4### For the sake of completeness, we also compared different policies for the setup of CYC-1D graph at . First is the MF-R where the policy learned tells you the probability with which to send to your queue or randomly to one of your neighbors. Second is the MF-L where the policy learned tells you the probability with which to send to your queue or ones of your neighbors. Lastly, is the MF-NGH where a separate action is learned for every possible combination of your state and state of your neighbors. However, the neighbors are still kept anonymized by not learning a separate policy for neighbors\u2019 state and for , meaning that it does not matter which of your neighbors has queue filling or , a single policy is learned for these combinations. In Fig. 7 ###reference_### it can be seen that MF-R performs the best, and we believe this is because it has the smallest action space, so learning an optimal policy is more probable. Note that the input observation was the same for all, which is the empirical distribution of the agents\u2019 state as explained in Section 2.4.1. Hence, using any kind of action space is feasible, one just needs to consider the increase in training time and resources when the action space increases. Based on this we have used the MF-R model and policy for all our experiments.\n###figure_5### The scalable-actor-critic (SAC) (Lin et al., 2021 ###reference_b23###) method learns a discrete policy of sending to any one of the accessible queues. Each agent learns an individual policy while using as observation the agent\u2019s own queue state and also the queue states of all its neighbours. However, a trained policy cannot be used for an arbitrary number of agents since the policy of an agent is influenced by its neighbours states as well, which is not assumed to be the same for every agent.\nTo begin, while training for SAC, we observed that on increasing the number of agents, the convergence to a locally optimal policy takes longer (using one core of Intel Xeon Gold 6130), making it not too feasible for the larger setups we consider in this work. See Figure 8 ###reference_###(a) for time taken to learn a SAC policy on a -D cyclic topology with , , and same computational resources for all training setups of . Our implementation of SAC was adapted from (Li, 2021 ###reference_b21###). Furthermore, Figure 8 ###reference_###(b) shows the performance of the learned SAC policy as compared to other algorithms. 
Although performance did improve as the number of agents rises, indicating the scalability of SAC to many agents, the performance of SAC remained suboptimal. Due to these limitations, we did not investigate the SAC algorithm further in our experiments.\n###figure_6### For completeness and to illustrate generality, we also performed an experiment in which the buffer size for each queue was increased from to . To achieve faster convergence to a learned policy, we increased the arrival rate to the service rate of . We also increased the time steps from to so that the queues can be sufficiently filled, and the policy can be learned over the increased state space. Fig. 9 ###reference_###(a) shows that our learned policy outperforms the other algorithms at for TORUS graph.\nHowever, this increased the training time.\nWe also conducted experiments where we considered the servers to be heterogeneous with randomly assigned speed of fast(rate ) or slow(rate ). The workload was the same; modulating between . Fig. 9 ###reference_###(b) shows that our algorithm has comparable performance to RND and OWN while outperforming JSQ. We also compared with Shortest-expected-delay (SED), the state-of-the-art load balancing algorithm for heterogeneous servers (Selen et al., 2016 ###reference_b36###), which performs better since it makes decision based on both the neighbors\u2019 queue state information and the server speeds. We believe our model is not an ideal fit for this setting and further work needs to be done in this direction.\n###figure_7### We additionally trained our MF-R policy with recurrent neural networks in PPO, as is typical for partially-observed problems (Ni et al., 2022 ###reference_b30###), MF-RNN. In Figure 10 ###reference_###(a), we see that their effect is negligible and can even be negative. Finally, we used partially observed decentralized alternatives to the empirical distribution as input to the learned MF-R policy in the evaluation, see Figure 10 ###reference_###(b). In MF-N we use the distribution of neighbours\u2019 queue states, in MF-G we use the empirical distribution of all queues in the system, and in MF-R only the agent\u2019s own queue state information is used as a one-hot vector. Similar performances indicate that the learned policy can be executed locally without using global information."
150
+ }
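As a concrete reference point for the JSQ baseline evaluated above, a minimal sketch of join-the-shortest-queue on a graph, where each agent dispatches arrivals to the shortest queue among those it can access; names are illustrative, not the authors' implementation:

    import numpy as np

    def jsq_allocate(queue_fillings, accessible):
        """For each agent, send arrivals to the shortest accessible queue,
        breaking ties uniformly at random among the shortest queues."""
        choices = []
        for reachable in accessible:
            fillings = queue_fillings[list(reachable)]
            shortest = np.flatnonzero(fillings == fillings.min())
            choices.append(reachable[np.random.choice(shortest)])
        return np.array(choices)

    # e.g. a 1-D cycle (CYC-1D) where agent i reaches queues {i-1, i, i+1}
    N = 9
    accessible = [[(i - 1) % N, i, (i + 1) % N] for i in range(N)]
    print(jsq_allocate(np.zeros(N, dtype=int), accessible))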
151
+ ],
152
+ "tables": {
153
+ "1": {
154
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1. </span>System parameters used in the experiments.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A3.T1.24\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A3.T1.24.25.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T1.24.25.1.1\"><span class=\"ltx_text\" id=\"A3.T1.24.25.1.1.1\" style=\"font-size:80%;\">Symbol</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T1.24.25.1.2\"><span class=\"ltx_text\" id=\"A3.T1.24.25.1.2.1\" style=\"font-size:80%;\">Name</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T1.24.25.1.3\"><span class=\"ltx_text\" id=\"A3.T1.24.25.1.3.1\" style=\"font-size:80%;\">Value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T1.2.2.3\"><span class=\"ltx_text\" id=\"A3.T1.2.2.3.1\" style=\"font-size:80%;\">Synchronization delay [ms]</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T1.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.3.3.2\"><span class=\"ltx_text\" id=\"A3.T1.3.3.2.1\" style=\"font-size:80%;\">Service rate [1/ms]</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.3.3.3\"><span class=\"ltx_text\" id=\"A3.T1.3.3.3.1\" style=\"font-size:80%;\">1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.5.5.3\"><span class=\"ltx_text\" id=\"A3.T1.5.5.3.1\" style=\"font-size:80%;\">Arrival rates [1/ms]</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.5.5.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.7.7.3\"><span class=\"ltx_text\" id=\"A3.T1.7.7.3.1\" style=\"font-size:80%;\">Number of agents</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.7.7.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.9.9\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.8.8.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.9.9.3\"><span class=\"ltx_text\" id=\"A3.T1.9.9.3.1\" style=\"font-size:80%;\">Number of queues</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.9.9.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.11.11.3\"><span class=\"ltx_text\" id=\"A3.T1.11.11.3.1\" style=\"font-size:80%;\">Number of accessible queues</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.11.11.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.13.13\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.12.12.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.13.13.3\"><span class=\"ltx_text\" id=\"A3.T1.13.13.3.1\" style=\"font-size:80%;\">Monte Carlo simulations</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.13.13.2\"></td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"A3.T1.15.15\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.14.14.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.15.15.3\"><span class=\"ltx_text\" id=\"A3.T1.15.15.3.1\" style=\"font-size:80%;\">Queue buffer size</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.15.15.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.17.17\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.17.17.3\"><span class=\"ltx_text\" id=\"A3.T1.17.17.3.1\" style=\"font-size:80%;\">Queue (agent) starting state</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.17.17.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.19.19\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.18.18.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.19.19.3\"><span class=\"ltx_text\" id=\"A3.T1.19.19.3.1\" style=\"font-size:80%;\">Queue starting state distribution</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.19.19.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.21.21\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.20.20.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.21.21.3\"><span class=\"ltx_text\" id=\"A3.T1.21.21.3.1\" style=\"font-size:80%;\">Drop penalty per job</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.21.21.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.23.23\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.22.22.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.23.23.3\"><span class=\"ltx_text\" id=\"A3.T1.23.23.3.1\" style=\"font-size:80%;\">Training episode length</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T1.23.23.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T1.24.24\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T1.24.24.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T1.24.24.2\"><span class=\"ltx_text\" id=\"A3.T1.24.24.2.1\" style=\"font-size:80%;\">Graph topologies</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T1.24.24.3\">\n<span class=\"ltx_text\" id=\"A3.T1.24.24.3.1\" style=\"font-size:80%;\">Section </span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2312.12973v2#S3.SS1\" style=\"font-size:80%;\" title=\"3.1. Topologies \u2023 3. Experiment Setup \u2023 Sparse Mean Field Load Balancing in Large Localized Queueing Systems\"><span class=\"ltx_text ltx_ref_tag\">3.1</span></a>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
155
+ "capture": "Table 1. System parameters used in the experiments."
156
+ },
157
+ "2": {
158
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2. </span>Hyperparameter configuration for PPO.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A3.T2.20\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A3.T2.20.21.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T2.20.21.1.1\"><span class=\"ltx_text\" id=\"A3.T2.20.21.1.1.1\" style=\"font-size:80%;\">Symbol</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T2.20.21.1.2\"><span class=\"ltx_text\" id=\"A3.T2.20.21.1.2.1\" style=\"font-size:80%;\">Name</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A3.T2.20.21.1.3\"><span class=\"ltx_text\" id=\"A3.T2.20.21.1.3.1\" style=\"font-size:80%;\">Value</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A3.T2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T2.2.2.3\"><span class=\"ltx_text\" id=\"A3.T2.2.2.3.1\" style=\"font-size:80%;\">Discount factor</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T2.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.4.4.3\"><span class=\"ltx_text\" id=\"A3.T2.4.4.3.1\" style=\"font-size:80%;\">GAE lambda</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.6.6.3\"><span class=\"ltx_text\" id=\"A3.T2.6.6.3.1\" style=\"font-size:80%;\">KL coefficient</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.8.8.3\"><span class=\"ltx_text\" id=\"A3.T2.8.8.3.1\" style=\"font-size:80%;\">KL target</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.10.10.3\"><span class=\"ltx_text\" id=\"A3.T2.10.10.3.1\" style=\"font-size:80%;\">Clip parameter</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.12.12\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.12.12.3\"><span class=\"ltx_text\" id=\"A3.T2.12.12.3.1\" style=\"font-size:80%;\">Learning rate</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.12.12.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.14.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.14.14.3\"><span class=\"ltx_text\" id=\"A3.T2.14.14.3.1\" style=\"font-size:80%;\">Training batch size</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.14.14.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.16.16\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"A3.T2.16.16.3\"><span class=\"ltx_text\" id=\"A3.T2.16.16.3.1\" style=\"font-size:80%;\">SGD Mini batch size</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.16.16.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.18.18\">\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.18.18.3\"><span class=\"ltx_text\" id=\"A3.T2.18.18.3.1\" style=\"font-size:80%;\">Number of SGD iterations</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T2.18.18.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T2.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T2.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T2.20.20.3\"><span class=\"ltx_text\" id=\"A3.T2.20.20.3.1\" style=\"font-size:80%;\">Number of epochs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T2.20.20.2\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
159
+ "capture": "Table 2. Hyperparameter configuration for PPO."
160
+ }
161
+ },
162
+ "image_paths": {
163
+ "1": {
164
+ "figure_path": "2312.12973v2_figure_1.png",
165
+ "caption": "Figure 1. Visualization of how agents implement their policy. For instance, agent i=2\ud835\udc562i=2italic_i = 2 has neighbours j\u2208{1,3}\ud835\udc5713j\\in\\{1,3\\}italic_j \u2208 { 1 , 3 }. If its action is a2=0subscript\ud835\udc4e20a_{2}=0italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 0, it will allocate all its arriving jobs to its own queue j=2\ud835\udc572j=2italic_j = 2 (green arrow). In contrast, if a2=1subscript\ud835\udc4e21a_{2}=1italic_a start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT = 1 then the arriving jobs are allocated randomly to one of its neighbour j\ud835\udc57jitalic_j (red arrows).",
166
+ "url": "http://arxiv.org/html/2312.12973v2/x1.png"
167
+ },
168
+ "2": {
169
+ "figure_path": "2312.12973v2_figure_2.png",
170
+ "caption": "Figure 2. Performance comparison of the learned MF-R policy to NA-PS, JSQ, RND and OWN algorithms, on a CYC-1D graph,\nover a range of \u0394\u2062t\u0394\ud835\udc61\\Delta troman_\u0394 italic_ts\nis shown, with 95%percent9595\\%95 % confidence intervals depicted by error bars. The degree of each agent is d=2\ud835\udc512d=2italic_d = 2 and the number of agents (queues) used to make the graph are N\u2208{9,21,91,901,3501,5001}\ud835\udc419219190135015001N\\in\\{9,21,91,901,3501,5001\\}italic_N \u2208 { 9 , 21 , 91 , 901 , 3501 , 5001 }.",
171
+ "url": "http://arxiv.org/html/2312.12973v2/x2.png"
172
+ },
173
+ "3": {
174
+ "figure_path": "2312.12973v2_figure_3.png",
175
+ "caption": "Figure 3. Performance of the MF-R policy for increasingly large CYC-1D graphs. The red horizontal line indicates the evaluated episode return of the learned MF-R policy during training on N=101\ud835\udc41101N=101italic_N = 101, (MF-MFC). Shaded regions depict the 95%percent9595\\%95 % confidence intervals.",
176
+ "url": "http://arxiv.org/html/2312.12973v2/x3.png"
177
+ },
178
+ "4": {
179
+ "figure_path": "2312.12973v2_figure_4.png",
180
+ "caption": "Figure 4. Performance over \u0394\u2062t\u2208{1,2,\u2026,10}\u0394\ud835\udc6112\u202610\\Delta t\\in\\{1,2,\\ldots,10\\}roman_\u0394 italic_t \u2208 { 1 , 2 , \u2026 , 10 } for large sparse graphs with underlying topologies of CCC in (a), TORUS in (b) and CM in (c). The number of nodes used to generate the graphs is given at the top of each subfigure.",
181
+ "url": "http://arxiv.org/html/2312.12973v2/x4.png"
182
+ },
183
+ "5": {
184
+ "figure_path": "2312.12973v2_figure_5.png",
185
+ "caption": "Figure 5. Performance comparison on large-sized Bethe lattice graph. The MF-R can be worse than RND due to violation of modelling assumptions.",
186
+ "url": "http://arxiv.org/html/2312.12973v2/x5.png"
187
+ },
188
+ "6": {
189
+ "figure_path": "2312.12973v2_figure_6.png",
190
+ "caption": "Figure 6. (a) JSQ converges as \u0394\u2062t\u0394\ud835\udc61\\Delta troman_\u0394 italic_t increases, validating our simulator. (b) Performance evaluation on a fixed environment while changing only the neural network parameters.",
191
+ "url": "http://arxiv.org/html/2312.12973v2/x6.png"
192
+ },
193
+ "7": {
194
+ "figure_path": "2312.12973v2_figure_7.png",
195
+ "caption": "Figure 7. MF-R is with symmetric actions, MF-L is learning actions for each neighbor separately but without knowing the exact state of the neighbor, MF-NGH is learning the action for each possible non-repeating combination of states for agents\u2019 own state and states of the neighbors. This experiment is done for the CYC-1D graph.",
196
+ "url": "http://arxiv.org/html/2312.12973v2/x7.png"
197
+ },
198
+ "8": {
199
+ "figure_path": "2312.12973v2_figure_8.png",
200
+ "caption": "Figure 8. (a): SAC training time to convergence increases with the number of agents, making it difficult to train and use for larger setups. (b): SAC performs worse than our proposed MF-R policy for a number of agents between 5555 and 101101101101.",
201
+ "url": "http://arxiv.org/html/2312.12973v2/x8.png"
202
+ },
203
+ "9": {
204
+ "figure_path": "2312.12973v2_figure_9.png",
205
+ "caption": "Figure 9. (a) Performance comparison for increased buffer capacity of each queue to 20202020. The system utilization was increased to ensure occurrence of packet drops in a limited amount of time. (b) Performance comparison of MF-R when the servers are considered to be heterogeneous. Additional comparison was done with the Shortest-expected-delay algorithm (Selen et al., 2016), which is the state-of-the-art when the servers are heterogeneous.",
206
+ "url": "http://arxiv.org/html/2312.12973v2/x9.png"
207
+ },
208
+ "10": {
209
+ "figure_path": "2312.12973v2_figure_10.png",
210
+ "caption": "Figure 10. (a): The training for TORUS with and without RNN policies converges almost to the same return. (b): Evaluation of the learned policy using different observations. MF-R uses only own queue state, MF-N additionally uses neighbour queue states, and MF-G uses state of all queues in the system.",
211
+ "url": "http://arxiv.org/html/2312.12973v2/x10.png"
212
+ }
213
+ },
214
+ "validation": true,
215
+ "references": [
216
+ {
217
+ "1": {
218
+ "title": "Mean field games and mean field type control theory. Vol. 101.",
219
+ "author": "Alain Bensoussan, Jens Frehse, Phillip Yam, et al. 2013.",
220
+ "venue": "Springer.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "2": {
226
+ "title": "Graphon mean field games and their equations.",
227
+ "author": "Peter E Caines and Minyi Huang. 2021.",
228
+ "venue": "SIAM Journal on Control and Optimization 59, 6 (2021), 4373\u20134399.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "3": {
234
+ "title": "Shared experience actor-critic for multi-agent reinforcement learning.",
235
+ "author": "Filippos Christianos, Lukas Sch\u00e4fer, and Stefano Albrecht. 2020.",
236
+ "venue": "Advances in Neural Information Processing Systems 33 (2020), 10707\u201310717.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "4": {
242
+ "title": "Multi-agent Reinforcement Learning for Networked System Control. In 8th International Conference on Learning Representations. OpenReview.net, Addis Ababa, Ethiopia, April 26-30, 2020, 1\u201317.",
243
+ "author": "Tianshu Chu, Sandeep Chinchali, and Sachin Katti. 2020.",
244
+ "venue": "",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "5": {
250
+ "title": "Learning Graphon Mean Field Games and Approximate Nash Equilibria. In The 10th International Conference on Learning Representations. OpenReview.net, 1\u201331.",
251
+ "author": "Kai Cui and Heinz Koeppl. 2022.",
252
+ "venue": "",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "6": {
258
+ "title": "A Survey on Large-Population Systems and Scalable Multi-Agent Reinforcement Learning.",
259
+ "author": "Kai Cui, Anam Tahir, Gizem Ekinci, Ahmed Elshamanhory, Yannick Eich, Mengguang Li, and Heinz Koeppl. 2022.",
260
+ "venue": "arXiv preprint arXiv:2209.03859 (2022), 1\u201321.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "7": {
266
+ "title": "Balancing queues by mean field interaction.",
267
+ "author": "Donald A Dawson, Jiashan Tang, and Yiqiang Q Zhao. 2005.",
268
+ "venue": "Queueing Systems 49, 3 (2005), 335\u2013361.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "8": {
274
+ "title": "Scalable load balancing in networked systems: A survey of recent advances.",
275
+ "author": "Mark Van der Boor, Sem C Borst, Johan SH Van Leeuwaarden, and Debankur Mukherjee. 2022.",
276
+ "venue": "SIAM Rev. 64, 3 (2022), 554\u2013622.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "9": {
282
+ "title": "Geometric mapping of tasks to processors on parallel computers with mesh or torus networks.",
283
+ "author": "Mehmet Deveci, Karen D Devine, Kevin Pedretti, Mark A Taylor, Sivasankaran Rajamanickam, and \u00dcmit V \u00c7ataly\u00fcrek. 2019.",
284
+ "venue": "IEEE Transactions on Parallel and Distributed Systems 30, 9 (2019), 2018\u20132032.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "10": {
290
+ "title": "Learning Sparse Graphon Mean Field Games.",
291
+ "author": "Christian Fabian, Kai Cui, and Heinz Koeppl. 2022.",
292
+ "venue": "arXiv preprint arXiv:2209.03880 (2022), 1\u201332.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "11": {
298
+ "title": "The Markov-modulated Poisson process (MMPP) cookbook.",
299
+ "author": "Wolfgang Fischer and Kathleen Meier-Hellstern. 1993.",
300
+ "venue": "Performance evaluation 18, 2 (1993), 149\u2013171.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "12": {
306
+ "title": "Configuring random graph models with fixed degree sequences.",
307
+ "author": "Bailey K Fosdick, Daniel B Larremore, Joel Nishimura, and Johan Ugander. 2018.",
308
+ "venue": "SIAM Rev. 60, 2 (2018), 315\u2013355.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "13": {
314
+ "title": "The power of two choices on graphs: The pair-approximation is accurate?",
315
+ "author": "Nicolas Gast. 2015.",
316
+ "venue": "ACM SIGMETRICS Performance Evaluation Review 43, 2 (2015), 69\u201371.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "14": {
322
+ "title": "Review on Dec-POMDP Model for MARL Algorithms.",
323
+ "author": "Shen Guicheng and Wang Yang. 2022.",
324
+ "venue": "In Smart Communications, Intelligent Algorithms and Interactive Methods. Springer, 29\u201335.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "15": {
330
+ "title": "Fault-tolerant routing methodology for hypercube and cube-connected cycles interconnection networks.",
331
+ "author": "Hossein Habibian and Ahmad Patooghy. 2017.",
332
+ "venue": "The Journal of Supercomputing 73, 10 (2017), 4560\u20134579.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "16": {
338
+ "title": "Thinning and the law of small numbers. In IEEE International Symposium on Information Theory. IEEE, 1491\u20131495.",
339
+ "author": "Peter Harremo\u00ebs, Oliver Johnson, and Ioannis Kontoyiannis. 2007.",
340
+ "venue": "",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "17": {
346
+ "title": "Graphon Mean-Field Control for Cooperative Multi-Agent Reinforcement Learning.",
347
+ "author": "Yuanquan Hu, Xiaoli Wei, Junji Yan, and Hengxi Zhang. 2022.",
348
+ "venue": "arXiv preprint arXiv:2209.04808 (2022), 1\u201325.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "18": {
354
+ "title": "Local weak convergence for sparse networks of interacting processes.",
355
+ "author": "Daniel Lacker, Kavita Ramanan, and Ruoyu Wu. 2023.",
356
+ "venue": "The Annals of Applied Probability 33, 2 (2023), 843\u2013888.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "19": {
362
+ "title": "Mean field games.",
363
+ "author": "Jean-Michel Lasry and Pierre-Louis Lions. 2007.",
364
+ "venue": "Japanese Journal of Mathematics 2, 1 (2007), 229\u2013260.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "20": {
370
+ "title": "Networked-MARL.",
371
+ "author": "Yiheng Li. 2021.",
372
+ "venue": "https://github.com/yihenglin97/Networked-MARL. (2021).",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "21": {
378
+ "title": "RLlib: Abstractions for distributed reinforcement learning. In International Conference on Machine Learning. PMLR, 3053\u20133062.",
379
+ "author": "Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph Gonzalez, Michael Jordan, and Ion Stoica. 2018.",
380
+ "venue": "",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "22": {
386
+ "title": "Multi-agent reinforcement learning in stochastic networked systems.",
387
+ "author": "Yiheng Lin, Guannan Qu, Longbo Huang, and Adam Wierman. 2021.",
388
+ "venue": "Advances in Neural Information Processing Systems 34 (2021), 7825\u20137837.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "23": {
394
+ "title": "Open problem\u2014load balancing using delayed information.",
395
+ "author": "David Lipshutz. 2019.",
396
+ "venue": "Stochastic Systems 9, 3 (2019), 305\u2013306.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "24": {
402
+ "title": "Large networks and graph limits. Vol. 60.",
403
+ "author": "L\u00e1szl\u00f3 Lov\u00e1sz. 2012.",
404
+ "venue": "American Mathematical Society.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "25": {
410
+ "title": "Join-Idle-Queue: A novel load balancing algorithm for dynamically scalable web services.",
411
+ "author": "Yi Lu, Qiaomin Xie, Gabriel Kliot, Alan Geller, James R Larus, and Albert Greenberg. 2011.",
412
+ "venue": "Performance Evaluation 68, 11 (2011), 1056\u20131071.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "26": {
418
+ "title": "Load balancing in cloud computing: a big picture.",
419
+ "author": "Sambit Kumar Mishra, Bibhudatta Sahoo, and Priti Paramita Parida. 2020.",
420
+ "venue": "Journal of King Saud University-Computer and Information Sciences 32, 2 (2020), 149\u2013158.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "27": {
426
+ "title": "The power of two choices in randomized load balancing.",
427
+ "author": "Michael Mitzenmacher. 2001.",
428
+ "venue": "IEEE Transactions on Parallel and Distributed Systems 12, 10 (2001), 1094\u20131104.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "28": {
434
+ "title": "Asymptotically optimal load balancing topologies.",
435
+ "author": "Debankur Mukherjee, Sem C Borst, and Johan SH Van Leeuwaarden. 2018.",
436
+ "venue": "Proceedings of the ACM on Measurement and Analysis of Computing Systems 2, 1 (2018), 1\u201329.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "29": {
442
+ "title": "Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs. In Proceedings of the 39th International Conference on Machine Learning, Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato (Eds.), Vol. 162. PMLR, 16691\u201316723.",
443
+ "author": "Tianwei Ni, Benjamin Eysenbach, and Ruslan Salakhutdinov. 2022.",
444
+ "venue": "",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "30": {
450
+ "title": "Benchmarking multi-agent deep reinforcement learning algorithms in cooperative tasks. In Proceedings NeurIPS Datasets and Benchmarks. The MIT Press, 15\u201319.",
451
+ "author": "Georgios Papoudakis, Filippos Christianos, Lukas Sch\u00e4fer, and Stefano V Albrecht. 2021.",
452
+ "venue": "",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "31": {
458
+ "title": "Bellman equation and viscosity solutions for mean-field stochastic control problem.",
459
+ "author": "Huy\u00ean Pham and Xiaoli Wei. 2018.",
460
+ "venue": "ESAIM: Control, Optimisation and Calculus of Variations 24, 1 (2018), 437\u2013461.",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "32": {
466
+ "title": "Cloud-RAN modeling based on parallel processing.",
467
+ "author": "Veronica Quintuna Rodriguez and Fabrice Guillemin. 2018.",
468
+ "venue": "IEEE Journal on Selected Areas in Communications 36, 3 (2018), 457\u2013468.",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "33": {
474
+ "title": "Mean-field analysis for load balancing on spatial graphs.",
475
+ "author": "Daan Rutten and Debankur Mukherjee. 2023.",
476
+ "venue": "arXiv preprint arXiv:2301.03493 (2023), 1\u201327.",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "34": {
482
+ "title": "Proximal policy optimization algorithms.",
483
+ "author": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017.",
484
+ "venue": "arXiv preprint arXiv:1707.06347 (2017), 1\u201312.",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "35": {
490
+ "title": "Steady-state analysis of shortest expected delay routing.",
491
+ "author": "Jori Selen, Ivo Adan, Stella Kapodistria, and Johan van Leeuwaarden. 2016.",
492
+ "venue": "Queueing Systems 84, 3 (2016), 309\u2013354.",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "36": {
498
+ "title": "Fundamentals of queueing theory. Vol. 399.",
499
+ "author": "John F Shortle, James M Thompson, Donald Gross, and Carl M Harris. 2018.",
500
+ "venue": "John Wiley & Sons.",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "37": {
506
+ "title": "Reinforcement learning: An introduction.",
507
+ "author": "Richard S Sutton and Andrew G Barto. 2018.",
508
+ "venue": "MIT press.",
509
+ "url": null
510
+ }
511
+ },
512
+ {
513
+ "38": {
514
+ "title": "Evolutionary potential games on lattices.",
515
+ "author": "Gy\u00f6rgy Szab\u00f3 and Istv\u00e1n Borsos. 2016.",
516
+ "venue": "Physics Reports 624 (2016), 1\u201360.",
517
+ "url": null
518
+ }
519
+ },
520
+ {
521
+ "39": {
522
+ "title": "Learning mean-field control for delayed information load balancing in large queuing systems. In Proceedings of the 51st International Conference on Parallel Processing. ACM New York, NY, USA, 1\u201311.",
523
+ "author": "Anam Tahir, Kai Cui, and Heinz Koeppl. 2022.",
524
+ "venue": "",
525
+ "url": null
526
+ }
527
+ },
528
+ {
529
+ "40": {
530
+ "title": "Performance and scaling of parallel systems with blocking start and/or departure barriers. In IEEE Conference on Computer Communications. IEEE, 460\u2013469.",
531
+ "author": "Brenton Walker, Stefan Bora, and Markus Fidler. 2022.",
532
+ "venue": "",
533
+ "url": null
534
+ }
535
+ },
536
+ {
537
+ "41": {
538
+ "title": "Optimality of the shortest line discipline.",
539
+ "author": "Wayne Winston. 1977.",
540
+ "venue": "Journal of Applied Probability 14, 1 (1977), 181\u2013189.",
541
+ "url": null
542
+ }
543
+ },
544
+ {
545
+ "42": {
546
+ "title": "The surprising effectiveness of PPO in cooperative, multi-agent games.",
547
+ "author": "Chao Yu, Akash Velu, Eugene Vinitsky, Yu Wang, Alexandre Bayen, and Yi Wu. 2022.",
548
+ "venue": "Proceedings NeurIPS Datasets and Benchmarks (2022), 1\u201314.",
549
+ "url": null
550
+ }
551
+ },
552
+ {
553
+ "43": {
554
+ "title": "Multi-agent reinforcement learning: A selective overview of theories and algorithms.",
555
+ "author": "Kaiqing Zhang, Zhuoran Yang, and Tamer Ba\u015far. 2021.",
556
+ "venue": "Handbook of Reinforcement Learning and Control (2021), 321\u2013384.",
557
+ "url": null
558
+ }
559
+ },
560
+ {
561
+ "44": {
562
+ "title": "Asymptotically optimal load balancing in large-scale heterogeneous systems with multiple dispatchers.",
563
+ "author": "Xingyu Zhou, Ness Shroff, and Adam Wierman. 2021.",
564
+ "venue": "Performance Evaluation 145 (2021), 102146.",
565
+ "url": null
566
+ }
567
+ }
568
+ ],
569
+ "url": "http://arxiv.org/html/2312.12973v2"
570
+ }
20240322/2312.17543v2.json ADDED
@@ -0,0 +1,704 @@
1
+ {
2
+ "title": "Building Efficient Universal Classifiers with Natural Language Inference",
3
+ "abstract": "Generative Large Language Models (LLMs) have become the mainstream choice for fewshot and zeroshot learning thanks to the universality of text generation. Many users, however, do not need the broad capabilities of generative LLMs when they only want to automate a classification task. Smaller BERT-like models can also learn universal tasks, which allow them to do any text classification task without requiring fine-tuning (zeroshot classification) or to learn new tasks with only a few examples (fewshot), while being significantly more efficient than generative LLMs. This paper (1) explains how Natural Language Inference (NLI) can be used as a universal classification task that follows similar principles as instruction fine-tuning of generative LLMs, (2) provides a step-by-step guide with reusable Jupyter notebooks for building a universal classifier,111https://github.com/MoritzLaurer/zeroshot-classifier and (3) shares the resulting universal classifier that is trained on 33 datasets with 389 diverse classes. Parts of the code we share has been used to train our older zeroshot classifiers that have been downloaded more than 55 million times via the Hugging Face Hub as of December 2023. Our new classifier improves zeroshot performance by 9.4%.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Over the past year, generative models have taken both academia and public attention by storm. The main appeal of text generation is that it is so universal, that almost any other text-related task can be reformulated as a text generation task (Radford et al., 2019 ###reference_b40###; Raffel et al., 2020 ###reference_b41###). Especially when text generators are massively scaled up and tuned on human instructions, they acquire impressive capabilities to generalise to new tasks without requiring task-specific fine-tuning (Sanh et al., 2022 ###reference_b44###; Ouyang et al., 2022 ###reference_b35###; Chung et al., 2022 ###reference_b10###; OpenAI, 2023 ###reference_b34###; Touvron et al., 2023 ###reference_b55###). Since the utility of these generative Large Language Models (LLMs) has become evident, large amounts of intellectual, financial and energy resources are being invested in improving and scaling generative LLMs.\nGiven that the resource requirements for training and deploying generative LLMs are prohibitive for many researchers and practitioners, this paper investigates other types of universal models, that make a different trade-off between resource requirements and universality. The literature has developed several other universal tasks that cannot solve generative tasks (summarization, translation etc.), but can solve any classification task with smaller size and performance competitive with generative LLMs Xu et al. (2023 ###reference_b62###); Schick and Sch\u00fctze (2021b ###reference_b48###).\nThe principle of universal classifiers is similar to generative models: A model is trained on a universal task, and a form of instruction or prompt enable it to generalize to unseen classification tasks. While several efficient approaches to universal classification exist (Schick and Sch\u00fctze, 2021a ###reference_b47###; Xia et al., 2022 ###reference_b61###; Yao et al., 2022 ###reference_b63###; Xu et al., 2023 ###reference_b62###; Bragg et al., 2021 ###reference_b5###; Ma et al., 2021 ###reference_b28###; Sun et al., 2022 ###reference_b52###), this paper focuses on guidance for one approach: Natural Language Inference. Several papers have used the universal NLI task for zero- and fewshot classification, but stopped short of mixing NLI data with multiple other non-NLI datasets to build more universal classifiers (Yin et al., 2019 ###reference_b64###, 2020 ###reference_b66###; Wang et al., 2021 ###reference_b58###; Laurer et al., 2023a ###reference_b22###).\nThe main contribution of this paper are: (1) easy-to-use universal classifiers trained on 5 NLI datasets and 28 non-NLI datasets with 389 diverse classes, improving zeroshot performance by 9.4% compared to NLI-only models; (2) a step-by-step guide with Juypter notebooks enabling users to train and adapt their own universal classifiers."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "NLI as a universal task",
15
+ "text": "The Natural Language Inference (NLI) task222An older but more expressive name for the task is RTE, Recognising Textual Entailment (Dagan et al., 2006 ###reference_b13###) is defined as recognising if the meaning of one text (the hypothesis) is entailed in another text (the premise). For example, the hypothesis \u201cThe EU is not trustworthy\" is entailed in the premise \u201cThe EU has betrayed its partners during the negotiations on Sunday\". To create NLI datasets, workers are presented with a text (the premise) and are tasked with writing a hypothesis that is either clearly true given the premise (entailment), clearly false given the premise (contradiction), or that might be true or false but is not clearly entailed or a contradiction (neutral). Several large scale NLI datasets with hundreds of thousands of unique hypothesis-premise pairs for these three classes have been created by crowd workers or language models (Bowman et al., 2015 ###reference_b4###; Williams et al., 2018 ###reference_b59###; Conneau et al., 2018 ###reference_b12###; Nie et al., 2020 ###reference_b33###; Parrish et al., 2021 ###reference_b37###; Liu et al., 2022 ###reference_b25###). For simplicity and to increase universality, the task can be simplified into a binary entailment vs. not-entailment task by merging the \u2018neutral\u2019 and \u2018contradiction\u2019 labels (Yin et al., 2021 ###reference_b65###).\n###figure_1### This binary NLI task is universal, because any text classification task can be reformulated into this entailment vs. not-entailment decision through label verbalisation (see figure 1 ###reference_###). Take topic classification as an example. The task could be to determine if the text \u201cWe need to raise tariffs\" belongs to the topic \u201ceconomy\" or \u201cwelfare\". From an NLI perspective, we can interpret the text \u201cWe need to raise tariffs\" as the premise and verbalise the topic labels in two topic hypotheses: \u201cThis text is about economy\" and \u201cThis text is about welfare\". The classification task reformulated as an NLI task then consists of determining which of the two topic hypotheses is more entailed in the text of interest (premise). In different words: Which hypothesis is more consistent with the text of interest?\nA model fine-tuned on NLI data (e.g. \u201cBERT-NLI\") can then be used to test any hypothesis formulated by a human against any text of interest (premise). For each individual hypothesis-premise pair, an NLI models will output a probability for entailment and not-entailment. To choose the most probable topic, we can select the hypothesis with the highest entailment score. Following the same procedure, any other text classification task can be reformulated as an NLI task, from stance detection, sentiment classification to factuality classification (see figure 1 ###reference_###). Any class can be verbalised as a hypothesis (similar to the prompt of a generative LLM) and can then be tested against any text of interest.333Note that an NLI model will always only do one task (NLI) just like a GPT model can only predict the next token. These tasks are universal because any other specific task can be reformatted into these more general tasks.\nThe main disadvantage of NLI for universal classification is that it requires a separate prediction for each of N class hypotheses, creating computational overhead for tasks with many classes. 
The main advantage is that identifying a new class only requires verbalising it as a hypothesis and passing it to an NLI model without the need of fine-tuning a new task-specific model from scratch (zeroshot classification). The most prominent implementation of this approach is probably the Hugging Face ZeroShotClassificationPipeline (see figure 2 ###reference_###) which uses this NLI-based approach under the hood (Wolf et al., 2020 ###reference_b60###).444https://huggingface.co/docs/transformers/v4.21.2/en/main_classes/pipelines##transformers.ZeroShotClassificationPipeline ###reference_4.21.2/en/main_classes/pipelines##transformers.ZeroShotClassificationPipeline### The models created in the paper are designed to be directly compatible with this pipeline."
16
+ },
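To make the selection step concrete, a minimal sketch of scoring class hypotheses with an NLI model and picking the most entailed one; the checkpoint name is one of the models released with this paper, and label 0 is "entailment" in these models:

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_name = "MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)

    premise = "We need to raise tariffs"
    hypotheses = ["This text is about economy", "This text is about welfare"]

    # score every class hypothesis against the same premise in one batch
    inputs = tokenizer([premise] * len(hypotheses), hypotheses,
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # label 0 is "entailment"; take its probability per hypothesis
    entail_prob = torch.softmax(logits, dim=-1)[:, 0]
    print(hypotheses[int(entail_prob.argmax())])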
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "A guide to building a universal classifier",
21
+ "text": "In this guide we explain how this type of universal classifier is built. Each step is accompanied by a Jupyter notebook available on GitHub that implements each step end-to-end.555https://github.com/MoritzLaurer/zeroshot-classifier ###reference_lassifier### The main steps are:\nDataset preprocessing and harmonization\nAutomatic data cleaning (optional)\nHypothesis formulation and formatting\nTraining and evaluation\nVisualisation of results\nGuidance for using the resulting model is provided in section 4."
22
+ },
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Data selection, preprocessing and harmonization",
27
+ "text": "We use two main types of data to train our universal classifier: Five NLI datasets and 28 other classification datasets."
28
+ },
29
+ {
30
+ "section_id": "3.1.x",
31
+ "parent_section_id": "3.1",
32
+ "section_name": "data-harmonization-nli.ipynb",
33
+ "text": "First, we use a set of established NLI datasets: MNLI (Williams et al., 2018 ###reference_b59###), ANLI, FEVER-NLI (Nie et al., 2020 ###reference_b33###), WANLI (Liu et al., 2022 ###reference_b25###), Ling-NLI (Parrish et al., 2021 ###reference_b37###).666We exclude the large SNLI datasets Bowman et al. (2015 ###reference_b4###) due to known issues of data quality. Each dataset contains tens of thousands of unique hypothesis-premise pairs classified into one of the three classes \u201centailment\", \u201cneutral\", \u201ccontradiction\". We merge the \u201cneutral\" and \u201ccontradiction\" class into one \u201cnot-entailment\" class to obtain the universal binary format. As figure 1 ###reference_### shows, only the probabilities for the \u201centailment\" class are relevant for universal classification. We merge all five NLI datasets into one harmonized dataset with three columns: \u201cpremise\", \u201chypothesis\", \u201clabel\".\nThe resulting merged ~885000 hypothesis-premise pairs would be enough to train a decent NLI model capable of zeroshot classification. The NLI datasets were, however, not created with zeroshot classification in mind. Crowd workers were instructed to write hypotheses that are entailed, contradictory or neutral towards a text, which led to a wide range of hypothesis-premise pairs. They were not specifically instructed to create data for typical classification tasks such as identifying topics, sentiment, stances, emotions, toxicity, factuality etc. which users might be interested in in practice (e.g. \u201cThis text is about topic X\"). To improve performance on these types of tasks, we therefore add a second collection of standard non-NLI classification datasets reformatted into the NLI format."
34
+ },
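A condensed sketch of this harmonization step with the Hugging Face datasets library, shown here for MNLI only; the other four NLI datasets are processed analogously (label conventions may differ per dataset):

    from datasets import load_dataset, concatenate_datasets

    def binarize(example):
        # MNLI-style labels: 0=entailment, 1=neutral, 2=contradiction
        example["label"] = 0 if example["label"] == 0 else 1  # 1 = not-entailment
        return example

    mnli = load_dataset("multi_nli", split="train")
    mnli = mnli.map(binarize).select_columns(["premise", "hypothesis", "label"])
    # ... harmonize ANLI, FEVER-NLI, WANLI and Ling-NLI the same way, then:
    nli_train = concatenate_datasets([mnli])  # add the other harmonized sets here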
35
+ {
36
+ "section_id": "3.1.x",
37
+ "parent_section_id": "3.1",
38
+ "section_name": "data-harmonization-huggingface.ipynb",
39
+ "text": "We choose 28 popular non-NLI datasets with diverse classification tasks linked to sentiment, emotions, intent, toxicity, bias, topics, factuality, spam etc. with 387 classes in total. We selected most datasets based on their popularity (downloads) on the Hugging Face Hub. We also add some non-NLI datasets that are not available on the Hugging Face hub and create separate preprocessing notebooks for each of them (e.g. 1-data-harmonization-manifesto.ipynb). The full list of datasets with information on tasks, licenses and data quality is available in our dataset overview file.777https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/v1_human_data/datasets_overview.csv ###reference_lassifier/blob/main/v1_human_data/datasets_overview.csv###\nFor creating this kind of collection, we strongly recommend manually inspecting each dataset and the corresponding paper to understand data quality and the underlying task. Depending on the datasets, the preprocessing steps can include: removing NAs, deduplication, downsampling majority classes, merging texts (e.g. titles with text bodies), converting continuous labels into simpler classes (e.g. star ratings to binary sentiment classes), removing texts with low certainty or annotator agreement, splitting datasets with multiple implicit tasks into separate tasks, removing and renaming columns, and splitting the data into a 80-20 train-test split if no test-set exists. As a result of these steps, each processed dataset only has three harmonized column: \u201ctext\", \u201clabel_text\" (a word expressing the meaning of each class), and \u201clabel_standard\" (a number for each class).\nIf readers want to improve the classifier on a specific domain or a family of other tasks, they can add their datasets during this step."
40
+ },
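As an illustration of the target format, a sketch that maps a generic dataset onto the three harmonized columns and creates the 80-20 split; the dataset id and label names are placeholders:

    from datasets import load_dataset

    ds = load_dataset("my_dataset", split="train")   # placeholder dataset id
    ds = ds.rename_column("label", "label_standard")
    label_names = {0: "negative", 1: "positive"}     # example class verbalisation
    ds = ds.map(lambda x: {"label_text": label_names[x["label_standard"]]})
    ds = ds.select_columns(["text", "label_text", "label_standard"])
    splits = ds.train_test_split(test_size=0.2, seed=42)  # if no test set exists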
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "Automatic data cleaning",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "3.2.x",
49
+ "parent_section_id": "3.2",
50
+ "section_name": "data-cleaning.ipynb",
51
+ "text": "Manual inspection of the non-NLI datasets reveal relevant quality issues in many datasets. We therefore use the CleanLab library to remove texts with a high probability of noise.888https://github.com/cleanlab/cleanlab ###reference_### CleanLab provides automated means for identifying noisy labels by embedding texts with a SentenceBERT model, training a simple logistic classifier on these embeddings and analysing prediction uncertainty and prediction overlaps between classes.\nTwo relevant limitations of this process are that it can disproportionately remove minority classes and it probably does not work well for very complex tasks. We therefore applied this automatic approach to 25 tasks, but not to complex tasks like NLI or factuality detection. This process removes roughly 17% (or ~135 000) texts with probable misclassifications or label overlaps. We highly recommend readers to inspect our cleaning notebook to get a feeling for the amount of noise that is still present in established datasets.\nAs an additional measure to increase data quality and diversity in the following script, we also radically downsample data for each non-NLI dataset. We only take a sample of maximum 500 texts per class and maximum 5000 texts per dataset to avoid overfitting to a specific large dataset. This leads to 51731 non-NLI texts (down from more than one million texts) that will be merged with the ~885000 NLI texts in the following step. We could have added hundreds of thousands of additional texts, but our experience indicates that data diversity and quality is more important than quantity. Moreover, our objective is not to build a classifier that beats (and overfits to) a benchmark, but to build a classifier that generalizes well."
52
+ },
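A minimal sketch of the cleaning step described above, assuming texts and labels are Python lists; the embedding model named here is an example choice, not necessarily the one used in the notebook:

    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from cleanlab.filter import find_label_issues

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model
    X = embedder.encode(texts)  # texts: list[str], labels: list[int] (assumed given)
    # out-of-sample predicted probabilities from a simple logistic classifier
    pred_probs = cross_val_predict(LogisticRegression(max_iter=1000), X, labels,
                                   cv=5, method="predict_proba")
    issue_idx = find_label_issues(labels=labels, pred_probs=pred_probs,
                                  return_indices_ranked_by="self_confidence")
    keep = set(range(len(texts))) - set(issue_idx)
    texts_clean = [texts[i] for i in sorted(keep)]
    labels_clean = [labels[i] for i in sorted(keep)]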
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Hypothesis formulation and NLI formatting",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "3.3.x",
61
+ "parent_section_id": "3.3",
62
+ "section_name": "data-formatting-universal-nli.ipynb",
63
+ "text": "We now need to transform the (cleaned) non-NLI datasets into the universal NLI format. First, we need to verbalise each class as a class hypothesis. For this label verbalisation step we read the underlying paper or annotator instructions for each dataset and express them as a class hypothesis. For a binary sentiment classification task on app reviews, for example, the hypotheses could be \u201cThis app review text expresses positive sentiment\" and \u201cThis app review text expresses negative sentiment\". We add information on the domain or type of dataset (\u201capp review text\") in some hypotheses, to help the model differentiate between texts from the same task type (e.g. binary sentiment classification) that come from different domains or datasets (e.g. app reviews vs. movie reviews vs. product reviews). This helps reduce negative transfer risks across datasets. As a general rule, we try to formulate the hypotheses in simple every-day language and avoid complex academic definitions, thinking of the model a bit like a simple crowd worker. Each class hypothesis is linked to its corresponding class label in a dictionary. All our hypotheses are available in 3-data-formatting-universal-nli.ipynb.999Research indicates that providing multiple different instructions (hypotheses) for the same class can help increase generalisation (Sanh et al., 2022 ###reference_b44###).\nFor each row in each non-NLI training dataset we now add a new \u201chypothesis\" column with the correct class hypothesis corresponding to the respective text. Moreover, in a new \u201clabel\" column, these text-hypothesis pairs receive the label \u201c0\" for \u201centailment\". We then multiply each text by two and pair the copied text with a random incorrect class hypothesis and the label \u201c1\" for \u201cnot-entailment\". This multiplication ensures that the model does not only learn that class hypotheses are always true and it functions as a form of data augmentation. When we rename the \u201ctext\" column to \u201cpremise\", this dataset now has exactly the same format as the NLI dataset with the columns \u201cpremise\", \u201chypothesis\", \u201clabel\" for binary entailment vs. not-entailment classification. This conversion is implemented in the function format_nli_trainset. We can now simply concatenate the non-NLI and the NLI training data.\nThe non-NLI test data needs to be formatted slightly differently. During test-time, all class hypotheses for a task need to be tested on each text to select the \u201cmost entailed\" hypothesis. This means that we need to multiply each test text by N for N classes, pairing the text with all N possible class hypotheses in N rows. This conversion is implemented in the function format_nli_testset. After this task-specific multiplication, these test sets cannot be concatenated and they need to be evaluated separately."
64
+ },
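The two conversions can be sketched as follows; the function names match the notebooks, but the exact signatures here are illustrative:

    import random

    hypo_map = {  # class label -> verbalised class hypothesis
        "positive": "This app review text expresses positive sentiment",
        "negative": "This app review text expresses negative sentiment",
    }

    def format_nli_trainset(texts, label_texts, hypo_map, seed=42):
        rng = random.Random(seed)
        rows = []
        for text, label in zip(texts, label_texts):
            # true hypothesis -> label 0 (entailment)
            rows.append({"premise": text, "hypothesis": hypo_map[label], "label": 0})
            # one random wrong hypothesis -> label 1 (not-entailment)
            wrong = rng.choice([h for l, h in hypo_map.items() if l != label])
            rows.append({"premise": text, "hypothesis": wrong, "label": 1})
        return rows

    def format_nli_testset(texts, hypo_map):
        # pair every test text with all N class hypotheses (N rows per text)
        return [{"premise": t, "hypothesis": h}
                for t in texts for h in hypo_map.values()]

    rows = format_nli_trainset(["I love this app"], ["positive"], hypo_map)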
65
+ {
66
+ "section_id": "3.4",
67
+ "parent_section_id": "3",
68
+ "section_name": "Training and evaluation",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "3.4.x",
73
+ "parent_section_id": "3.4",
74
+ "section_name": "train-eval.ipynb",
75
+ "text": "With the data fully cleaned and formatted, we can now start training. We can use any pre-trained transformer model as the foundation. Since the only purpose of the model is classification, we discard models with a decoder such as T5 or Llama-2 Raffel et al. (2020 ###reference_b41###); Touvron et al. (2023 ###reference_b55###). Among encoder-only models, we had the best experience with DeBERTaV3 which is pre-trained with the highly effective RTD objective and exists in multiple sizes and with a multilingual variant (He et al., 2021 ###reference_b20###). Processing and training is implemented with Hugging Face Transformers. We use label2id = {\"entailment\": 0, \"not_entailment\": 1} for compatibility with the ZeroShotClassificationPipeline; pad and truncate to a maximum length of 512 tokens; base hyperparameters on the recommended fine-tuning hyperparameters in the appendix of the DeBERTaV3 paper He et al. (2021 ###reference_b20###) and do not conduct a hyperparameter search as it adds little value over the recommended hyperparameters in our experience while adding complexity.\nWe fine-tune models with three different data compositions for evaluation: (1) one model trained on all datasets (deberta-v3-zeroshot-v1.1-all-33); (2) one model trained on only the five NLI datasets as a baseline representing previous NLI-only zeroshot models (deberta-v3-nli-only); (3) 28 different models, each trained with all datasets, except one non-NLI dataset is held out. This last group of models is trained to test zeroshot generalisation to tasks the model has not seen during training. For each of the 28 models, we take the performance metric for the dataset that was held out in the respective training run. Based on these 28 metrics, we know what the performance for each task would be, if the model had seen all datasets, except the respective held out dataset.\nOne training run on around 9000000 concatenated hypothesis-premise pairs for 3 epochs takes around 5 hours for DeBERTaV3-base and 10 hours for DeBERTaV3-large on one A100 40GB GPU. Training and evaluating all 30 models takes around 6 (base) or 15 (large) full days of compute, mostly due to the the 28 models trained for held-out testing.\nWe use balanced accuracy as our main evaluation metric (Buitinck et al., 2013 ###reference_b6###) as many of our datasets are class imbalanced and the metric is easier to interpret than F1 macro. For evaluation on non-NLI datasets, remember that rows have been multiplied with one row per class hypothesis. The compute_metrics_nli_binary function handles the calculation of metrics for these reformatted datasets.\ndeberta-v3-zeroshot-v1.1-all-33 is the model we recommend for downstream use. The model is available in different sizes in our zeroshot collection on the Hugging Face Hub.101010https://huggingface.co/collections/MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f ###reference_aurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f###"
76
+ },
77
+ {
78
+ "section_id": "3.5",
79
+ "parent_section_id": "3",
80
+ "section_name": "Visualisation and interpretation of results",
81
+ "text": ""
82
+ },
83
+ {
84
+ "section_id": "3.5.x",
85
+ "parent_section_id": "3.5",
86
+ "section_name": "viz.ipynb",
87
+ "text": "###figure_2### The NLI-only classifier (deberta-v3-nli-only) is very similar to existing zeroshot classifiers on the Hugging Face hub. It can do all tasks to some extent, given it\u2019s training on the universal NLI task. It performs well on simple binary tasks such as sentiment classification, but struggles on other tasks that are too dissimilar from standard NLI texts and have more classes.\ndeberta-v3-zeroshot-v1.1-all-33 has seen up to 500 examples for each class in each dataset. Only based on this small amount of data, it achieves strongly improved performance across all tasks. This is in line with prior research indicating that little, but good quality data is necessary for language models to generalize well (Zhou et al., 2023 ###reference_b68###).\ndeberta-v3-zeroshot-v1.1-heldout provides an indication of zeroshot performance for tasks the model has not seen during training. We highlight two main insights: First, models trained with a mix of NLI data and non-NLI data achieve overall better zeroshot performance than the NLI-only model (+9.4% on average). Having seen different zeroshot-style hypotheses helps the model generalize to other unseen tasks and hypotheses (positive transfer). Second, there are a few cases of negative transfer. On a few datasets, the NLI-only model performs better than deberta-v3-zeroshot-v1.1-heldout, indicating that the additional task-mix can make the model over- or underpredict a few classes.\nOverall, deberta-v3-zeroshot-v1.1-all-33 significantly outperforms the NLI-only model both on held-in and held-out tasks. Its performance on datasets it has not seen during training can expected to be around 9.4% higher than NLI-only models. Moreover, it can simultaneously perform many different tasks it has seen during training with even better performance. Detailed metrics are available in the appendix and the model cards.111111https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33 ###reference_a-v3-large-zeroshot-v1.1-all-33###"
88
+ },
89
+ {
90
+ "section_id": "4",
91
+ "parent_section_id": null,
92
+ "section_name": "Reusing our models and code",
93
+ "text": "We envisage three main ways in which our models and code can be reused. First, users can directly use deberta-v3-zeroshot-v1.1-all-33 for zeroshot classification in just a few lines of code with the Hugging Face ZeroShotClassificationPipeline (see code in figure 2 ###reference_###). This should work particularly well for tasks that are similar to one of the 33 datasets and 389 classes we used for training, including many different topics, sentiment, emotions, or types of toxicity.\nSecond, the models can be used as a base models to fine-tune a task-specific classifier. Prior research shows that fine-tuning an NLI-based classifier requires less training data and increases robustness compared standard fine-tuning of DeBERTaV3-base (Laurer et al., 2023a ###reference_b22###; Raman et al., 2023 ###reference_b42###; Le Scao and Rush, 2021 ###reference_b24###). Good performance can be achieved with just a few hundred examples per class, requiring only some minutes of fine-tuning on a free GPU (Laurer et al., 2023b ###reference_b23###). We provide code examples for this approach in an online workshop.121212See the notebook 4_tune_bert_nli.ipynb at https://github.com/MoritzLaurer/summer-school-transformers-2023/tree/main ###reference_ool-transformers-2023/tree/main###.\nThird, researchers can modify our notebooks, for example by adding more datasets for a specific domain and task family, and rerun the improved pipeline to build a universal classifier that is better adapted to their domain and tasks. While fine-tuning deberta-v3-zeroshot-v1.1-all-33 is recommended for individual tasks, rerunning the pipeline could add value if researchers want to build a new universal model adapted to a broader set of tasks or domains. We estimate that the final model can be trained with a \u20ac 50 Google Colab Pro+ subscription.\nIn all three use-cases, making predictions with the resulting models (inference) is highly efficient with cheap GPUs, but is also possible with on a laptop CPU."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Limitations",
99
+ "text": "We outline several limitations of this paper and invite readers to improve on our implementation. First, while we have included 28 non-NLI datasets, the diversity of these academic datasets is limited and they do not cover the full diversity of classification use-cases users will need in practice. All datasets are only in English. The instruction fine-tuning literature for generative LLMs has shown the potential of using SotA models like GPT-4 to generate diverse training data and distilling their capabilities into much smaller models (Taori et al., 2023 ###reference_b53###; Tunstall et al., 2023 ###reference_b56###). While many such datasets exist for generative tasks, hardly any are available for encoder-only classifiers like BERT (Sileo, 2023 ###reference_b49###; Longpre et al., 2023a ###reference_b26###, b ###reference_b27###). We assume that smart LLM prompting could result in a more diverse dataset than our collection and could further improve generalisation.\nSecond, the model comparisons are limited as we only compare BERT-NLI models among each other. We do not compare classification performance, inference speed, memory requirements, and costs to larger generative LLMs or APIs.\nThird, we assume that our data still contains a certain degree of noise. Additional data cleaning techniques could be used, for example discarding training data where the DeBERTa-v3 model still disagrees with the label after fine-tuning or targeted manual inspection enabled by active learning.\nFourth, an inherent limitation of NLI for zeroshot classification is that each additional class requires an additional forward pass (prediction) through the model. This makes the approach less suitable for tasks with a high amount of classes. At the same time, even if multiple forward passes are required, encoder-only models with only around a hundred million parameters are still more efficient than decoder models with multiple billion parameters while possibly being more accurate (Xu et al., 2023 ###reference_b62###; Schick and Sch\u00fctze, 2021b ###reference_b48###).\nFifth, we use the relatively old DeBERTa-v3 from November 2021 (He et al., 2021 ###reference_b20###), which misses relevant recent innovations like longer context windows or flash attention (Dao et al., 2022 ###reference_b14###). Unfortunately we are not aware of a better encoder-only model and releases have recently been dominated by larger generative decoder models.\nSixth, several other universal classification approaches exist that were beyond the scope of this paper: PET, which combines masked-language-modeling and label verbalisation (Schick and Sch\u00fctze, 2021a ###reference_b47###), replaced-token-detection combined with prompts (Xia et al., 2022 ###reference_b61###; Yao et al., 2022 ###reference_b63###; Xu et al., 2023 ###reference_b62###), question-answering (Bragg et al., 2021 ###reference_b5###), or next-sentence-prediction as an interesting self-supervised alternative to NLI (Ma et al., 2021 ###reference_b28###; Sun et al., 2022 ###reference_b52###)."
100
+ },
101
+ {
102
+ "section_id": "6",
103
+ "parent_section_id": null,
104
+ "section_name": "Conclusion and call for a new foundation model",
105
+ "text": "This paper explains how to use the Natural Language Inference task to build a universal classifier and provides practical guidance to users. Looking forward, we believe that there is significant room for improvement by building upon the insights from generative LLM research for more efficient classifiers.\nFirst, generative LLMs gain their power by learning their universal task (next-token-prediction) already during self-supervised pre-training and not only during fine-tuning (a limitation of our models). It is possible that universal self-supervised tasks exist for classification tasks as well (or discriminative tasks more generally). The most promising candidate is ELECTRA\u2019s replaced-token-detection (RTD) objective (Clark et al., 2020 ###reference_b11###), which can make models with only a few hundred million parameters perform comparably to models with 1.5 billion parameters that are trained on the the less efficient generative masked-language-modeling objective (He et al., 2021 ###reference_b20###). We hypothesize that the RTD objective could be supplemented with a binary \u201coriginal text\" vs. \u201cnot-original text\" objective, resulting in a universal classification head similar to the universal \u201centailment\" vs. \u201cnot-entailment\" task - without requiring supervision. Xu et al. (2023 ###reference_b62###) go in this direction, but did not experiment with a self-supervised task.\nSecond, a new foundation model trained on this task could then also be trained with other more recent innovations, which existing encoder-only models are currently lacking: flash attention (Dao et al., 2022 ###reference_b14###), grouped-query attention (Ainslie et al., 2023 ###reference_b2###), better positional embeddings like RoPe or AliBi to enable longer context windows (Su et al., 2023 ###reference_b51###; Press et al., 2022 ###reference_b38###), and scaling pre-training data and compute while only moderately scaling model size for inference-time efficiency (Hoffmann et al., 2022 ###reference_b21###).\nThird, similar to generative LLMs, better instruction data could make universal classifiers more useful. As discussed in the limitations section, especially synthetic data from much larger generative LLMs tailored to universal classifiers has the potential to flexibly teach efficient classifiers more diverse and more practically relevant tasks. The creators of the WANLI dataset have already demonstrated this potential with GPT3 (Liu et al., 2022 ###reference_b25###) and it is safe to assume that newer generators will produce even better data.\nThese points would entail pre-training a new foundation model from scratch, which requires large amounts of resources. We believe that such a foundation model for text classification would be a useful addition to the open-source ecosystem as the field has progress significantly since the last encoder-only models were released and classification tasks constitute a relevant share of both academic and practical applications for language models."
106
+ }
107
+ ],
108
+ "appendix": [
109
+ {
110
+ "section_id": "Appendix 1",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix A Metrics details",
113
+ "text": "Detailed metrics per dataset are reported in the model cards for the base-sized model at https://huggingface.co/MoritzLaurer/deberta-v3-base-zeroshot-v1.1-all-33 ###reference_a-v3-base-zeroshot-v1.1-all-33### and for the large-sized model at https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33 ###reference_a-v3-large-zeroshot-v1.1-all-33###. See also figures 4 ###reference_### and 5 ###reference_### below.\n###figure_3### The financialphrasebank dataset provides a good example for negative transfer that is present for the base sized model, but not the large model. financialphrasebank is a three class sentiment classification task with a third neutral category. The task mix includes other binary sentiment tasks with only the classes \u201cpositive\" vs \u201cnegative\". We assume that the base-sized model underpredicted the \u201cneutral\" class on financialphrasebank under the heldout condition, as it was not sufficiently represented in the remaining data. This presumable led to a negative transfer where the NLI-only model performed better without the additional task-mix.\n###figure_4###"
114
+ },
115
+ {
116
+ "section_id": "Appendix 2",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix B Hypotheses per task",
119
+ "text": "The exact hypotheses used for each task and class is available in the notebook data-harmonization-nli.ipynb ###reference_lassifier/blob/main/v1_human_data/1_data_harmonization_nli.ipynb### or in the model cards on the Hugging Face Hub:\nhttps://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33 ###reference_a-v3-large-zeroshot-v1.1-all-33###. For optimal performance, we recommend that users formulate their hypotheses in a similar fashion."
120
+ },
121
+ {
122
+ "section_id": "Appendix 3",
123
+ "parent_section_id": null,
124
+ "section_name": "Appendix C Datasets",
125
+ "text": "For details on all datasets used, see the overview table.131313https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/v1_human_data/datasets_overview.csv ###reference_lassifier/blob/main/v1_human_data/datasets_overview.csv### To give citation credit to the authors of all datasets, here is the full list of dataset sources: Grano et al. (2017 ###reference_b19###); Davidson et al. (2017 ###reference_b15###); Saravia et al. (2018 ###reference_b46###); Zhang et al. (2015 ###reference_b67###); Almeida et al. (2011 ###reference_b3###); Casanueva et al. (2020 ###reference_b8###); Malo et al. (2014 ###reference_b30###); Mathew et al. (2021 ###reference_b31###); McAuley and Leskovec (2013 ###reference_b32###); Soups (2015 ###reference_b50###); Faruqui and Das (2018 ###reference_b16###); Maas et al. (2011 ###reference_b29###); FitzGerald et al. (2023 ###reference_b17###); Pang and Lee (2005 ###reference_b36###); Chatterjee et al. (2019 ###reference_b9###); Sap et al. (2020 ###reference_b45###); Rashkin et al. (2019 ###reference_b43###); Adams et al. (2017 ###reference_b1###); Gekhman et al. (2023 ###reference_b18###); Unknown (2024 ###reference_b57###); Parrish et al. (2021 ###reference_b37###); Nie et al. (2020 ###reference_b33###); Williams et al. (2018 ###reference_b59###); Liu et al. (2022 ###reference_b25###); Burst et al. (2020 ###reference_b7###); Project (2015 ###reference_b39###); Thorne et al. (2018 ###reference_b54###)"
126
+ }
127
+ ],
128
+ "tables": {},
129
+ "image_paths": {
130
+ "1": {
131
+ "figure_path": "2312.17543v2_figure_1.png",
132
+ "caption": "Figure 1: Illustration of universal classification with BERT-NLI based on Laurer et al., 2023a",
133
+ "url": "http://arxiv.org/html/2312.17543v2/x1.png"
134
+ },
135
+ "2": {
136
+ "figure_path": "2312.17543v2_figure_2.png",
137
+ "caption": "Figure 2: Example for using the resulting universal classifiers in the zeroshot pipeline",
138
+ "url": "http://arxiv.org/html/2312.17543v2/extracted/5489545/HuggingFace.png"
139
+ },
140
+ "3": {
141
+ "figure_path": "2312.17543v2_figure_3.png",
142
+ "caption": "Figure 3: Mean performance across 28 classification tasks.",
143
+ "url": "http://arxiv.org/html/2312.17543v2/extracted/5489545/fig_large_v1.1_avg.png"
144
+ },
145
+ "4": {
146
+ "figure_path": "2312.17543v2_figure_4.png",
147
+ "caption": "Figure 4: Metrics for large-sized model",
148
+ "url": "http://arxiv.org/html/2312.17543v2/extracted/5489545/fig_large_v1.1.png"
149
+ },
150
+ "5": {
151
+ "figure_path": "2312.17543v2_figure_5.png",
152
+ "caption": "Figure 5: Metrics for base-sized model",
153
+ "url": "http://arxiv.org/html/2312.17543v2/extracted/5489545/fig_base_v1.1.png"
154
+ }
155
+ },
156
+ "validation": true,
157
+ "references": [
158
+ {
159
+ "1": {
160
+ "title": "Toxic Comment Classification Challenge.",
161
+ "author": "C.J Adams, Will Cukierski, Jeffrey Sorensen, Julia Elliott, Lucas Dixon, Mark McDonald, and nithum. 2017.",
162
+ "venue": null,
163
+ "url": "https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge"
164
+ }
165
+ },
166
+ {
167
+ "2": {
168
+ "title": "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints.",
169
+ "author": "Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebr\u00f3n, and Sumit Sanghai. 2023.",
170
+ "venue": "ArXiv:2305.13245 [cs].",
171
+ "url": "http://arxiv.org/abs/2305.13245"
172
+ }
173
+ },
174
+ {
175
+ "3": {
176
+ "title": "Contributions to the study of SMS spam filtering: new collection and results.",
177
+ "author": "Tiago A. Almeida, Jos\u00e9 Mar\u00eda G. Hidalgo, and Akebo Yamakami. 2011.",
178
+ "venue": "In Proceedings of the 11th ACM symposium on Document engineering, DocEng \u201911, pages 259\u2013262, New York, NY, USA. Association for Computing Machinery.",
179
+ "url": "https://doi.org/10.1145/2034691.2034742"
180
+ }
181
+ },
182
+ {
183
+ "4": {
184
+ "title": "A large annotated corpus for learning natural language inference.",
185
+ "author": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015.",
186
+ "venue": "arXiv:1508.05326 [cs].",
187
+ "url": "http://arxiv.org/abs/1508.05326"
188
+ }
189
+ },
190
+ {
191
+ "5": {
192
+ "title": "FLEX: Unifying Evaluation for Few-Shot NLP.",
193
+ "author": "Jonathan Bragg, Arman Cohan, Kyle Lo, and Iz Beltagy. 2021.",
194
+ "venue": "arXiv:2107.07170 [cs].",
195
+ "url": "http://arxiv.org/abs/2107.07170"
196
+ }
197
+ },
198
+ {
199
+ "6": {
200
+ "title": "API design for machine learning software: experiences from the scikit-learn project.",
201
+ "author": "Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake Vanderplas, Arnaud Joly, Brian Holt, and Ga\u00ebl Varoquaux. 2013.",
202
+ "venue": "ArXiv:1309.0238 [cs].",
203
+ "url": "https://doi.org/10.48550/arXiv.1309.0238"
204
+ }
205
+ },
206
+ {
207
+ "7": {
208
+ "title": "Manifesto Corpus.",
209
+ "author": "Tobias Burst, Krause Werner, Pola Lehmann, Lewandowski Jirka, Theres Matthei\u00df, Nicolas Merz, Sven Regel, and Lisa Zehnter. 2020.",
210
+ "venue": null,
211
+ "url": "https://manifesto-project.wzb.eu/information/documents/corpus"
212
+ }
213
+ },
214
+ {
215
+ "8": {
216
+ "title": "Efficient Intent Detection with Dual Sentence Encoders.",
217
+ "author": "I\u00f1igo Casanueva, Tadas Tem\u010dinas, Daniela Gerz, Matthew Henderson, and Ivan Vuli\u0107. 2020.",
218
+ "venue": "In Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, pages 38\u201345, Online. Association for Computational Linguistics.",
219
+ "url": "https://doi.org/10.18653/v1/2020.nlp4convai-1.5"
220
+ }
221
+ },
222
+ {
223
+ "9": {
224
+ "title": "SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text.",
225
+ "author": "Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019.",
226
+ "venue": "In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 39\u201348, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
227
+ "url": "https://doi.org/10.18653/v1/S19-2005"
228
+ }
229
+ },
230
+ {
231
+ "10": {
232
+ "title": "Scaling Instruction-Finetuned Language Models.",
233
+ "author": "Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022.",
234
+ "venue": "ArXiv:2210.11416 [cs].",
235
+ "url": "http://arxiv.org/abs/2210.11416"
236
+ }
237
+ },
238
+ {
239
+ "11": {
240
+ "title": "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.",
241
+ "author": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020.",
242
+ "venue": "arXiv:2003.10555 [cs].",
243
+ "url": "http://arxiv.org/abs/2003.10555"
244
+ }
245
+ },
246
+ {
247
+ "12": {
248
+ "title": "XNLI: Evaluating Cross-lingual Sentence Representations.",
249
+ "author": "Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018.",
250
+ "venue": "arXiv:1809.05053 [cs].",
251
+ "url": "http://arxiv.org/abs/1809.05053"
252
+ }
253
+ },
254
+ {
255
+ "13": {
256
+ "title": "The PASCAL Recognising Textual Entailment Challenge.",
257
+ "author": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006.",
258
+ "venue": "In Joaquin Qui\u00f1onero-Candela, Ido Dagan, Bernardo Magnini, and Florence d\u2019Alch\u00e9 Buc, editors, Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, volume 3944, pages 177\u2013190. Springer Berlin Heidelberg, Berlin, Heidelberg.",
259
+ "url": "https://doi.org/10.1007/11736790_9"
260
+ }
261
+ },
262
+ {
263
+ "14": {
264
+ "title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness.",
265
+ "author": "Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher R\u00e9. 2022.",
266
+ "venue": "ArXiv:2205.14135 [cs].",
267
+ "url": "http://arxiv.org/abs/2205.14135"
268
+ }
269
+ },
270
+ {
271
+ "15": {
272
+ "title": "Automated Hate Speech Detection and the Problem of Offensive Language.",
273
+ "author": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017.",
274
+ "venue": "Proceedings of the International AAAI Conference on Web and Social Media, 11(1):512\u2013515.",
275
+ "url": "https://doi.org/10.1609/icwsm.v11i1.14955"
276
+ }
277
+ },
278
+ {
279
+ "16": {
280
+ "title": "Identifying Well-formed Natural Language Questions.",
281
+ "author": "Manaal Faruqui and Dipanjan Das. 2018.",
282
+ "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 798\u2013803, Brussels, Belgium. Association for Computational Linguistics.",
283
+ "url": "https://doi.org/10.18653/v1/D18-1091"
284
+ }
285
+ },
286
+ {
287
+ "17": {
288
+ "title": "MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages.",
289
+ "author": "Jack FitzGerald, Christopher Hench, Charith Peris, Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron Nash, Liam Urbach, Vishesh Kakarala, Richa Singh, Swetha Ranganath, Laurie Crist, Misha Britan, Wouter Leeuwis, Gokhan Tur, and Prem Natarajan. 2023.",
290
+ "venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4277\u20134302, Toronto, Canada. Association for Computational Linguistics.",
291
+ "url": "https://doi.org/10.18653/v1/2023.acl-long.235"
292
+ }
293
+ },
294
+ {
295
+ "18": {
296
+ "title": "TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models.",
297
+ "author": "Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. 2023.",
298
+ "venue": "ArXiv:2305.11171 [cs].",
299
+ "url": "http://arxiv.org/abs/2305.11171"
300
+ }
301
+ },
302
+ {
303
+ "19": {
304
+ "title": "Android apps and user feedback: a dataset for software evolution and quality improvement.",
305
+ "author": "Giovanni Grano, Andrea Di Sorbo, Francesco Mercaldo, Corrado A. Visaggio, Gerardo Canfora, and Sebastiano Panichella. 2017.",
306
+ "venue": "In Proceedings of the 2nd ACM SIGSOFT International Workshop on App Market Analytics, pages 8\u201311, Paderborn Germany. ACM.",
307
+ "url": "https://doi.org/10.1145/3121264.3121266"
308
+ }
309
+ },
310
+ {
311
+ "20": {
312
+ "title": "DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing.",
313
+ "author": "Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021.",
314
+ "venue": "arXiv:2111.09543 [cs].",
315
+ "url": "http://arxiv.org/abs/2111.09543"
316
+ }
317
+ },
318
+ {
319
+ "21": {
320
+ "title": "Training Compute-Optimal Large Language Models.",
321
+ "author": "Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022.",
322
+ "venue": "ArXiv:2203.15556 [cs].",
323
+ "url": "https://doi.org/10.48550/arXiv.2203.15556"
324
+ }
325
+ },
326
+ {
327
+ "22": {
328
+ "title": "Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI.",
329
+ "author": "Moritz Laurer, Wouter Van Atteveldt, Andreu Casas, and Kasper Welbers. 2023a.",
330
+ "venue": "Political Analysis, pages 1\u201333.",
331
+ "url": "https://doi.org/10.1017/pan.2023.20"
332
+ }
333
+ },
334
+ {
335
+ "23": {
336
+ "title": "Lowering the Language Barrier: Investigating Deep Transfer Learning and Machine Translation for Multilingual Analyses of Political Texts.",
337
+ "author": "Moritz Laurer, Wouter Van Atteveldt, Andreu Casas, and Kasper Welbers. 2023b.",
338
+ "venue": "Computational Communication Research, 5(2):1.",
339
+ "url": "https://doi.org/10.5117/CCR2023.2.7.LAUR"
340
+ }
341
+ },
342
+ {
343
+ "24": {
344
+ "title": "How many data points is a prompt worth?",
345
+ "author": "Teven Le Scao and Alexander Rush. 2021.",
346
+ "venue": "In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2627\u20132636, Online. Association for Computational Linguistics.",
347
+ "url": "https://doi.org/10.18653/v1/2021.naacl-main.208"
348
+ }
349
+ },
350
+ {
351
+ "25": {
352
+ "title": "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation.",
353
+ "author": "Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022.",
354
+ "venue": "In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6826\u20136847, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
355
+ "url": "https://doi.org/10.18653/v1/2022.findings-emnlp.508"
356
+ }
357
+ },
358
+ {
359
+ "26": {
360
+ "title": "The Flan Collection: Designing Data and Methods for Effective Instruction Tuning.",
361
+ "author": "Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023a.",
362
+ "venue": "ArXiv:2301.13688 [cs].",
363
+ "url": "http://arxiv.org/abs/2301.13688"
364
+ }
365
+ },
366
+ {
367
+ "27": {
368
+ "title": "The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI.",
369
+ "author": "Shayne Longpre, Robert Mahari, Anthony Chen, Naana Obeng-Marnu, Damien Sileo, William Brannon, Niklas Muennighoff, Nathan Khazam, Jad Kabbara, Kartik Perisetla, Xinyi Wu, Enrico Shippole, Kurt Bollacker, Tongshuang Wu, Luis Villa, Sandy Pentland, and Sara Hooker. 2023b.",
370
+ "venue": "ArXiv:2310.16787 [cs].",
371
+ "url": "http://arxiv.org/abs/2310.16787"
372
+ }
373
+ },
374
+ {
375
+ "28": {
376
+ "title": "Issues with Entailment-based Zero-shot Text Classification.",
377
+ "author": "Tingting Ma, Jin-Ge Yao, Chin-Yew Lin, and Tiejun Zhao. 2021.",
378
+ "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 786\u2013796, Online. Association for Computational Linguistics.",
379
+ "url": "https://doi.org/10.18653/v1/2021.acl-short.99"
380
+ }
381
+ },
382
+ {
383
+ "29": {
384
+ "title": "Learning Word Vectors for Sentiment Analysis.",
385
+ "author": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011.",
386
+ "venue": "In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142\u2013150, Portland, Oregon, USA. Association for Computational Linguistics.",
387
+ "url": "https://aclanthology.org/P11-1015"
388
+ }
389
+ },
390
+ {
391
+ "30": {
392
+ "title": "Good debt or bad debt: Detecting semantic orientations in economic texts.",
393
+ "author": "Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wallenius, and Pyry Takala. 2014.",
394
+ "venue": "Journal of the Association for Information Science and Technology, 65(4):782\u2013796.",
395
+ "url": "https://doi.org/10.1002/asi.23062"
396
+ }
397
+ },
398
+ {
399
+ "31": {
400
+ "title": "HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection.",
401
+ "author": "Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021.",
402
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 35(17):14867\u201314875.",
403
+ "url": "https://doi.org/10.1609/aaai.v35i17.17745"
404
+ }
405
+ },
406
+ {
407
+ "32": {
408
+ "title": "Hidden factors and hidden topics: understanding rating dimensions with review text.",
409
+ "author": "Julian McAuley and Jure Leskovec. 2013.",
410
+ "venue": "In Proceedings of the 7th ACM conference on Recommender systems, pages 165\u2013172, Hong Kong China. ACM.",
411
+ "url": "https://doi.org/10.1145/2507157.2507163"
412
+ }
413
+ },
414
+ {
415
+ "33": {
416
+ "title": "Adversarial NLI: A New Benchmark for Natural Language Understanding.",
417
+ "author": "Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020.",
418
+ "venue": "arXiv:1910.14599 [cs].",
419
+ "url": "http://arxiv.org/abs/1910.14599"
420
+ }
421
+ },
422
+ {
423
+ "34": {
424
+ "title": "GPT-4 Technical Report.",
425
+ "author": "OpenAI. 2023.",
426
+ "venue": "ArXiv:2303.08774 [cs].",
427
+ "url": "https://doi.org/10.48550/arXiv.2303.08774"
428
+ }
429
+ },
430
+ {
431
+ "35": {
432
+ "title": "Training language models to follow instructions with human feedback.",
433
+ "author": "Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022.",
434
+ "venue": "ArXiv:2203.02155 [cs].",
435
+ "url": "http://arxiv.org/abs/2203.02155"
436
+ }
437
+ },
438
+ {
439
+ "36": {
440
+ "title": "Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales.",
441
+ "author": "Bo Pang and Lillian Lee. 2005.",
442
+ "venue": "In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL\u201905), pages 115\u2013124, Ann Arbor, Michigan. Association for Computational Linguistics.",
443
+ "url": "https://doi.org/10.3115/1219840.1219855"
444
+ }
445
+ },
446
+ {
447
+ "37": {
448
+ "title": "Does Putting a Linguist in the Loop Improve NLU Data Collection?",
449
+ "author": "Alicia Parrish, William Huang, Omar Agha, Soo-Hwan Lee, Nikita Nangia, Alex Warstadt, Karmanya Aggarwal, Emily Allaway, Tal Linzen, and Samuel R. Bowman. 2021.",
450
+ "venue": "arXiv:2104.07179 [cs].",
451
+ "url": "http://arxiv.org/abs/2104.07179"
452
+ }
453
+ },
454
+ {
455
+ "38": {
456
+ "title": "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation.",
457
+ "author": "Ofir Press, Noah A. Smith, and Mike Lewis. 2022.",
458
+ "venue": "ArXiv:2108.12409 [cs].",
459
+ "url": "http://arxiv.org/abs/2108.12409"
460
+ }
461
+ },
462
+ {
463
+ "39": {
464
+ "title": "US State of the Union Speeches.",
465
+ "author": "Policy Agendas Project. 2015.",
466
+ "venue": null,
467
+ "url": "https://www.comparativeagendas.net/datasets_codebooks"
468
+ }
469
+ },
470
+ {
471
+ "40": {
472
+ "title": "Language Models are Unsupervised Multitask Learners.",
473
+ "author": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019.",
474
+ "venue": null,
475
+ "url": "https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf"
476
+ }
477
+ },
478
+ {
479
+ "41": {
480
+ "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.",
481
+ "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020.",
482
+ "venue": "ArXiv:1910.10683 [cs, stat].",
483
+ "url": "http://arxiv.org/abs/1910.10683"
484
+ }
485
+ },
486
+ {
487
+ "42": {
488
+ "title": "Model-tuning Via Prompts Makes NLP Models Adversarially Robust.",
489
+ "author": "Mrigank Raman, Pratyush Maini, J. Zico Kolter, Zachary C. Lipton, and Danish Pruthi. 2023.",
490
+ "venue": "ArXiv:2303.07320 [cs].",
491
+ "url": "http://arxiv.org/abs/2303.07320"
492
+ }
493
+ },
494
+ {
495
+ "43": {
496
+ "title": "Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset.",
497
+ "author": "Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019.",
498
+ "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370\u20135381, Florence, Italy. Association for Computational Linguistics.",
499
+ "url": "https://doi.org/10.18653/v1/P19-1534"
500
+ }
501
+ },
502
+ {
503
+ "44": {
504
+ "title": "Multitask Prompted Training Enables Zero-Shot Task Generalization.",
505
+ "author": "Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M. Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022.",
506
+ "venue": "ArXiv:2110.08207 [cs].",
507
+ "url": "http://arxiv.org/abs/2110.08207"
508
+ }
509
+ },
510
+ {
511
+ "45": {
512
+ "title": "Social Bias Frames: Reasoning about Social and Power Implications of Language.",
513
+ "author": "Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020.",
514
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477\u20135490, Online. Association for Computational Linguistics.",
515
+ "url": "https://doi.org/10.18653/v1/2020.acl-main.486"
516
+ }
517
+ },
518
+ {
519
+ "46": {
520
+ "title": "CARER: Contextualized Affect Representations for Emotion Recognition.",
521
+ "author": "Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018.",
522
+ "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3687\u20133697, Brussels, Belgium. Association for Computational Linguistics.",
523
+ "url": "https://doi.org/10.18653/v1/D18-1404"
524
+ }
525
+ },
526
+ {
527
+ "47": {
528
+ "title": "Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference.",
529
+ "author": "Timo Schick and Hinrich Sch\u00fctze. 2021a.",
530
+ "venue": "In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255\u2013269, Online. Association for Computational Linguistics.",
531
+ "url": "https://doi.org/10.18653/v1/2021.eacl-main.20"
532
+ }
533
+ },
534
+ {
535
+ "48": {
536
+ "title": "It\u2019s Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners.",
537
+ "author": "Timo Schick and Hinrich Sch\u00fctze. 2021b.",
538
+ "venue": "arXiv:2009.07118 [cs].",
539
+ "url": "http://arxiv.org/abs/2009.07118"
540
+ }
541
+ },
542
+ {
543
+ "49": {
544
+ "title": "tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation.",
545
+ "author": "Damien Sileo. 2023.",
546
+ "venue": "ArXiv:2301.05948 [cs].",
547
+ "url": "https://doi.org/10.48550/arXiv.2301.05948"
548
+ }
549
+ },
550
+ {
551
+ "50": {
552
+ "title": "Yelp Dataset Challenge is Doubling Up!",
553
+ "author": "R Soups. 2015.",
554
+ "venue": null,
555
+ "url": "https://engineeringblog.yelp.com/2015/02/yelp-dataset-challenge-is-doubling-up.html"
556
+ }
557
+ },
558
+ {
559
+ "51": {
560
+ "title": "RoFormer: Enhanced Transformer with Rotary Position Embedding.",
561
+ "author": "Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2023.",
562
+ "venue": "ArXiv:2104.09864 [cs].",
563
+ "url": "https://doi.org/10.48550/arXiv.2104.09864"
564
+ }
565
+ },
566
+ {
567
+ "52": {
568
+ "title": "NSP-BERT: A Prompt-based Few-Shot Learner Through an Original Pre-training Task\u2013Next Sentence Prediction.",
569
+ "author": "Yi Sun, Yu Zheng, Chao Hao, and Hangping Qiu. 2022.",
570
+ "venue": "ArXiv:2109.03564 [cs].",
571
+ "url": "http://arxiv.org/abs/2109.03564"
572
+ }
573
+ },
574
+ {
575
+ "53": {
576
+ "title": "Alpaca: A Strong, Replicable Instruction-Following Model.",
577
+ "author": "Rohan Taori, Ishaan Gulrajani, Zhang Tianyi, Yann Dubois, Li Xuechen, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023.",
578
+ "venue": null,
579
+ "url": "https://github.com/tatsu-lab/stanford_alpaca"
580
+ }
581
+ },
582
+ {
583
+ "54": {
584
+ "title": "FEVER: a large-scale dataset for Fact Extraction and VERification.",
585
+ "author": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018.",
586
+ "venue": "ArXiv:1803.05355 [cs].",
587
+ "url": "https://doi.org/10.48550/arXiv.1803.05355"
588
+ }
589
+ },
590
+ {
591
+ "55": {
592
+ "title": "Llama 2: Open Foundation and Fine-Tuned Chat Models.",
593
+ "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas\nScialom. 2023.",
594
+ "venue": "ArXiv:2307.09288 [cs].",
595
+ "url": "https://doi.org/10.48550/arXiv.2307.09288"
596
+ }
597
+ },
598
+ {
599
+ "56": {
600
+ "title": "Zephyr: Direct Distillation of LM Alignment.",
601
+ "author": "Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Cl\u00e9mentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023.",
602
+ "venue": "ArXiv:2310.16944 [cs].",
603
+ "url": "https://doi.org/10.48550/arXiv.2310.16944"
604
+ }
605
+ },
606
+ {
607
+ "57": {
608
+ "title": "yahoo_answers_topics Datasets at Hugging Face.",
609
+ "author": "Unknown. 2024.",
610
+ "venue": null,
611
+ "url": "https://huggingface.co/datasets/yahoo_answers_topics"
612
+ }
613
+ },
614
+ {
615
+ "58": {
616
+ "title": "Entailment as Few-Shot Learner.",
617
+ "author": "Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. 2021.",
618
+ "venue": "arXiv:2104.14690 [cs].",
619
+ "url": "http://arxiv.org/abs/2104.14690"
620
+ }
621
+ },
622
+ {
623
+ "59": {
624
+ "title": "A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference.",
625
+ "author": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018.",
626
+ "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112\u20131122, New Orleans, Louisiana. Association for Computational Linguistics.",
627
+ "url": "https://doi.org/10.18653/v1/N18-1101"
628
+ }
629
+ },
630
+ {
631
+ "60": {
632
+ "title": "Transformers: State-of-the-Art Natural Language Processing.",
633
+ "author": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020.",
634
+ "venue": "In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38\u201345, Online. Association for Computational Linguistics.",
635
+ "url": "https://doi.org/10.18653/v1/2020.emnlp-demos.6"
636
+ }
637
+ },
638
+ {
639
+ "61": {
640
+ "title": "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models.",
641
+ "author": "Mengzhou Xia, Mikel Artetxe, Jingfei Du, Danqi Chen, and Veselin Stoyanov. 2022.",
642
+ "venue": "In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11351\u201311361, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
643
+ "url": "https://aclanthology.org/2022.emnlp-main.780"
644
+ }
645
+ },
646
+ {
647
+ "62": {
648
+ "title": "A Universal Discriminator for Zero-Shot Generalization.",
649
+ "author": "Haike Xu, Zongyu Lin, Jing Zhou, Yanan Zheng, and Zhilin Yang. 2023.",
650
+ "venue": "In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10559\u201310575, Toronto, Canada. Association for Computational Linguistics.",
651
+ "url": "https://aclanthology.org/2023.acl-long.589"
652
+ }
653
+ },
654
+ {
655
+ "63": {
656
+ "title": "Prompt Tuning for Discriminative Pre-trained Language Models.",
657
+ "author": "Yuan Yao, Bowen Dong, Ao Zhang, Zhengyan Zhang, Ruobing Xie, Zhiyuan Liu, Leyu Lin, Maosong Sun, and Jianyong Wang. 2022.",
658
+ "venue": "In Findings of the Association for Computational Linguistics: ACL 2022, pages 3468\u20133473, Dublin, Ireland. Association for Computational Linguistics.",
659
+ "url": "https://doi.org/10.18653/v1/2022.findings-acl.273"
660
+ }
661
+ },
662
+ {
663
+ "64": {
664
+ "title": "Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach.",
665
+ "author": "Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019.",
666
+ "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914\u20133923, Hong Kong, China. Association for Computational Linguistics.",
667
+ "url": "https://doi.org/10.18653/v1/D19-1404"
668
+ }
669
+ },
670
+ {
671
+ "65": {
672
+ "title": "DocNLI: A large-scale dataset for document-level natural language inference.",
673
+ "author": "Wenpeng Yin, Dragomir Radev, and Caiming Xiong. 2021.",
674
+ "venue": "In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4913\u20134922, Online. Association for Computational Linguistics.",
675
+ "url": "https://doi.org/10.18653/v1/2021.findings-acl.435"
676
+ }
677
+ },
678
+ {
679
+ "66": {
680
+ "title": "Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start.",
681
+ "author": "Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, and Caiming Xiong. 2020.",
682
+ "venue": "arXiv:2010.02584 [cs].",
683
+ "url": "http://arxiv.org/abs/2010.02584"
684
+ }
685
+ },
686
+ {
687
+ "67": {
688
+ "title": "Character-level Convolutional Networks for Text Classification.",
689
+ "author": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015.",
690
+ "venue": "In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.",
691
+ "url": "https://papers.nips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html"
692
+ }
693
+ },
694
+ {
695
+ "68": {
696
+ "title": "LIMA: Less Is More for Alignment.",
697
+ "author": "Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023.",
698
+ "venue": "ArXiv:2305.11206 [cs].",
699
+ "url": "http://arxiv.org/abs/2305.11206"
700
+ }
701
+ }
702
+ ],
703
+ "url": "http://arxiv.org/html/2312.17543v2"
704
+ }
20240322/2401.05224v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2401.05943v2.json ADDED
@@ -0,0 +1,102 @@
1
+ {
2
+ "title": "SoK: Analysis Techniques for WebAssembly",
3
+ "abstract": "WebAssembly is a low-level bytecode language that enables high-level languages like C, C++, and Rust to be executed in the browser at near-native performance. In recent years, WebAssembly has gained widespread adoption and is now natively supported by all modern browsers. Despite its benefits, WebAssembly has introduced significant security challenges, primarily due to vulnerabilities inherited from memory-unsafe source languages. Moreover, the use of WebAssembly extends beyond traditional web applications to smart contracts on blockchain platforms, where vulnerabilities have led to significant financial losses. WebAssembly has also been used for malicious purposes, like cryptojacking, where website visitors\u2019 hardware resources are used for crypto mining without their consent. To address these issues, several analysis techniques for WebAssembly binaries have been proposed. This paper presents a systematic review of these analysis techniques, focusing on vulnerability analysis, cryptojacking detection, and smart contract security. The analysis techniques are categorized into static, dynamic, and hybrid methods, evaluating their strengths and weaknesses based on quantitative data.\nOur findings reveal that static techniques are efficient but may struggle with complex binaries, while dynamic techniques offer better detection at the cost of increased overhead.\nHybrid approaches, which merge the strengths of static and dynamic methods, are not extensively used in the literature and emerge as a promising direction for future research.\nLastly, this paper identifies potential future research directions based on the state of the current literature.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "The Internet has come a long way since its inception and one of the key technologies that have enabled its growth and evolution is JavaScript. JavaScript, which was developed in the mid-1990s, is a programming language that is widely used to create interactive and dynamic websites. It was initially designed to enable basic interactivity on web pages, such as form validation and image slideshows. However, it has evolved into a versatile language that is used to build complex web applications. Today, JavaScript is one of the most popular programming languages in the world, currently being used by 98% of all websites [w3techs-2022-usageofjs].\nDespite its popularity and versatility, JavaScript has some inherent limitations that have become apparent as web applications have grown more complex and resource-demanding. Specifically, JavaScript is a high-level, interpreted, dynamically typed language, which fundamentally limits its performance. Consequently, it is not suited for developing resource-demanding web applications. To address the shortcomings of JavaScript, several technologies, like ActiveX [activex-2022], NaCl [native-client-2022], and asm.js [asm-js-2022], have been developed. However, these technologies have faced compatibility issues, security vulnerabilities, and performance limitations.\nWebAssembly was developed by a consortium of companies, including Mozilla, Microsoft, Apple, and Google, as a solution to the limitations of existing technologies. WebAssembly is designed as a safe, fast, and portable compilation target for high-level languages like C, C++, and Rust, allowing them to be executed with near-native performance in the browser. It has gained widespread adoption and is currently supported by 96% of all browser instances [can-i-use-2022]. Moreover, WebAssembly is also being extended to desktop applications [moller-2018-technical], mobile devices [pop-2021-secure], cloud computing [2022-fastlydocs], blockchain Virtual Machines (VMs) [ewasm-ethereum-2022, eosio-wasm-2022, near-2022-whatisasmart], IoT [liu-2021-aerogel, makitalo-2021-wasm], and embedded devices [scheidl-2020-valent].\nHowever, WebAssembly is not without its own set of challenges. Vulnerabilities in memory-unsafe languages, like C and C++, can translate into vulnerabilities in WebAssembly binaries [lehmann-2020-everythingoldis]. Unfortunately, two-thirds of WebAssembly binaries are compiled from memory-unsafe languages [hilbig-2021-empiricalstudyreal], and these attacks have been found to be practical in real-world scenarios [lehmann-2020-everythingoldis]. Vulnerabilities have also been uncovered in WebAssembly smart contracts [number-generator-2022, huang-2020-eosfuzzerfuzzingeosio], consequently causing significant financial loss. Moreover, WebAssembly has been used for malicious purposes, such as cryptojacking, where website visitor\u2019s hardware resources are used for crypto mining without their consent [musch-2019-newkidweb]. To mitigate these issues, several analysis techniques for WebAssembly binaries have been proposed.\nIn this paper, we conduct an in-depth literature review of analysis techniques for WebAssembly binaries, with a focus on their application across diverse computing environments, including web development, cloud computing, and edge computing. 
To this end, we classify the analysis techniques based on their strategy and objectives, uncovering three primary categories: Detecting malicious WebAssembly binaries (Section LABEL:sec:detecting-malicious-wasm-binaries), detecting vulnerabilities in WebAssembly binaries (Section LABEL:sec:detecting-vulnerabilities-in-wasm-binaries), and detecting vulnerabilities in WebAssembly smart contracts (Section LABEL:sec:detecting-vulnerabilities-in-wasm-smart-contracts). Moreover, we compare and evaluate the techniques using quantitative data, highlighting their strengths and weaknesses. Lastly, one of the main contributions of this paper is the identification of future research directions based on the literature review conducted.\nIn summary, this paper contributes the following:\nA comprehensive analysis of current analysis techniques for WebAssembly binaries, using quantitative data to evaluate their strengths and weaknesses.\nA taxonomical classification of current analysis techniques for WebAssembly binaries.\nKey findings and limitations of current analysis techniques for WebAssembly binaries, including the trade-offs between accuracy and overhead of static and dynamic analysis methods.\nIdentification of gaps in the literature and suggestions for future research directions.\nThe rest of this paper is structured as follows: Section 2 ###reference_### provides the necessary background information and the current state of research in the field. Section 3 ###reference_### reviews related work, highlighting previous studies and their contributions. Section 4 ###reference_### describes the methodology employed in our research, including the search strategy, selection process, data extraction, and analysis methods. The main findings of the systematic review are detailed in Section 5 ###reference_###, where we categorize and evaluate the analysis techniques for WebAssembly based on their method and effectiveness. A discussion of these findings and their implications are presented in Section LABEL:sec:discussion. Finally, Section LABEL:sec:conclusion concludes the paper and suggests future research directions."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Background",
15
+ "text": "The background section of this paper provides a detailed overview of WebAssembly. The limitations of JavaScript and prior attempts at incorporating low-level code on the web are first discussed. Then, an in-depth description of WebAssembly\u2019s security mechanisms, vulnerabilities, and use cases are presented."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "History",
21
+ "text": "JavaScript.\nInitially, the Internet was primarily used by researchers, scientists, and other academics to share information and collaborate on projects. At this time, websites were mostly composed of static text and images, lacking dynamic or interactive components. The arrival of web browsers such as Netscape Navigator and Internet Explorer in the late 1990s made the internet accessible to the general public and sparked the development of technology to enhance website user experience with dynamic and interactive elements. JavaScript, created by Netscape in 1995 [javascript-wikipedia], became one of these technologies, enabling web developers to create engaging content. Today, JavaScript is a widely used programming language supported by all major web browsers and used on 98% of websites [w3techs-2022-usageofjs].\nDespite its popularity and versatility, JavaScript has some inherent limitations that impact its performance. As a high-level language, JavaScript abstracts away many of the details of the underlying hardware, making it easier to write and understand. However, this also means that the JavaScript engine has to do more work to translate the code into machine-readable instructions. Additionally, because JavaScript is an interpreted language, it must be parsed and interpreted every time it is executed, which can add overhead and decrease performance. Lastly, JavaScript is dynamically typed, meaning the type of a variable is determined at runtime. This can make it difficult for the JavaScript engine to optimize the code, resulting in reduced performance. These limitations can hinder the performance of JavaScript in resource-demanding or complex applications. There is, therefore, a need for high-performance, low-level code on the web.\nActiveX.\nActiveX [activex-2022] is a deprecated framework that was introduced by Microsoft in 1996. It allowed developers to embed signed x86 binaries through ActiveX controls. These controls were built using the \\acCOM specification, which was intended to make the controls platform-independent. However, ActiveX controls contain compiled x86 machine code and calls to the standard Win32 API, restricting them to x86-based Windows machines. Additionally, they were not run in a sandboxed environment, consequently allowing them to access and modify system resources. In terms of security, ActiveX did not ensure safety through its technical design but rather through a trust model based on code signing.\nNaCl.\n\\acNaCl [native-client-2022] is a system introduced by Google in 2011 that allows for the execution of machine code on the web. The sandboxing model implemented by \\acNaCl enables the coexistence of \\acNaCl code with sensitive data within the same process. However, \\acNaCl is specifically designed for the x86 architecture, limiting its portability. To address this limitation, Google introduced \\acpNaCl [portable-native-client-2022] in 2013. \\acpNaCl builds upon \\acNaCl\u2019s sandboxing techniques and uses an LLVM bitcode subset as an interchangeable format, allowing for the portability of applications across different architectures. 
However, \\acpNaCl does not significantly improve compactness and still exposes details specific to compilers and architectures, like the call stack layout.\nThe portability of \\acNaCl and \\acpNaCl is also limited since they are only supported in Google Chrome.\nAsm.js.\nAsm.js [asm-js-2022], which was introduced by Mozilla in 2013, is a strict subset of JavaScript that can be used as an efficient compilation target for high-level languages like C and C++. Through the Emscripten toolchain [emscripten-page], these languages can be compiled to asm.js and subsequently executed on modern JavaScript execution engines, benefitting from sophisticated \\acJIT compilers. This allows for near-native performance.\nHowever, the nature of asm.js as a strict subset of JavaScript means that any extension of its features requires modifications to JavaScript first, followed by ensuring these changes are compatible with asm.js, which makes features challenging to implement effectively.\nJava and Flash.\nIt is also worth noting that Java and Flash were among the first technologies to be used on the web, being released in 1995 and 1996, respectively [java-launch, flash-launch]. They offered managed runtime plugins; however, neither was capable of supporting high-performance, low-level code. Moreover, their usage has declined due to security vulnerabilities and performance issues."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "WebAssembly",
+ "text": "###figure_1### Overview.\nWebAssembly is a technology that aims to address performance, compatibility, and security issues that have plagued previous approaches. It was developed by a consortium of tech companies, including Mozilla, Microsoft, Apple, and Google, and was released in 2017 [haas-2017-bringingwebspeed]. WebAssembly has since gained widespread adoption and is currently supported by 96% of all browser instances [can-i-use-2022]. Additionally, it is an official \\acW3C standard [w3c-standard], and is natively supported on the web. An overview of WebAssembly is given in Figure 1 ###reference_###.\nWebAssembly is a low-level bytecode language that runs on a stack-based \\acVM.\nMore specifically, instructions push and pop operands to the evaluation stack.\nThis architecture does not use registers; instead, values are stored in global variables that are accessible throughout the entire module or in local variables that are scoped to the current function.\nThe \\acVM manages the evaluation stack, global variables, and local variables.\nHost Environment.\nWebAssembly modules run within a host environment, which provides the necessary functionality for the module to perform actions such as I/O or network access. In a browser, the host environment is provided by the JavaScript engine, such as V8 or SpiderMonkey. WebAssembly exports can be wrapped in JavaScript functions using the WebAssembly JavaScript API [wasm-js-api], allowing them to be called from JavaScript code. Similarly, WebAssembly code can import and call JavaScript functions. Other host environments for WebAssembly include server-side environments like Node.js [node-js-2022] and stand-alone \\acVMs with accompanying APIs. For instance, the \\acWASI [wasi] allows WebAssembly modules to access the file system.\nModule.\nWebAssembly modules serve as the fundamental building blocks for deployment, loading, and compilation.\nA module contains definitions for types, functions, tables, memories, and globals. In addition, a module can declare imports and exports, as well as provide initialization through data and element segments or a start function.\nCompilation.\nLanguages like C, C++, and Rust can be compiled into WebAssembly since it is designed as a compilation target. Toolchains like Emscripten [emscripten-page] or wasm-pack [wasm-pack-2022] can be used to compile these languages to WebAssembly. The resulting binary is in the wasm binary format, but can also be represented in the equivalent human-readable text format called wat. A module corresponds to one file. The \\acWABT [wabt] provides tools for converting between wasm and wat representations, as well as for de-compilation and validation of WebAssembly binaries.\nUse Cases.\nWebAssembly has been adopted for various applications on the web due to its near-native execution performance, such as data compression, game engines, and natural language processing. However, the usage of WebAssembly is not only limited to the web. It is also being extended to desktop applications [moller-2018-technical], mobile devices [pop-2021-secure], cloud computing [2022-fastlydocs], IoT [liu-2021-aerogel, makitalo-2021-wasm], and embedded devices [scheidl-2020-valent]."
+ },
+ {
+ "section_id": "2.2.1",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.1 Security",
+ "text": "Environment.\nWebAssembly modules run in a sandboxed environment which uses fault isolation techniques to separate it from the host environment.\nAs a result of this, modules have to go through APIs to access external resources.\nFor instance, modules that run in the web browser must use JavaScript APIs to interact with the \\acDOM.\nSimilarly, stand-alone runtimes must use APIs, like \\acWASI, to access system resources like files.\nIn addition to this, modules must adhere to the security policies implemented by its host environment, such as the \\acSOP [same-origin-policy] enforced by web browsers, which restricts the flow of information between web pages from different origins.\nMemory.\nUnlike native binaries, which have access to the entire memory space allocated to the process, WebAssembly modules only have access to a contiguous region of memory known as linear memory.\nThis memory is untyped and byte-addressable, and its size is determined by the data present in the binary.\nThe size of linear memory is a multiple of a WebAssembly page, each being 64 KiB in size.\nWhen a WebAssembly module is instantiated, it uses the appropriate API call to allocate the memory that is needed for its execution.\nThe host environment then creates a managed buffer, typically an ArrayBuffer, to store the linear memory.\nThis means that the WebAssembly module accesses the physical memory indirectly through the managed buffer, which ensures that it can only read and write data within a limited area of the memory.\nControl Flow Integrity.\nWebAssembly enforces structured control flow, organizing instructions into well-nested blocks within functions. It restricts branches to the end of surrounding blocks or within the current function, with multi-way branches targeting only pre-defined blocks. This prevents unrestricted gotos or executing data as bytecode, eliminating attacks like shellcode injection or misuse of indirect jumps. Additionally, execution semantics ensure safety for direct function calls through explicit indexing and protected returns with a call stack. Indirect function calls undergo runtime checks for type signatures, establishing coarse-grained, type-based control-flow integrity. Additionally, the LLVM compiler infrastructure has been adapted to include a fine-grained control flow integrity feature, specifically designed to support WebAssembly [wasm-security-2022]."
+ },
+ {
+ "section_id": "2.2.2",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.2 Vulnerabilities",
+ "text": "Inherent vulnerabilities in the source code can lead to subsequent vulnerabilities in WebAssembly modules [lehmann-2020-everythingoldis].\nSpecifically, buffer overflows in memory-unsafe languages like C and C++ can overwrite constant data or the heap in WebAssembly modules.\nDespite WebAssembly\u2019s sandboxing, these vulnerabilities allow malicious script injection into the module\u2019s data section, which is accessible via JavaScript APIs.\nAn example of this is the Emscripten API [emstricpten-api], which allows developers to access data from WebAssembly modules and inject it into the \\acDOM, which can lead to \\acXSS attacks [mcfadden-2018-securitychasmswasm].\nNotably, two-thirds of WebAssembly binaries are compiled from memory-unsafe languages [hilbig-2021-empiricalstudyreal], and these attacks have been shown to be practical in real-world scenarios [lehmann-2020-everythingoldis].\nFor instance, Fastly, a cloud platform that offers edge computing services, experienced a 45-minute disruption on June 8th, 2021, when a WebAssembly binary with a vulnerability was deployed [fastly-2022]."
+ },
+ {
+ "section_id": "2.2.3",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.3 Smart Contracts",
+ "text": "Smart contracts are computer programs that are stored on a blockchain, designed to automatically execute once pretermined conditions are met, eliminating the need for intermediaries.\nInitially proposed by Nick Szabo in 1994 [szabo-1994-smart], long before the advent of Bitcoin, they have since gained widespread popularity alongside the rise of blockchain technology and cryptocurrencies.\nThe inherent properties of blockchain, such as transparency, security, and immutability, make smart contracts particularly appealing for cryptocurrency transactions.\nThis ensures that once the terms of the contract are agreed upon and coded into the blockchain, they can be executed without the possibility of fraud or third-party interference.\nSmart contracts can facilitate a variety of transactions, from the transfer of cryptocurrency between parties to the automation of complex processes in finance, real estate, and beyond.\nDue to its near-native performance, WebAssembly has been adopted by blockchain platforms, such as EOSIO [eosio-wasm-2022] and NEAR [near-2022-whatisasmart], as their smart contract runtime.\nEthereum has included WebAssembly in the roadmap for Ethereum 2.0, positioning it as the successor to the \\acEVM [ewasm-ethereum-2022].\nHowever, as with any technology, smart contracts are not without their challenges and vulnerabilities. The immutable nature of blockchain means that once a smart contract is deployed, it cannot be modified, making the correction of vulnerabilities in its code challenging. Several incidents have highlighted the potential financial and security risks associated with vulnerabilities in WebAssembly smart contracts. For instance, random number generation vulnerabilities led to the theft of approximately 170,000 EOS tokens [number-generator-2022]. Similarly, the fake EOS transfer vulnerability in the EOSCast smart contract has led to the theft of approximately 60,000 EOS tokens [huang-2020-eosfuzzerfuzzingeosio]. The forged transfer notification vulnerability in EOSBet has resulted in the loss of 140,000 EOS tokens [huang-2020-eosfuzzerfuzzingeosio]. Based on the average price of EOS tokens at the time of the attacks, the combined financial impact of these three vulnerabilities amounted to roughly $1.9 million. Additionally, around 25% of WebAssembly smart contracts have been found to be vulnerable [he-2021-eosafesecurityanalysis]."
+ },
+ {
+ "section_id": "2.2.4",
+ "parent_section_id": "2.2",
+ "section_name": "2.2.4 Cryptojacking",
+ "text": "Cryptojacking, also known as drive-by mining, involves using a website visitor\u2019s hardware resources for mining cryptocurrencies without their consent. Previously, cryptojacking was implemented using JavaScript. However, in recent years WebAssembly has been utilized due to its computational efficiency. The year after WebAssembly was released, there was a 459% increase in cryptojacking [cyberthreat-2018]. The following year, researchers found that over 50% of all sites using WebAssembly were using it for cryptojacking [musch-2019-newkidweb]. To counter this trend, researchers developed several static and dynamic detection methods for identifying WebAssembly-based cryptojacking.\nWhile there are theories suggesting that WebAssembly can be used for other malicious purposes, like tech support scams, browser exploits, and script-based keyloggers [darkside-of-wasm], evidence of such misuse in real-world scenarios has not been documented. As a result, there are no analysis techniques for detecting such malicious WebAssembly binaries. Consequently, discussions about malicious WebAssembly binaries in this paper mainly refer to crypto mining binaries."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": "This section discusses related work. Specifically, related studies are presented and the differences between those studies and our paper are discussed.\nIn a similar vein to this paper, Kim et al. [kim-2022-avengersassemblesurvey] survey the various techniques and methods for WebAssembly binary security. However, their focus is on general security techniques for WebAssembly, while our paper focuses specifically on analysis techniques for WebAssembly. We both discuss cryptojacking detection and vulnerability detection for WebAssembly, but we go further by also examining vulnerability analysis for WebAssembly smart contracts. Additionally, we use different classification systems and performance metrics.\nTekiner et al. [tekiner-2021-sok] focus on surveying cryptojacking detection techniques by strictly evaluating and comparing state-of-the-art methods. In contrast, our paper examines analysis techniques for WebAssembly, including cryptojacking detection, vulnerability analysis for WebAssembly binaries, and vulnerability analysis for WebAssembly smart contracts. We also use different classification systems and performance metrics.\nRomano et al. [romano-2021-empiricalstudybugs] investigate bugs in WebAssembly compilers, specifically examining the Emscripten [emscripten-page], AssemblyScript [assemblyscript], and WebAssembly-Bindgen [wasm-bindgen] compilers. They discover bugs in the Emscripten compiler that could potentially cause significant security issues. Our work, on the other hand, focuses on security in WebAssembly binaries using analysis techniques, rather than examining the security of the compilers themselves."
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Methodology",
+ "text": "This section outlines the methodology used to conduct the systematic review. The literature review aims to identify, evaluate, and synthesize the findings from previous studies on vulnerability analysis, malicious WebAssembly binaries, and smart contracts within the WebAssembly context."
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Search Strategy",
+ "text": "The primary sources for the literature search were Google Scholar and Scopus. The search terms used were a combination of keywords related to WebAssembly and its security aspects. These included \"WebAssembly\", \"WebAssembly security\", \"WebAssembly vulnerability analysis\", \"malicious WebAssembly binaries\", \"cryptojacking\", and \"WebAssembly smart contracts\", as well as their synonyms and related terms. Boolean operators (AND, OR) were used to refine the search queries, aiming to capture a broad spectrum of relevant research.\nThe search was actively conducted from August 2022 to December 2022.\nGiven the emerging nature of WebAssembly and its security landscape, we did not apply any publication date restrictions in our search criteria.\nThis approach allowed us to include all relevant studies, from the inception of WebAssembly to the latest advancements, ensuring our review reflects the complete historical and contemporary context of WebAssembly security research."
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Selection Process",
+ "text": "The selection process was designed to include studies that had developed analysis methods and tools specifically for WebAssembly. Given the novelty of the field, all studies implementing such techniques were considered. However, exclusions were made for papers not directly related to vulnerability analysis, malicious WebAssembly binaries, or smart contracts. Furthermore, only peer-reviewed journal articles were included, ensuring the credibility and reliability of the results.\nAn additional inclusion criterion was the application of the proposed analysis technique on at least ten samples. This criterion was set to ensure that included studies had their methods tested adequately, providing a measure of reliability and applicability of the findings. The sample size for each method is presented in the following sections aim to illustrate the extent to which each technique has been tested and validated."
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "Data Extraction and Analysis",
+ "text": "Data extraction was performed on the selected papers, focusing on implementation details, the application domain (vulnerability analysis, detection of malicious binaries, or smart contracts), sample size, and the performance of the methods. The papers used different metrics for evaluating the performance of their methods, so we converted their results into a standardized set of metrics to have a basis for comparison.\nFor evaluating the performance of the analysis techniques we opted to use precision, recall, and F-scores. Precision measures the proportion of retrieved items that are relevant, while recall measures the proportion of relevant items that are retrieved. A high number of false positives will decrease the precision, while a high number of false negatives will decrease the recall. The F score is the harmonic mean of precision and recall and provides a way to combine these two metrics into a single value.\nThese metrics are mathematically defined as:\nThese metrics are used instead of accuracy because they are better suited for evaluating the performance of analysis techniques in the presence of imbalanced datasets, which has been common in the literature. In addition to these metrics, the performance of static-based methods has been evaluated using detection time, while the performance of dynamic-based methods has been evaluated using runtime overhead. These metrics provide a way to compare the different analysis techniques and assess their relative strengths and weaknesses."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Analysis Techniques for WebAssembly",
+ "text": ""
+ }
+ ],
+ "appendix": [],
+ "tables": {},
+ "image_paths": {
+ "1": {
+ "figure_path": "2401.05943v2_figure_1.png",
+ "caption": "Figure 1: WebAssembly serves as the intermediate bytecode bridging the gap between multiple source languages and host environments. The host environments compile the WebAssembly binaries into native code for the specific hardware architecture.",
+ "url": "http://arxiv.org/html/2401.05943v2/x3.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2401.05943v2"
+ }
20240322/2401.11170v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240322/2402.00631v2.json ADDED
@@ -0,0 +1,220 @@
+ {
+ "title": "Beyond Inserting: Learning Identity Embedding for Semantic-Fidelity Personalized Diffusion Generation",
+ "abstract": "Advanced diffusion-based Text-to-Image (T2I) models, such as the Stable Diffusion Model, have made significant progress in generating diverse and high-quality images using text prompts alone. However, when non-famous users require personalized image generation for their identities (IDs), the T2I models fail to accurately generate their ID-related images. The main problem is that pre-trained T2I models do not learn the mapping between the new ID prompts and their corresponding visual content. The previous methods either failed to accurately fit the face region or lost the interactive generative ability with other existing concepts in T2I models. In other words, they are unable to generate T2I-aligned and semantic-fidelity images for the given prompts with other concepts such as scenes (\u201cEiffel Tower\u201d), actions (\u201cholding a basketball\u201d), and facial attributes (\u201ceyes closed\u201d). In this paper, we focus on inserting accurate and interactive ID embedding into the Stable Diffusion Model for semantic-fidelity personalized generation. We address this challenge from two perspectives: face-wise region fitting and semantic-fidelity token optimization. Specifically, we first visualize the attention overfit problem and propose a face-wise attention loss to fit the face region instead of entangling ID-unrelated information, such as face layout and background. This key trick significantly enhances the ID accuracy and interactive generative ability with other existing concepts. Then, we optimize one ID representation as multiple per-stage tokens where each token contains two disentangled features. This expansion of the textual conditioning space improves semantic-fidelity control. Extensive experiments validate that our results exhibit superior ID accuracy, text-based manipulation ability, and generalization compared to previous methods.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Recently, Text-to-Image (T2I) models, such as the Stable Diffusion Model [4 ###reference_b4###], have demonstrated an impressive ability to generate diverse, high-quality, and semantic-fidelity images using text prompts alone, thanks to image-aligned language encoders [5 ###reference_b5###] and diffusion-based generative models [6 ###reference_b6###, 7 ###reference_b7###]. However, the challenge of personalized generation still remains, because the accurate person-specific face manifold can not be represented by text tokens, especially for the non-famous users whose data are not included in the training dataset. In this paper, we focus on learning the accurate identity embedding for semantic-fidelity personalized diffusion-based generation using only one face image.\nThe previous methods for this task have two problems that need to be addressed: (1) Attention Overfit: Their fine-tuning strategies [8 ###reference_b8###, 9 ###reference_b9###], such as Texural Inversion [1 ###reference_b1###] and ProSpect [2 ###reference_b2###], tend to fit the whole target image rather than the ID-related face region, which entangle face layout and background information into the ID embedding. This results in the low ID accuracy and the difficulty of generating other existing concepts in the given prompt, such as ID-unrelated scenes (e.g., \u201cEiffel Tower\u201d), ID-related facial attributes (e.g., expressions and age), and actions (e.g., \u201cholding a basketball\u201d). Particularly for actions, it is more challenging to generate prompt-fidelity human motions and human-object interactions, which can be shown in Fig. 1 ###reference_###. (2) Limited Semantic-Fidelity: Their ID embedding methods lack the semantic-fidelity representations for facial attributes, which results in that human faces are treated as objects without non-rigid and diverse deformations. Although Celeb Basis [3 ###reference_b3###] can achieve an accurate ID mapping, it is unable to manipulate the facial attributes of the target image, such as expressions (e.g., \u201ceyes closed\u201d in Fig. 1 ###reference_###).\nTo address these problems, we propose our identity embedding method from two perspectives: (1) Face-Wise Region Fit: We first visualize the attention overfit problem of the previous methods from the attention feature activation maps and then propose a face-wise attention loss to fit the face region instead of the whole target image. This key trick can improve the ID accuracy and interactive generative ability with the existing concepts in the original Stable Diffusion Model. (2) Semantic-Fidelity Token Optimization: We optimize one ID representation as several per-stage tokens, and each token consists of two disentangled features. This approach expands the textual conditioning space and allows for semantic-fidelity control ability. 
Our extensive experiments validate that our method achieves higher accuracy in ID embedding and is able to produce a wider range of scenes, facial attributes, and actions compared to previous methods.\nTo summarize, the contributions of our approach are:\nWe visualize attention overfit problem of the previous methods, and propose a face-wise attention loss for improving the ID embedding accuracy and interactive generative ability with the existing concepts in the original Stable Diffusion Model.\nFor semantic-fidelity generation, we optimize one ID representation as several per-stage tokens with disentangled features, which expands the textual conditioning space of the diffusion model with control ability for various scenes, facial attributes, and actions.\nExtensive experiments validate our advantages in ID accuracy and manipulation ability over previous methods.\nOur method does not rely on any prior facial knowledge, which has the potential to be applied to other categories.\n###figure_1###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "II Related Work",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "II-A Text-Based Image Synthesis and Manipulation",
+ "text": "Previous models such as GAN [10 ###reference_b10###, 11 ###reference_b11###, 12 ###reference_b12###, 13 ###reference_b13###, 14 ###reference_b14###, 15 ###reference_b15###, 16 ###reference_b16###, 17 ###reference_b17###, 18 ###reference_b18###, 19 ###reference_b19###], VAE [20 ###reference_b20###, 21 ###reference_b21###, 22 ###reference_b22###, 23 ###reference_b23###], Autoregressive [24 ###reference_b24###, 25 ###reference_b25###], Flow [26 ###reference_b26###, 27 ###reference_b27###] were adopted to model the dataset distribution, and then synthesize new realistic images through sampling from the modeled distribution. Based on these, text-driven image manipulation [28 ###reference_b28###, 29 ###reference_b29###, 30 ###reference_b30###] has achieved significant progress using GANs by combining text representations such as CLIP [5 ###reference_b5###]. These methods work well on structured scenarios (e.g. human face editing), but their performance in fine-grained multi-modal alignment is not very satisfactory. Recent advanced diffusion models [6 ###reference_b6###, 7 ###reference_b7###] have shown excellent diversity and fidelity in text-to-image synthesis [31 ###reference_b31###, 32 ###reference_b32###, 24 ###reference_b24###, 4 ###reference_b4###, 33 ###reference_b33###, 34 ###reference_b34###]. Conditioned on the text embedding of the text encoder [5 ###reference_b5###], these diffusion-based models are optimized by a simple denoising loss and can generate a new image by sampling Gaussian noise and a text prompt. Thanks to the powerful capabilities of diffusion in T2I generation, works [35 ###reference_b35###, 36 ###reference_b36###, 37 ###reference_b37###] achieve state-of-the-art text based image editing quality over diverse datasets, often surpassing GANs. Although most of these approaches enable various global or local editing of an input image, all of them have difficulties in generating novel concepts [38 ###reference_b38###] or controlling the identity of generated objects [1 ###reference_b1###]. Existing methods either directly blended the latent code of objects [39 ###reference_b39###, 40 ###reference_b40###] to the generated background, or failed to understand the scenes correctly [41 ###reference_b41###], which results in the obvious artifacts. To further solve this problem, some work [42 ###reference_b42###, 43 ###reference_b43###, 44 ###reference_b44###, 45 ###reference_b45###] adopted attention-based methods to manipulate target objects, but fail to balance the trade-off between content diversity and identity accuracy."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "II-B Personalized Generation of Diffusion-Based T2I Models",
+ "text": "Using images from the new concepts for fine-tuning can obtain a personalized model, which can insert new concepts into the original model and synthesize concept-specific new scenes, appearances, and actions. Inspired by the GAN Inversion [46 ###reference_b46###], recent diffusion-based personalized generation works can be divided into three categories: (1) Fine-Tuning T2I model: DreamBooth [9 ###reference_b9###] fine-tunes all weight of the T2I model on a set of images with the same ID and marks it as the specific token. (2) Token Optimization: Textual Inversion [1 ###reference_b1###], ProSpect [2 ###reference_b2###], and Celeb Basis [3 ###reference_b3###] optimize the text embedding of special tokens to map the specific ID into the T2I model, where the T2I model is fixed in the optimization process. (3) Tuning Free: ELITE [47 ###reference_b47###] learning an encoder to customize a visual concept provided by the user without further fine-tuning. BootPIG [48 ###reference_b48###] follows a bootstraping strategy by utilizing a pre-trained U-Net model to steer the personalization generation. Except for those, Token Optimization and Fine-Tuning are combined to manipulate multi-concept interactions [8 ###reference_b8###] or saving fine-tuning time and parameter amount [49 ###reference_b49###, 50 ###reference_b50###, 51 ###reference_b51###, 52 ###reference_b52###].\nID Embedding for Faces. Previous methods [53 ###reference_b53###, 54 ###reference_b54###] try to train an inversion encoder for face embedding, but face ID-oriented mapping is difficult to be obtained from a naively optimized encoder. Moreover, fine-tuning the T2I model on large-scale images often causes concept forgetting. For this, Celeb Basis [3 ###reference_b3###] adopts a pre-trained face recognition model and a face ID basis to obtain an ID representation for one single face image, and Face0 [55 ###reference_b55###] learned to project the embeddings of recognition models to the context space of Stable Diffusion. Except for ID representation, FaceStudio [56 ###reference_b56###] deployed a CLIP vision encoder [5 ###reference_b5###] to extract the structure features. InstantID [57 ###reference_b57###] handled image generation in various styles by designing a learnable IdentityNet to grasp strong semantics. However, introducing too strong face prior makes it difficult to manipulate diverse facial attributes and fails to generalize to other concept embedding. FastComposer [58 ###reference_b58###] used a delayed subject conditioning strategy to avoid subject overfitting, but they only focus on faces and fail to interact with other objects such as \u201csofa\u201d as shown in Fig. 9 ###reference_###. While PhotoMaker [59 ###reference_b59###] proposed an ID-oriented dataset that includes diverse scenarios and fine-tuning part of the Transformer [60 ###reference_b60###] layers in the image encoder to mitigate contextual information loss. Nevertheless, the training of Transfromer will sacrifice the compatibility with existing pretrained community models."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "III Method",
+ "text": "Embedding one new identity (ID) into the Stable Diffusion Model for personalized generation using only one single face image has three technical requirements: accuracy, interactivity, and semantic-fidelity. Our learned ID embedding focuses on the face region and adopts disentangled token representation, which has flexible face spatial layout, interactive generation ability with existing concepts (e.g., generating interaction motion with other objects), and fine-grained manipulation ability (e.g., editing the facial expressions). This means that our method improves both ID accuracy and manipulation ability. As shown in Fig. 2 ###reference_###, we propose our ID embedding pipeline from two key perspectives: (1) Face-Wise Attention Loss: Towards the improvements in ID accuracy and interactive generative ability with existing concepts in the original model, we propose a face-wise attention loss in Sec. III-B ###reference_###. (2) Semantic-Fidelity Token Optimization: For diverse manipulation, we optimize one ID representation as several per-stage tokens, and each token consists of two disentangled embeddings, which can be seen in Sec. III-C ###reference_###. In the following sections, we first give an introduction of the pre-trained Stable Diffusion Model [4 ###reference_b4###], and we then provide the details of our method."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "III-A Preliminary",
+ "text": "Diffusion-Based T2I Generation. Our utilized Stable Diffusion Model [4 ###reference_b4###] consists of a CLIP text encoder [5 ###reference_b5###], an AutoVAE [61 ###reference_b61###] and a latent U-Net [62 ###reference_b62###] module. Given an image ( and represent the size of target image), the VAE encoder maps it into a lower dimensional latent space as followed by a corresponding decoder to map the latent vectors back as . The and are the dimensions of latent tensor . Given any user provided prompts , the tokenizer of the CLIP text encoder divides and encodes into integer tokens. Correspondingly, by looking up the dictionary, a word embedding group can be obtained. Then, the CLIP text transformers encode to generate text condition vectors , which serve as a condition to guide the training of the latent U-Net denoiser :\nwhere denotes for the unscaled noise and is the timestep. is the latent variable of a forward Markov chain , where is a hyper-parameter that modulates the quantity of noise added. Given a latent noise vector in the timestep , the model learns to denoise it to . During inference, a random Gaussian latent vector is iteratively denoised to .\n###figure_2### Cross-Attention for Text Condition. As shown in the upper block of Fig. 3 ###reference_###, the text prompt is first tokenized to a word embedding group , and then encoded by the text transformers to generate text condition . Given the latent image features , the cross attention operation updates the latent features as:\nwhere , , and map the inputs to Query, Key, and Value features, respectively. The is the output dimension of Key and Query features.\nPrevious work has shown that the CLIP text embedding space is expressive enough to capture image semantics [1 ###reference_b1###, 2 ###reference_b2###]. Specifically, a placeholder string, \u201c\u201d, is designated in the prompt to represent the identity-related feature we wish to learn. During the word embedding process, the vector associated with the word \u201c\u201d is replaced by the learned ID embedding . Thus, we can combine \u201c\u201d with other words to achieve personalized creation. In this work, we focus on learning accurate and interactive ID embedding ."
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "III-B Face-Wise Attention Loss",
+ "text": "We first analyze and visualize the attention overfit problem of previous methods. Then, we present an accessible prior from the Stable Diffusion Model, instead of the face prior from other models such as face recognition models, to improve both the embedding accuracy and interactive generative ability at the same time. These are our motivations to propose our Face-Wise Attention Loss. Finally, we present the details of the loss implementations.\n###figure_3### Image Fit vs. Face-Region Fit. Previous methods rely on learning from multiple images of a target object to grasp relevant features, such as Textual Inversion [1 ###reference_b1###] and DreamBooth [9 ###reference_b9###]. However, when only a single image is available, they are prone to fitting to the whole target image (including ID-unrelated face layout and background information), and the learned embedding tends to influence regions beyond the face region during the cross-attention stages. As a result, they lack the interactive generative ability with the existing concepts in the original model [50 ###reference_b50###]. In other words, during the inference, the generated results from the personalized model may not be consistent to the text prompts. For example, as shown in Fig. 1 ###reference_###, the given prompt is \u201ca is enjoying a cup of latte\u201d, but the methods with attention overfit problem fail to generate the \u201ccup\u201d content. The same problem can also be seen in Fig. 7 ###reference_###, which given some facial attributes such as \u201cold\u201d in the prompt, the diffusion-based generation process just fails. Our ID embedding optimization can focus on ID-related face regions and neglect the unrelated background features, which can simultaneously improve the ID accuracy and interactive generative ability.\nMake Best of Stable Diffusion Model. Multiple target images are necessary for previous methods to acquire concept-related embedding. These images allow users to use text prompts to manipulate different poses, appearances, and contextual variations of objects. One target image fails to achieve this generalization. However, Stable Diffusion Model has learnt a strong general concept prior for various sub-categories. For example, different human identities belong to the general concept \u201cperson\u201d, and different dog categories such as Corgi and Chihuahua belong to the general concept \u201cdog\u201d. Therefore, it is reasonable to adopt this prior knowledge to achieve one-shot learning. To meet our more higher requirements, we aim to have flexibility to manipulate the ID-specific regions of final images. In other words, when we want to generate images corresponding to \u201ca photo of is playing guitar\u201d, only handling portrait or face image generation is not enough for this prompt. Therefore, we adopt \u201cthe face of person\u201d as our general concept prior for ID embedding, because when provided with the prompt \u201ca photo of the face of person\u201d, Stable Diffusion Model can generate a face image of a person with a randomly assigned identity and constrain the region where the generated person appears in the final image.\nSpecifically, we propose to use a reference prompt that remains consistent with the general concept of different IDs, which replaces the placeholder word (\u201c\u201d) with \u201cperson\u201d in prompts (i.e., \u201ca photo of the face of person\u201d). 
Then, we use this attention map derived from as a constraint to restrict the attention corresponding to the placeholder word (\u201c\u201d) in the target prompt \u201ca photo of the face of \u201d. This approach allows the ID embedding to focus on the face region associated with the target ID while maintaining the coherence of the general concept. Specifically, we first embed the reference prompt and target prompt as word embedding groups and . The ID embedding in is fed into a self-attention module to obtain per-stage token embeddings , which will be introduced in the subsequent section. Then, we adopt text encoder transformers to obtain their corresponding key () and value () features and . Then, each of the K features are send to the cross-attention module to calculate the attention map and with latent image features respectively. The is constrained by the within the corresponding representation of the concept as follows:\nThe detailed Face-Wise Attention Loss computation pipeline is depicted in Algorithm 1 ###reference_###."
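A minimal sketch of this constraint in PyTorch follows; the map shapes and the plain MSE form are assumptions based on the description above, with the reference map detached since it only acts as a fixed target.

```python
# Face-wise attention loss: the attention map of the learned "V*" token is
# pulled toward the map the frozen model produces for "person" in the same slot.
import torch
import torch.nn.functional as F

def face_wise_attention_loss(attn_target, attn_reference):
    """attn_*: cross-attention maps (h*w,) for the token of interest,
    extracted at the same U-Net layer and diffusion step."""
    return F.mse_loss(attn_target, attn_reference.detach())

hw = 16 * 16
attn_vstar = torch.rand(hw, requires_grad=True)  # map for the "V*" token
attn_person = torch.rand(hw)                     # map for the word "person"
loss = face_wise_attention_loss(attn_vstar, attn_person)
loss.backward()                                  # gradients flow only to "V*"
```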
+ },
+ {
+ "section_id": "3.3",
+ "parent_section_id": "3",
+ "section_name": "III-C Semantic-Fidelity Token Optimization",
+ "text": "We first present the disadvantages of previous methods from the semantic-fidelity control. Then, we introduce our optimization strategy, including the motivation for feature disentanglement and the details of obtaining feature pairs. Finally, we present our training loss for optimization.\nLack of Semantic-Fidelity Control. This problem can be found from two perspectives: (1) Stable Diffusion Model: We observe that even though the face data of celebrities has been included in the training dataset of Stable Diffusion Model, it still fails to achieve perfect semantic-fidelity control for these IDs. For example, \u201ca photo of an old Obama\u201d cannot generate the corresponding images. (2) Previous Personalized Methods: Methods like Celeb Basis [3 ###reference_b3###], Textural Inversion [1 ###reference_b1###] and InstantID [57 ###reference_b57###] mainly emphasize how to preserve the characteristics of the person and achieve global control over the generated images through text modifications. Although these methods are able to manipulate scenes or styles, they struggle to control fine-grained facial attributes of learned IDs, such as age and expressions. Prospect [2 ###reference_b2###] represents an image as a collection of textual token embeddings which could offer better disentanglement and controllability in editing images. However, When it comes to the generation of images with controllable facial attributes, as shown in Fig. 7 ###reference_###, it fails to generate examples like \u201can old person\u201d. We address this challenge by disentangling the and features, as explained in detail in the following section.\nDisentanglement of and Features. The text condition features (, ) will be fed into cross-attention layers of U-Nets for conditioning the generated images. Previous methods [8 ###reference_b8###, 50 ###reference_b50###] differentiated the and features calculated from the same ID embedding as position information and object texture features, which is not appropriate for manipulating facial attributes. Therefore, to further investigate the different effects of , features for our task, we disentangle the ID embedding as per-stage token embedding groups , and then visualize the effects of these features in the image generation process. As shown in Fig. 4 ###reference_###, we found that the embeddings are more ID-related, while the embeddings are more related to environment factors such as lighting, mouth open and face texture. The disentangled optimization of ID embedding in and can further improve the ID accuracy and interactive generative ability with other concepts. As shown in Fig. 3 ###reference_###, we illustrate the different implementations.\nHow to Obtain K-V Feature Pairs? Specifically, the input prompt is firstly fed into the CLIP Tokenizer, which generates the textual token embeddings . Here, the ID embedding related to \u201c\u201d is a vector with the size of . As depicted in Fig. 5 ###reference_###, the is then fed to a trainable Self-Attention [60 ###reference_b60###] module to create embedding . Each of the newly generated ID embedding will replace the original ID embedding to form five groups of textual embeddings, and then these embedding groups will be multiplied by and to obtain and features. The Self-Attention module consists of two self-attention layers with one feed-forward layer. We take each group of textual embeddings as a different condition. 
We evenly divide the 1000 diffusion steps into five stages, each stage corresponds to a unique pair of textual embeddings. Finally, only the Self-Attention module is trainable and the final is obtained by optimizing the diffusion denoising loss as follows:\nwhere is the latent code of target image. The is the unscaled noise sample, and the is the U-Net module in diffusion model. The is the optimization step of the diffusion process, and the is U-Net output of different steps.\n###figure_4### Training Loss. Our goal is seamlessly embedding one specific ID into the space of Stable Diffusion Model, which have to achieve accurate ID mapping and fully use the prior from the Stable Diffusion Model to manipulate scenes, actions and facial attributes. Thus, the total optimization objective can be formulated as follows:\n###figure_5### ###figure_6###"
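The per-stage expansion can be sketched as follows. This is a minimal illustration: the embedding dimension, module hyper-parameters, and stage scheduling are placeholder assumptions consistent with the description above (five stages over 1000 steps, a small trainable self-attention expander).

```python
# One learned ID embedding is expanded into five per-stage embeddings; the
# 1000 diffusion steps are split evenly so each stage conditions 200 steps.
import torch
import torch.nn as nn

NUM_STAGES, DIM = 5, 768  # DIM assumed equal to the CLIP word-embedding size

expander = nn.TransformerEncoder(  # self-attention layers with feed-forward
    nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True),
    num_layers=2,
)
id_embedding = torch.randn(1, 1, DIM)            # the single "V*" vector
queries = id_embedding.repeat(1, NUM_STAGES, 1)  # seed one token per stage
per_stage = expander(queries)                    # (1, 5, 768) trainable output

def stage_for_timestep(t: int, total_steps: int = 1000) -> int:
    return min(t * NUM_STAGES // total_steps, NUM_STAGES - 1)

token = per_stage[0, stage_for_timestep(t=730)]  # embedding used at step 730
```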
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "IV Experiments",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "IV-A Experimental Settings",
+ "text": "Implementation Details. We present our target T2I model, test data, training details, and inference recipe for reproductivity. (1) Target T2I Model: Unless otherwise specified, we utilize Stable Diffusion 1.4 [4 ###reference_b4###] with default hyper parameters as the pre-trained diffusion-based T2I model. We adopt a frozen CLIP model [33 ###reference_b33###] in the Stable Diffusion Model as the text encoder network. The texts are tokenized into start-token, end-token, and 75 non-text padding tokens. (2) Test Data: The test face images are the StyleGAN [63 ###reference_b63###] synthetic data and the images from the CelebA-HQ dataset [64 ###reference_b64###]. (3) Training Details: For our method, the time for fine-tuning every ID using only one face image is minutes ( epochs) on one NVIDIA TITAN RTX GPU. We adopt Adam optimizer and set its learning rate as 0.005. The is set to 0.003. Since we only rely on single face image to acquire its embedding, we adopt some image augmentation methods, including color jitter, horizontal flip with the probability of , and random scaling ranging in . (4) Inference Recipe: During sampling time, we employ a DDIM sampler [65 ###reference_b65###] with diffusion steps and the classifier-guidance [66 ###reference_b66###] with the guidance scale .\n###figure_7### Baseline Methods. Our task setting is using only one face image to embed the novel ID into the pre-trained Stable Diffusion Model. Thus, for fair comparisons, we only use a single image for all personalized generation methods, but using enough optimization time for different methods. We select six state-of-the-art works as baseline methods for comparisons from three perspectives: (1) Model Fine-Tuning: DreamBooth [9 ###reference_b9###] (learns a unique identifier and fine-tunes the diffusion model to learn from a set of images) and Custom Diffusion [8 ###reference_b8###] (retrieves images with similar captions of the target concept and optimizes the cross-attention module with a modifier token); (2) Token Optimization: Textual Inversion [1 ###reference_b1###] (learns a pseudo-word for a concept within a limited number of images for optimization), ProSpect [2 ###reference_b2###] (expands the textual conditioning space with several per-stage textual token embeddings), and Celeb Basis [3 ###reference_b3###] (builds a well-defined face basis module to constrict the face manifold); (3) Tuning Free: FastComposer [58 ###reference_b58###] (deploys a delayed subject conditioning strategy to achieve tuning-free image generation).\nMetrics. We evaluate all the methods from objective metrics, user study, parameter amount, and fine-tuning time. (1) Objective Metrics: We select Prompt (CLIP alignment score [33 ###reference_b33###] between text and image), ID (ID feature similarity score [67 ###reference_b67###]), and Detect (face detection rate [67 ###reference_b67###]). However, evaluating the ID without the essence of T2I generation (i.e., Prompt-Image alignment has the highest priority) is inappropriate, and we DISCUSS the reasons for this problem in Sec. IV-B ###reference_###. Thus, we propose a new metric for face personalized generation which is denoted as ID (P). Specifically, if the CLIP score of this image is lower than the threshold (set as 0.23), then the ID (P) score of this image is 0. The threshold is the average CLIP score of these images which get higher scores in user study. 
To distinguish these ID metrics, we denote *ID (F) and *Detect (F) for evaluating the images using \u201ca photo of the face of V*\u201d. (2) User Study: We select more than 20 volunteers and generate 200 images, to evaluate different methods from Prompt (U) (Prompt-Image alignment), ID (U) (ID accuracy), and Quality (image quality). (3) Parameter Amount: We compare the parameter amount from parameters to be learned (Train) and the total introduced parameters (Add). (4) Time: We evaluate the fine-tuning time of different methods to show efficiency performance.\n###figure_8### ###figure_9### ###figure_10###"
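The proposed ID (P) metric gates identity similarity on prompt alignment; a minimal sketch with made-up scores is given below.

```python
# ID (P): identity similarity is only credited when the image first passes
# the CLIP prompt-alignment threshold (0.23 in the paper).
def id_p_score(clip_score: float, id_similarity: float,
               threshold: float = 0.23) -> float:
    return id_similarity if clip_score >= threshold else 0.0

def mean_id_p(samples):  # samples: iterable of (clip_score, id_similarity)
    scores = [id_p_score(c, s) for c, s in samples]
    return sum(scores) / len(scores)

# A misaligned image (CLIP 0.18) contributes 0 regardless of its ID score.
print(mean_id_p([(0.30, 0.62), (0.18, 0.71)]))  # -> 0.31
```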
+ },
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "IV-B *DISCUSSION* for ID Similarity Evaluation.",
+ "text": "As shown in Fig. 6 ###reference_###, different from face image generation (i.e., using \u201ca photo of the face of V*\u201d) and editing, achieving Prompt-Image alignment has the highest priority in our task. We have to note this important issue from two perspectives: (1) Explanations for the Previous ID Similarity Metric: As shown in Tab. II ###reference_###, the reason for ID similarity metric less than 0.4, is due to differences in face region resolution. These T2I generated images require face cropping, scaling, and alignment. Consequently, the ID scores are lower than in previous face image generation methods. To fairly evaluate ID similarity under the setting of face photo generation, we conduct ID similarity evaluation using the same-resolution generated face images as the input images with metrics *ID (F) and *Detect (F), as shown in Tab. I ###reference_###. (2) Evaluating ID Considering Text-to-Image Alignment: ID evaluation ignoring the T2I alignment shows \u201cfake high\u201d ID scores. As shown in Fig. 1 ###reference_### and Tab. II ###reference_###, we observe that the previous methods failed to generate the images aligned with prompt such as \u201ca V* is enjoying a cup of latte\u201d and only generate face images due to attention overfitting, but they had the higher ID scores. This ID evaluation ignores pre-requisite of T2I alignment. Therefore, considering the essence of T2I generation, we have to propose ID (P) for fair comparisons.\n###figure_11###"
+ },
+ {
+ "section_id": "4.3",
+ "parent_section_id": "4",
+ "section_name": "IV-C Single ID Embedding and Manipulation",
+ "text": "We first utilize the same prompts to evaluate five different state-of-the-art methods and ours on single ID embedding and manipulation, as shown in Fig. 7 ###reference_###. We evaluate the performance from three different levels: facial attributes (e.g., age and hairstyle), actions (i.e., human motion and interactions with other objects), and combinations of facial attributes and actions. Textual Inversion [1 ###reference_b1###] and Prospect [2 ###reference_b2###] tend to overfit the input image, so they fail to interact with other concepts. Although DreamBooth [9 ###reference_b9###] and Custom Diffusion [8 ###reference_b8###] successfully generate the image of interaction of human and concept, the generated identities fail to maintain the ID consistency with the target images. Celeb Basis [3 ###reference_b3###] successfully generate the human-object interaction actions, but they fail to manipulate the facial attributes of target identities well. Additional results showcased in Fig. 8 ###reference_### further illustrate the diverse range of manipulations accomplished by our methods in terms of scene (stylization), facial attributes, and action representation within the context of single-person image generation."
+ },
+ {
+ "section_id": "4.4",
+ "parent_section_id": "4",
+ "section_name": "IV-D Quantitative Evaluation",
+ "text": "As shown in Tab. I ###reference_###, our method achieves the SOTA performance in the Prompt-Image alignment evaluation and ID (Face) similarity. Due to attention overfit, Textual Inversion [1 ###reference_b1###], Prospect [2 ###reference_b2###] show poor Prompt-Image alignment.\nSince that achieving Prompt-Image alignment has the highest priority, we propose a new metric ID (P), which requires the generated images have to achieve the task of semantic-fidelity, and then we calculate their ID scores. Our method achieves better ID (P) scores than the other methods and ours is excellent in Prompt-Image alignment evaluation. This improvement is from two reasons: (1) Attention Overfit Alleviation: our face-wise attention loss is able to alleviate the attention overfit problem of previous methods such as DreamBooth [9 ###reference_b9###], Prospect [2 ###reference_b2###], and Texutal Inversion [1 ###reference_b1###]. Our method can make the ID embedding focus on the face region, instead of the whole image. (2) Attribute-Aware Tokens: Compared to Celeb Basis [3 ###reference_b3###], our method does not introduce too much face prior and represents one ID as five per-stage tokens, which can balance the trade-off between ID accuracy and manipulation ability. Our expended textual conditioning space has a strong disentanglement and control ability of attributes (e.g., action-related objects and facial attributes) than Celeb Basis [3 ###reference_b3###].\n###figure_12###"
+ },
+ {
+ "section_id": "4.5",
+ "parent_section_id": "4",
+ "section_name": "IV-E Multi-ID Embedding and Manipulation",
+ "text": "As shown in Fig. 9 ###reference_### and Fig. 10 ###reference_###, we illustrate the circumstances where two IDs appear in the same scene and some interactive actions between them. Though Celeb Basis [3 ###reference_b3###] can achieve competitive prompt alignment as ours, the generated identity is less precise which leads to their poor identity similarity as shown in Tab. I ###reference_###. We hypothesis that in the absence of explicit regularization, the learned ID embedding may be sub-optimal, as they still can not disentangle the identity representation from the other latent factors. For instance, the results in Fig. 9 ###reference_### suggest that their learned ID embedding not only focuses on identity but also incorporates additional information, such as clothing (e.g., the consistent presence of a suited man). In the experiments compared with FastComposer [58 ###reference_b58###], The generated images by FastComposer predominantly feature the faces of the target IDs, occupying a significant portion of the images and it seems like that the characters are directly pasted into the picture, resulting in a disharmonious appearance. Besides, it is difficult for FastComposer to interact with other concepts (like \u201cpicnic\u201d and \u201cgarage\u201d) and generate the correct action (like \u201csitting\u201d, \u201cshaking\u201d, and \u201ccooking\u201d) because of the aforementioned semantic prior forgetting problem. As shown in Fig. 11 ###reference_###, we experiment on more complex scenarios in multi-ID generation, which showcases the high generation diversity and good interactive ability of our method.\n###figure_13###"
+ },
+ {
+ "section_id": "4.6",
+ "parent_section_id": "4",
+ "section_name": "IV-F User Study",
+ "text": "To make our results more convincing and incorporate a broader range of user perspectives, we further conduct a user study, which can be found in Tab. I ###reference_###. Our method obtains better preference than previous work among the participating users, including better Prompt-Image alignment, ID similarity to the target reference image, and image quality. This shows that our semantic-fidelity embedding can enable better interactive generation ability and is potential to exploit the powerful manipulation capabilities of the Stable Diffusion Model itself."
+ },
+ {
+ "section_id": "4.7",
+ "parent_section_id": "4",
+ "section_name": "IV-G Efficiency Evaluation",
+ "text": "As shown in Tab. I ###reference_###, we have advantages in introduced parameter amount and fine-tuning time. Celeb Basis [3 ###reference_b3###] introduces a basis module and a pre-trained face recognition model, but these are large optimization burdens and a too strong facial prior can disrupt the interaction between faces and other concepts. We utilize the prior from the T2I model itself, reducing the introduction of additional parameters and further enhancing the facial manipulation ability of T2I models."
+ },
+ {
+ "section_id": "4.8",
+ "parent_section_id": "4",
+ "section_name": "IV-H Ablation Study",
+ "text": "Different Effects of and Tokens. The semantic information of per-stage tokens is important for the interpretation of diffusion-based generation process, especially for the different effects of and tokens. As shown in Fig. 12 ###reference_### we add experiments of using the Per-Stage Token Optimization with previous Textual Inversion, which shows its fine-grained control ability, such as the manipulation of facial attributes. To further investigate this, as shown in Fig. 4 ###reference_###, we thoroughly explore and tokens from two perspectives: (1) Progressively Adding: We add different tokens to the conditioning information in ten steps. We found that the initial tokens influence more the layout of generation content (e.g., face region location, and poses), while the latter tokens effect more the ID-related details. (2) Progressively Substituting: We then substitute different and tokens of . We found that contribute to the vast majority of ID-related information, and the contribute more to textual details, such as environment lighting.\n###figure_14### ###figure_15### Attention Loss. We thoroughly investigate three options for face-wise attention loss. The option only regularizes on the token and the option regularizes the prompt-length tokens. As shown in Fig. 13 ###reference_###, option affects the other concept embeddings in the T2I model, which results in non-ID concepts cannot be generated, such as sunglasses. Although the option can reduce the influence of too much ID attention, the activation region of still disrupts regions beyond its scope, which can be seen in the corners of the feature activation map for . Our final adopted option is , which calculates the attention loss among the whole text attention maps generated by each token. This option prevents the learned token from overfitting to other regions and only focus on the face region. Drawn from Tab. III ###reference_###, as more tokens are token into the attention loss regularization, the prompt score rises. We think the reason lies in two perspective: (1) The regularization on the token ensures it to focus on face region and prevents it from disturb the other concepts; (2) The regularization applied to all other tokens serves as an additional penalty, preserving their ability to implicitly disentangle the token from the rest of the tokens. Our loss strategy only addresses the attention overfitting, improving the ID accuracy and interactivity with other concepts, but the manipulation capacity for the high text2image alignment and diversity still needs to be improved by us and other diffusion-based generative model researchers.\nThe Number of K-V Feature Pairs. As shown in Fig. 14 ###reference_###, we explore the influence of K-V pair numbers. When using only one pair of K-V, the learned ID-related tokens fail to maintain good ID accuracy and interact with other complex concepts and attributes. However, adopting too many K-V pairs (e.g., 10 pairs) fails to bring significant improvements of diversity or quality, and this is no doubt a huge computational burden. In our method, we select 5 K-V pairs, which balance the trade-off of representing capacity and computation. As shown in Tab. III ###reference_###, the Prompt and identity scores of setting 1 K-V with option are lower than 5 K-V with option and 10 K-V with option . While the 10 K-V with option shows the same prompt score compared to 5 K-V with option , it exhibits lower identity similarity."
106
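To make the adopted option #3 concrete, here is a minimal PyTorch sketch of a face-wise attention loss computed over the attention maps of every prompt token. All tensor names, shapes, and the exact penalty form are illustrative assumptions, not the authors' implementation.

```python
import torch

def face_wise_attention_loss(attn_maps: torch.Tensor,
                             face_mask: torch.Tensor,
                             vstar_idx: int) -> torch.Tensor:
    # attn_maps: (num_tokens, H, W) cross-attention maps, one per
    # prompt token, normalized to [0, 1]; face_mask: (H, W) binary
    # face-region mask; vstar_idx: index of the learned V* token.
    vstar = attn_maps[vstar_idx]
    # Pull the V* attention toward the face region.
    loss_vstar = ((vstar - face_mask) ** 2).mean()
    # Penalize every other token for attending inside the face
    # region, keeping non-ID concepts disentangled from V*.
    others = torch.cat([attn_maps[:vstar_idx], attn_maps[vstar_idx + 1:]])
    loss_others = (others * face_mask).mean()
    return loss_vstar + loss_others
```

Regularizing all tokens (not only V*) mirrors the reported effect that the extra penalty on the remaining tokens helps disentangle them from the ID token.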
+ },
+ {
+ "section_id": "4.9",
+ "parent_section_id": "4",
+ "section_name": "IV-I Generalization",
+ "text": "Embedding Other Objects. We further validate our methods on other objects. Compared to Celeb Basis [3 ###reference_b3###], our method does not introduce face prior from other models (e.g., a pre-trained face recognition model or basis module). As shown in Fig. 15 ###reference_###, we adopt animals (Bear, Cat, and Dog) and general objects (Car, Chair and Plushie) for experiments, which show the generalizability of our method.\nUsing Stable Diffusion XL. To validate the generalization to the latest version of Stable Diffusion Model, we select SDXL model [69 ###reference_b69###] stable-diffusion-xl-base-1.0 as the target model and the newly released methods using it for comparisons. As shown in Fig. 16 ###reference_###, we compare with the SOTA methods InstantID [57 ###reference_b57###], PhotoMaker [59 ###reference_b59###], and IP-Adapter-FaceID [68 ###reference_b68###]. InstantID [57 ###reference_b57###] can only generate the face photo and fails to manipulate other actions or facial attributes. Although PhotoMaker [59 ###reference_b59###] and IP-Adapter-FaceID [68 ###reference_b68###] could generate the target ID under different scenes and actions, they can not handle complex actions (e.g., \u201csit on a chair\u201d) and accurate facial attribute controlling. Additionally, IP-Adapter-FaceID [68 ###reference_b68###] even loses the identity information of target person when combined with facial attribute prompts. As shown in the shortcomings of other methods, we found that incorporating additional features into the SDXL model would compromise the semantic-fidelity ability of T2I models, resulting in generated images that are misaligned with the given prompts. In contrast, our approach focuses on learning interactive ID embeddings with diffusion prior itself, which would not disrupt the original semantic understanding capability of the adopted models."
112
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We propose two novel problem-orient techniques to enhance the accuracy and interactivity of the ID embeddings for semantic-fidelity personalized diffusion-based generation. We analyze the attention overfit problem and propose Face-Wise Attention Loss. This improves the ID accuracy and facilitates the effective interactions between this ID embedding and other concepts (e.g., scenes, facial attributes, and actions). Then, we optimize one ID embedding as multiple per-stage tokens, which further expands the textual conditioning space with semantic-fidelity control ability. Extensive experiments validate our better ID accuracy and manipulation ability than previous methods, and we thoroughly conduct ablation study to validate the effectiveness of our methods. Moreover, our embedding method does not rely on any prior facial knowledge, which is potential to be applied to other categories.\nEthical Statement. Our research endeavors are dedicated to addressing specific challenges within multi-modal generation with the overarching aim of advancing the technological landscape within our community. We staunchly oppose any misuse of our technology, such as the unauthorized use of their identity information. To mitigate such risks, we are actively developing watermarking techniques to prevent the misuse of Artificial Intelligence Generated Content."
118
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Quantitative evaluation between different SOTA methods and ours.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.22\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T1.4.4.5\" rowspan=\"2\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"4\" id=\"S4.T1.1.1.1\">\n<span class=\"ltx_text\" id=\"S4.T1.1.1.1.1\" style=\"font-size:80%;\">Objective Metrics</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S4.T1.2.2.2\">\n<span class=\"ltx_text\" id=\"S4.T1.2.2.2.1\" style=\"font-size:80%;\">User Study</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"2\" id=\"S4.T1.3.3.3\">\n<span class=\"ltx_text\" id=\"S4.T1.3.3.3.1\" style=\"font-size:80%;\">Params</span>\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T1.4.4.4\">\n<span class=\"ltx_text\" id=\"S4.T1.4.4.4.1\" style=\"font-size:80%;\">Time</span>\n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.23.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.1\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.1.1\" style=\"font-size:80%;\">Prompt</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.2\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.2.1\" style=\"font-size:80%;\">*ID (F)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.3\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.3.1\" style=\"font-size:80%;\">*Detect (F)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.4\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.4.1\" style=\"font-size:80%;\">ID (P)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.5\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.5.1\" style=\"font-size:80%;\">Prompt (U)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.6\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.6.1\" style=\"font-size:80%;\">ID (U)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.7\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.7.1\" style=\"font-size:80%;\">Quality</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.8\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.8.1\" style=\"font-size:80%;\">Train</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.22.23.1.9\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.9.1\" style=\"font-size:80%;\">Add</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"S4.T1.22.23.1.10\"><span class=\"ltx_text\" id=\"S4.T1.22.23.1.10.1\" style=\"font-size:80%;\">(min)</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.4\">\n<span class=\"ltx_text\" id=\"S4.T1.7.7.4.1\" style=\"font-size:80%;\">DreamBooth\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" 
id=\"S4.T1.7.7.4.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib9\" title=\"\">9</a><span class=\"ltx_text\" id=\"S4.T1.7.7.4.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.5\"><span class=\"ltx_text\" id=\"S4.T1.7.7.5.1\" style=\"font-size:80%;\">0.249</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.6\"><span class=\"ltx_text\" id=\"S4.T1.7.7.6.1\" style=\"font-size:80%;\">0.488</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.5.5.1\">\n<span class=\"ltx_text\" id=\"S4.T1.5.5.1.1\" style=\"font-size:80%;\">85.2</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.7\"><span class=\"ltx_text\" id=\"S4.T1.7.7.7.1\" style=\"font-size:80%;\">0.413</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.8\"><span class=\"ltx_text\" id=\"S4.T1.7.7.8.1\" style=\"font-size:80%;\">0.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.9\"><span class=\"ltx_text\" id=\"S4.T1.7.7.9.1\" style=\"font-size:80%;\">0.07</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.10\"><span class=\"ltx_text\" id=\"S4.T1.7.7.10.1\" style=\"font-size:80%;\">0.02</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.3\">\n<span class=\"ltx_text\" id=\"S4.T1.7.7.3.1\" style=\"font-size:80%;\">9.82</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.11\"><span class=\"ltx_text\" id=\"S4.T1.7.7.11.1\" style=\"font-size:80%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.7.7.12\"><span class=\"ltx_text\" id=\"S4.T1.7.7.12.1\" style=\"font-size:80%;\">16</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.4\">\n<span class=\"ltx_text\" id=\"S4.T1.10.10.4.1\" style=\"font-size:80%;\">Custom Diffusion\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.10.10.4.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib8\" title=\"\">8</a><span class=\"ltx_text\" id=\"S4.T1.10.10.4.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.5\"><span class=\"ltx_text\" id=\"S4.T1.10.10.5.1\" style=\"font-size:80%;\">0.252</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.6\"><span class=\"ltx_text\" id=\"S4.T1.10.10.6.1\" style=\"font-size:80%;\">0.492</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.8.8.1\">\n<span class=\"ltx_text\" id=\"S4.T1.8.8.1.1\" style=\"font-size:80%;\">84.9</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.7\"><span class=\"ltx_text\" id=\"S4.T1.10.10.7.1\" style=\"font-size:80%;\">0.369</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.8\"><span class=\"ltx_text\" id=\"S4.T1.10.10.8.1\" style=\"font-size:80%;\">0.02</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.9\"><span class=\"ltx_text\" id=\"S4.T1.10.10.9.1\" style=\"font-size:80%;\">0.16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.10\"><span class=\"ltx_text\" id=\"S4.T1.10.10.10.1\" style=\"font-size:80%;\">0.02</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.3\">\n<span class=\"ltx_text\" id=\"S4.T1.10.10.3.1\" 
style=\"font-size:80%;\">5.71</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.11\"><span class=\"ltx_text\" id=\"S4.T1.10.10.11.1\" style=\"font-size:80%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.10.10.12\"><span class=\"ltx_text\" id=\"S4.T1.10.10.12.1\" style=\"font-size:80%;\">12</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.11.11\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.2\">\n<span class=\"ltx_text\" id=\"S4.T1.11.11.2.1\" style=\"font-size:80%;\">Textual Inversion\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.11.11.2.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib1\" title=\"\">1</a><span class=\"ltx_text\" id=\"S4.T1.11.11.2.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.3\"><span class=\"ltx_text\" id=\"S4.T1.11.11.3.1\" style=\"font-size:80%;\">0.236</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.4\"><span class=\"ltx_text\" id=\"S4.T1.11.11.4.1\" style=\"font-size:80%;\">0.340</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.1\">\n<span class=\"ltx_text\" id=\"S4.T1.11.11.1.1\" style=\"font-size:80%;\">85.1</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.5\"><span class=\"ltx_text\" id=\"S4.T1.11.11.5.1\" style=\"font-size:80%;\">0.293</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.6\"><span class=\"ltx_text\" id=\"S4.T1.11.11.6.1\" style=\"font-size:80%;\">0.12</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.7\"><span class=\"ltx_text\" id=\"S4.T1.11.11.7.1\" style=\"font-size:80%;\">0.07</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.8\"><span class=\"ltx_text\" id=\"S4.T1.11.11.8.1\" style=\"font-size:80%;\">0.15</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.9\"><span class=\"ltx_text\" id=\"S4.T1.11.11.9.1\" style=\"font-size:80%;\">1536</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.10\"><span class=\"ltx_text\" id=\"S4.T1.11.11.10.1\" style=\"font-size:80%;\">0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.11.11.11\"><span class=\"ltx_text\" id=\"S4.T1.11.11.11.1\" style=\"font-size:80%;\">24</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.14.14\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.4\">\n<span class=\"ltx_text\" id=\"S4.T1.14.14.4.1\" style=\"font-size:80%;\">Prospect\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.14.14.4.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib2\" title=\"\">2</a><span class=\"ltx_text\" id=\"S4.T1.14.14.4.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.5\"><span class=\"ltx_text\" id=\"S4.T1.14.14.5.1\" style=\"font-size:80%;\">0.217</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.6\"><span class=\"ltx_text\" id=\"S4.T1.14.14.6.1\" style=\"font-size:80%;\">0.492</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.12.12.1\">\n<span class=\"ltx_text\" id=\"S4.T1.12.12.1.1\" style=\"font-size:80%;\">86.3</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.7\"><span class=\"ltx_text\" id=\"S4.T1.14.14.7.1\" style=\"font-size:80%;\">0.302</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.14.14.8\"><span class=\"ltx_text\" id=\"S4.T1.14.14.8.1\" style=\"font-size:80%;\">0.02</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.9\"><span class=\"ltx_text\" id=\"S4.T1.14.14.9.1\" style=\"font-size:80%;\">0.22</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.10\"><span class=\"ltx_text\" id=\"S4.T1.14.14.10.1\" style=\"font-size:80%;\">0.13</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.11\"><span class=\"ltx_text\" id=\"S4.T1.14.14.11.1\" style=\"font-size:80%;\">7680</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.3\">\n<span class=\"ltx_text\" id=\"S4.T1.14.14.3.1\" style=\"font-size:80%;\">3.1</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.14.14.12\"><span class=\"ltx_text\" id=\"S4.T1.14.14.12.1\" style=\"font-size:80%;\">18</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.18.18\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.5\">\n<span class=\"ltx_text\" id=\"S4.T1.18.18.5.1\" style=\"font-size:80%;\">Celeb Basis\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.18.18.5.2.1\" style=\"font-size:80%;\">[</span><a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib3\" title=\"\">3</a><span class=\"ltx_text\" id=\"S4.T1.18.18.5.3.2\" style=\"font-size:80%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.6\"><span class=\"ltx_text\" id=\"S4.T1.18.18.6.1\" style=\"font-size:80%;\">0.242</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.7\"><span class=\"ltx_text\" id=\"S4.T1.18.18.7.1\" style=\"font-size:80%;\">0.412</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.15.15.1\">\n<span class=\"ltx_text\" id=\"S4.T1.15.15.1.1\" style=\"font-size:80%;\">87.1</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.8\"><span class=\"ltx_text\" id=\"S4.T1.18.18.8.1\" style=\"font-size:80%;\">0.312</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.9\"><span class=\"ltx_text\" id=\"S4.T1.18.18.9.1\" style=\"font-size:80%;\">0.06</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.10\"><span class=\"ltx_text\" id=\"S4.T1.18.18.10.1\" style=\"font-size:80%;\">0.10</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.11\"><span class=\"ltx_text\" id=\"S4.T1.18.18.11.1\" style=\"font-size:80%;\">0.16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.12\"><span class=\"ltx_text\" id=\"S4.T1.18.18.12.1\" style=\"font-size:80%;\">1024</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.17.17.3\">\n<span class=\"ltx_text\" id=\"S4.T1.17.17.3.1\" style=\"font-size:80%;\">6.6</span>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.18.18.4\">\n<span class=\"ltx_text\" id=\"S4.T1.18.18.4.1\" style=\"font-size:80%;\">5</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.5\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T1.22.22.5.1\" style=\"font-size:80%;background-color:#ECECEC;\">Ours</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.6\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.6.1\" style=\"font-size:80%;background-color:#ECECEC;\">0.263</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.7\" style=\"background-color:#ECECEC;\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.7.1\" style=\"font-size:80%;background-color:#ECECEC;\">0.525</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.19.19.1\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T1.19.19.1.1\" style=\"font-size:80%;background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.19.19.1.1.1\" style=\"background-color:#ECECEC;\">88.8</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.8\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T1.22.22.8.1\" style=\"font-size:80%;background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.8.1.1\" style=\"background-color:#ECECEC;\">0.428</span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.9\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.9.1\" style=\"font-size:80%;background-color:#ECECEC;\">0.58</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.10\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.10.1\" style=\"font-size:80%;background-color:#ECECEC;\">0.38</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.11\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.22.22.11.1\" style=\"font-size:80%;background-color:#ECECEC;\">0.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.12\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T1.22.22.12.1\" style=\"font-size:80%;background-color:#ECECEC;\">7680</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.21.21.3\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T1.21.21.3.2\" style=\"font-size:80%;background-color:#ECECEC;\">3.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T1.22.22.4\" style=\"background-color:#ECECEC;\">\n<span class=\"ltx_text\" id=\"S4.T1.22.22.4.1\" style=\"font-size:80%;background-color:#ECECEC;\">5</span>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE I: Quantitative evaluation between different SOTA methods and ours."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Quantitative evaluation using previous metrics.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.6\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.6.7.1\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T2.6.7.1.1\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.6.7.1.2\">Prompt</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.6.7.1.3\">ID</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.6.7.1.4\">Detect</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T2.6.7.1.5\">ID (P)</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2\">DreamBooth\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib9\" title=\"\">9</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.3\">0.253</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.4\">0.261</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.1\">77.5\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.5\">0.241</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.2\">Custom Diffusion\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib8\" title=\"\">8</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.3\">0.227</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.4\">0.231</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.1\">83.6\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.2.2.5\">0.243</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2\">Textual Inversion\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib1\" title=\"\">1</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3\">0.198</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.4.1\">0.382</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.1.1\">87.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.5\">0.176</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.2\">Prospect\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib2\" title=\"\">2</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.3\">0.209</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.4\">0.372</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.1\">84.2\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.4.4.5\">0.193</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.2\">Celeb Basis\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"https://arxiv.org/html/2402.00631v2#bib.bib3\" 
title=\"\">3</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.3\">0.253</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.4\">0.299</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.1\">82.6\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.5.5.5\">0.184</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.6.6.2\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T2.6.6.2.1\" style=\"background-color:#ECECEC;\">Ours</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.6.6.3\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.3.1\" style=\"background-color:#ECECEC;\">0.265</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.6.6.4\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T2.6.6.4.1\" style=\"background-color:#ECECEC;\">0.366</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.6.6.1\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text\" id=\"S4.T2.6.6.1.1\" style=\"background-color:#ECECEC;\">85.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.6.6.5\" style=\"background-color:#ECECEC;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.6.5.1\" style=\"background-color:#ECECEC;\">0.258</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE II: Quantitative evaluation using previous metrics."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE III: </span>Quantitative evaluation of ablation study.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.9\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T3.4.4.5\" style=\"padding-left:11.0pt;padding-right:11.0pt;\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.1.1.1\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">Prompt\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.2.2.2\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">ID\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.3.3.3\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">Detect\n</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S4.T3.4.4.4\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">ID (P)\n</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.2\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">5 K-V &amp; #1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.3\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.201</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.4\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.382</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.1\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">85.9\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.5.5.5\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.190</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.2\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">5 K-V &amp; #2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.3\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.246</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.4\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.324</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.1\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">86.4\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.6.6.5\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.248</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.7.7\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.2\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">1 K-V &amp; #3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.3\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.205</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.4\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.323</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.1\" style=\"padding-left:11.0pt;padding-right:11.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.7.7.1.1\">88.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.7.7.5\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.203</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.8.8\">\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.2\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">10 K-V &amp; #3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.3\" style=\"padding-left:11.0pt;padding-right:11.0pt;\"><span 
class=\"ltx_text ltx_font_bold\" id=\"S4.T3.8.8.3.1\">0.265</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.4\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.337</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.1\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">85.9\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.8.8.5\" style=\"padding-left:11.0pt;padding-right:11.0pt;\">0.251</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.9.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.9.9.2\" style=\"background-color:#ECECEC;padding-left:11.0pt;padding-right:11.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.9.9.2.1\" style=\"background-color:#ECECEC;\">5 K-V &amp; #3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.9.9.3\" style=\"background-color:#ECECEC;padding-left:11.0pt;padding-right:11.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.9.9.3.1\" style=\"background-color:#ECECEC;\">0.265</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.9.9.4\" style=\"background-color:#ECECEC;padding-left:11.0pt;padding-right:11.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.9.9.4.1\" style=\"background-color:#ECECEC;\">0.366</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.9.9.1\" style=\"background-color:#ECECEC;padding-left:11.0pt;padding-right:11.0pt;\"><span class=\"ltx_text\" id=\"S4.T3.9.9.1.1\" style=\"background-color:#ECECEC;\">85.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.9.9.5\" style=\"background-color:#ECECEC;padding-left:11.0pt;padding-right:11.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.9.9.5.1\" style=\"background-color:#ECECEC;\">0.258</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "TABLE III: Quantitative evaluation of ablation study."
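For readers interpreting the prompt and identity columns of Tables I-III: scores of this kind are commonly computed as cosine similarities in a CLIP embedding space (prompt score) and in a face-recognition embedding space (identity score). The sketch below illustrates that convention only; it is an assumption, since the paper's exact evaluation code is not reproduced here.

```python
import torch
import torch.nn.functional as F

def prompt_score(image_feat: torch.Tensor, text_feat: torch.Tensor) -> float:
    # CLIP-style prompt score: cosine similarity between the image
    # embedding and the prompt embedding (both pre-extracted, shape (D,)).
    return F.cosine_similarity(image_feat, text_feat, dim=0).item()

def id_score(gen_face_feat: torch.Tensor, ref_face_feat: torch.Tensor) -> float:
    # Identity score: cosine similarity between face-recognition
    # embeddings of the generated face and the reference face.
    return F.cosine_similarity(gen_face_feat, ref_face_feat, dim=0).item()
```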
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2402.00631v2_figure_1.png",
+ "caption": "Figure 1: Previous methods for inserting new identities (IDs) into pre-trained Text-to-Image diffusion models for personalized generation have two problems: (1) Attention Overfit: As shown in the activation maps of Textual Inversion [1] and ProSpect [2], their \u201cV*\u201d attention nearly takes over the whole image, which means the learned embeddings try to encode both the human faces and ID-unrelated information in the reference images, such as the face region layout and background. This problem extremely limits their generative ability and disrupts their interaction with other existing concepts such as \u201ccup\u201d, which results in the failure of the given prompt (i.e., they fail to generate image content aligned with the given prompt). (2) Limited Semantic-Fidelity: Despite alleviating overfit, Celeb Basis [3] introduces excessive face prior, limiting the semantic-fidelity of the learned ID embedding (e.g., the \u201ccup\u201d attention still drifts to the \u201cV*\u201d face region, and this limitation hinders the control of facial attributes such as \u201ceyes closed\u201d). Therefore, we propose Face-Wise Region Fit (Sec. III-B) and Semantic-Fidelity Token Optimization (Sec. III-C) to address problems (1) and (2), respectively. More results: https://com-vis.github.io/SeFi-IDE/.",
+ "url": "http://arxiv.org/html/2402.00631v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2402.00631v2_figure_2.png",
+ "caption": "Figure 2: The overview of our framework. We first propose a novel Face-Wise Attention Loss (Sec. III-B) to alleviate the attention overfit problem and make the ID embedding focus on the face region to improve ID accuracy and interactive generative ability. Then, we optimize the target ID embedding as five per-stage tokens pairs with disentangled features to expend textural conditioning space with semantic-fidelity control ability (Sec. III-C).",
+ "url": "http://arxiv.org/html/2402.00631v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2402.00631v2_figure_3.png",
+ "caption": "Figure 3: The details of text condition and K-V feature implementation differences.",
+ "url": "http://arxiv.org/html/2402.00631v2/x3.png"
+ },
+ "4": {
+ "figure_path": "2402.00631v2_figure_4.png",
+ "caption": "Figure 4: The different effects of \ud835\udc77\ud835\udc8a\ud835\udc72superscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc72\\bm{P_{i}^{K}}bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_K end_POSTSUPERSCRIPT and \ud835\udc77\ud835\udc8a\ud835\udc7dsuperscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc7d\\bm{P_{i}^{V}}bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_V end_POSTSUPERSCRIPT tokens. (1) Progressively Adding: We add different {(\ud835\udc77\ud835\udc8a\ud835\udc72,\ud835\udc77\ud835\udc8a\ud835\udc7d)}1\u2264i\u22645subscriptsuperscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc72superscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc7d1\ud835\udc565{\\{(\\bm{P_{i}^{K}},\\bm{P_{i}^{V})}\\}}_{1\\leq i\\leq 5}{ ( bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_K end_POSTSUPERSCRIPT , bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_V end_POSTSUPERSCRIPT bold_) } start_POSTSUBSCRIPT 1 \u2264 italic_i \u2264 5 end_POSTSUBSCRIPT tokens to the conditioning information in ten steps. We found that the initial tokens effect more the layout of generation content (e.g., face region location, and poses), while the latter tokens effect more the ID-related details. (2) Progressively Substituting: We then substitute different \ud835\udc77\ud835\udc8a\ud835\udc72superscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc72\\bm{P_{i}^{K}}bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_K end_POSTSUPERSCRIPT and \ud835\udc77\ud835\udc8a\ud835\udc7dsuperscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc7d\\bm{P_{i}^{V}}bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_V end_POSTSUPERSCRIPT tokens of {(\ud835\udc77\ud835\udc8a\ud835\udc72,\ud835\udc77\ud835\udc8a\ud835\udc7d)}1\u2264i\u22645subscriptsuperscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc72superscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc7d1\ud835\udc565{\\{(\\bm{P_{i}^{K}},\\bm{P_{i}^{V})}\\}}_{1\\leq i\\leq 5}{ ( bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_K end_POSTSUPERSCRIPT , bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_V end_POSTSUPERSCRIPT bold_) } start_POSTSUBSCRIPT 1 \u2264 italic_i \u2264 5 end_POSTSUBSCRIPT. We found that \ud835\udc77\ud835\udc8a\ud835\udc7dsuperscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc7d\\bm{P_{i}^{V}}bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_V end_POSTSUPERSCRIPT contribute to the vast majority of ID-related conditioning information, and the \ud835\udc77\ud835\udc8a\ud835\udc72superscriptsubscript\ud835\udc77\ud835\udc8a\ud835\udc72\\bm{P_{i}^{K}}bold_italic_P start_POSTSUBSCRIPT bold_italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT bold_italic_K end_POSTSUPERSCRIPT contribute more to textural details, such as environment lighting.",
+ "url": "http://arxiv.org/html/2402.00631v2/x4.png"
+ },
+ "5": {
+ "figure_path": "2402.00631v2_figure_5.png",
+ "caption": "Figure 5: The details of Self-Attention module. For simplicity, we disregard the remaining embeddings in \ud835\udc80\ud835\udc95subscript\ud835\udc80\ud835\udc95\\bm{Y_{t}}bold_italic_Y start_POSTSUBSCRIPT bold_italic_t end_POSTSUBSCRIPT and focus on the ID embedding \ud835\udc77\ud835\udc77\\bm{P}bold_italic_P associated with the pseudo-word \u201cV*\u201d.",
+ "url": "http://arxiv.org/html/2402.00631v2/x5.png"
+ },
+ "6": {
+ "figure_path": "2402.00631v2_figure_6.png",
+ "caption": "Figure 6: Face photo generation of ours and comparison methods. Due to attention overfitting, Textural Inversion [1] and Prospect [2] struggle to generate images that accurately reflect the semantics of \u201cwhite hair\u201c. Custom Diffusion [8] and DreamBooth [9] tend to overly mimic the training image and fail to maintain identity when combined with other text prompts. On the other hand, methods such as Celeb Basis [3] and FastComposer [58] exhibit poor semantic fidelity and limited diversity in their generated outputs.",
+ "url": "http://arxiv.org/html/2402.00631v2/x6.png"
+ },
+ "7": {
+ "figure_path": "2402.00631v2_figure_7.png",
+ "caption": "Figure 7: Qualitative comparisons with different SOTA methods using more complex prompts. We conduct experiments from three levels, including the action manipulation, facial attribute editing, and their mixture. Our method shows superior embedding accuracy and interactive generation ability with existing concepts.",
+ "url": "http://arxiv.org/html/2402.00631v2/x7.png"
+ },
+ "8": {
+ "figure_path": "2402.00631v2_figure_8.png",
+ "caption": "Figure 8: The manipulation diversity of our method, which is shown from various identities, styles, facial attributes, and actions.",
+ "url": "http://arxiv.org/html/2402.00631v2/x8.png"
+ },
+ "9": {
+ "figure_path": "2402.00631v2_figure_9.png",
+ "caption": "Figure 9: Multi-ID action manipulation comparisons of Celeb Basis [3], FastComposer [58], and ours. FastComposer only focuses on faces and fails to interact with other concepts, such as \u201cshake hands\u201d, \u201csofa\u201d, and \u201cpicnic\u201d. Although Celeb Basis can generate text-aligned images, it shows lower identity preservation.",
+ "url": "http://arxiv.org/html/2402.00631v2/x9.png"
+ },
+ "10": {
+ "figure_path": "2402.00631v2_figure_10.png",
+ "caption": "Figure 10: Multi-ID scene manipulation comparisons of Celeb Basis [3], FastComposer [58], and ours. As for FastComposer, the faces of target IDs take over most of the generated picture and some concepts are lost, like \u201cgarage\u201d. As for Celeb Basis, its learned IDs are less precise and may generate artifacts (i.e., a head of a woman which should not exist in the photo).",
+ "url": "http://arxiv.org/html/2402.00631v2/x10.png"
+ },
+ "11": {
+ "figure_path": "2402.00631v2_figure_11.png",
+ "caption": "Figure 11: Our multi-ID generation results tested in more complex scenarios, showcasing the diversity of generated images and the ability to interact with complex concepts.",
+ "url": "http://arxiv.org/html/2402.00631v2/x11.png"
+ },
+ "12": {
+ "figure_path": "2402.00631v2_figure_12.png",
+ "caption": "Figure 12: Ablation study of using per-stage tokens with previous methods. The per-stage tokens strategy enables our method to manipulate the facial attributes of target face, and also works for previous methods.",
+ "url": "http://arxiv.org/html/2402.00631v2/x12.png"
+ },
+ "13": {
+ "figure_path": "2402.00631v2_figure_13.png",
+ "caption": "Figure 13: Ablation study of different options for attention loss. Option #\u20621#1\\#1# 1 inferences other concepts, and option #\u20622#2\\#2# 2 still disrupts regions like the corners of the activation map for V*superscript\ud835\udc49V^{*}italic_V start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT that beyond its scope.",
+ "url": "http://arxiv.org/html/2402.00631v2/x13.png"
+ },
+ "14": {
+ "figure_path": "2402.00631v2_figure_14.png",
+ "caption": "Figure 14: Ablation study of utilizing different number of K-V pairs. Using only 1 K-V pair can not sufficient maintain the ID features. And adopting too many K-V pairs would not bring significant improvements to generation quality. Thus, we finally select 5 K-V pairs.",
+ "url": "http://arxiv.org/html/2402.00631v2/x14.png"
+ },
+ "15": {
+ "figure_path": "2402.00631v2_figure_15.png",
+ "caption": "Figure 15: Using our ID embedding method for non-face concepts. In each block of part (b), the target object is displayed on the left, while on the right, from top to bottom, are the images labeled as \u201ca photo of V*\u201d, \u201cstylization of V*\u201d, and \u201cV* under different scenes\u201d.",
+ "url": "http://arxiv.org/html/2402.00631v2/x15.png"
+ },
+ "16": {
+ "figure_path": "2402.00631v2_figure_16.png",
+ "caption": "Figure 16: Using our ID embedding method for Stable Diffusion XL. InstantID [57] tends to generate a face photo of target ID. PhotoMaker [59] and IP-Adapter-FaceID [68] can not achieve fine-grained text guided facial attribute controlling. Our method can achieve better interactive generation with the other concepts (e.g., chair) than the other methods.",
+ "url": "http://arxiv.org/html/2402.00631v2/x16.png"
+ }
+ },
+ "validation": true,
+ "references": [],
+ "url": "http://arxiv.org/html/2402.00631v2"
+ }
20240322/2402.14704v3.json ADDED
The diff for this file is too large to render. See raw diff