yilunzhao committed
Commit b964a13 · verified · 1 Parent(s): 2bf039d

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. 20240123/2110.07869v3.json +0 -0
  2. 20240123/2110.11334v3.json +0 -0
  3. 20240123/2112.11628v4.json +0 -0
  4. 20240123/2202.03087v3.json +287 -0
  5. 20240123/2202.12312v2.json +485 -0
  6. 20240123/2204.13209v2.json +0 -0
  7. 20240123/2205.05173v5.json +0 -0
  8. 20240123/2205.05587v3.json +421 -0
  9. 20240123/2205.13743v5.json +0 -0
  10. 20240123/2206.02059v3.json +0 -0
  11. 20240123/2206.14359v5.json +247 -0
  12. 20240123/2209.07805v4.json +0 -0
  13. 20240123/2209.09930v2.json +349 -0
  14. 20240123/2210.01407v6.json +0 -0
  15. 20240123/2210.02651v2.json +0 -0
  16. 20240123/2211.01758v2.json +514 -0
  17. 20240123/2211.04625v2.json +0 -0
  18. 20240123/2211.06598v3.json +327 -0
  19. 20240123/2211.08262v4.json +11 -0
  20. 20240123/2212.13069v3.json +819 -0
  21. 20240123/2301.02424v2.json +130 -0
  22. 20240123/2301.04378v3.json +135 -0
  23. 20240123/2301.09217v5.json +273 -0
  24. 20240123/2301.11915v2.json +0 -0
  25. 20240123/2303.07700v3.json +0 -0
  26. 20240123/2303.07846v2.json +0 -0
  27. 20240123/2303.10728v2.json +222 -0
  28. 20240123/2303.13716v2.json +0 -0
  29. 20240123/2304.13014v4.json +0 -0
  30. 20240123/2305.00557v3.json +0 -0
  31. 20240123/2305.02317v3.json +435 -0
  32. 20240123/2305.07730v2.json +0 -0
  33. 20240123/2305.11321v2.json +0 -0
  34. 20240123/2305.13208v2.json +0 -0
  35. 20240123/2305.13998v5.json +0 -0
  36. 20240123/2305.14800v6.json +0 -0
  37. 20240123/2305.18417v3.json +0 -0
  38. 20240123/2305.19004v3.json +102 -0
  39. 20240123/2306.02869v3.json +0 -0
  40. 20240123/2306.05739v4.json +0 -0
  41. 20240123/2306.08877v3.json +491 -0
  42. 20240123/2306.14451v2.json +0 -0
  43. 20240123/2306.14624v2.json +0 -0
  44. 20240123/2306.17396v2.json +609 -0
  45. 20240123/2307.02156v2.json +0 -0
  46. 20240123/2307.02764v2.json +0 -0
  47. 20240123/2308.10487v2.json +658 -0
  48. 20240123/2308.12890v3.json +562 -0
  49. 20240123/2308.14190v2.json +0 -0
  50. 20240123/2308.16692v2.json +542 -0
20240123/2110.07869v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2110.11334v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2112.11628v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2202.03087v3.json ADDED
@@ -0,0 +1,287 @@
+ {
+ "title": "Unsupervised Long-Term Person Re-Identification with Clothes Change",
+ "abstract": "Most person re-identification methods assume that each person\u2019s clothing is stationary in space and time.\nSince the average person often changes clothes even within a single day, this assumption holds mainly in short-term re-identification scenarios.\nSome recent studies have investigated clothes-changing re-identification based on supervised learning to ease this limitation.\nIn this paper, we further remove the necessity for personal identity labels, which makes this new problem dramatically more challenging than conventional unsupervised short-term re-id.\nTo surmount these obstacles, we introduce a novel approach, the Curriculum Person Clustering (CPC) method, which dynamically adjusts the clustering criterion according to the clustering confidence during the clustering process.\nExperimental results on DeepChange show that CPC surpasses other unsupervised re-id methods and even comes close to supervised methods.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "Person re-identification aims to associate the identity of an individual with images acquired from various camera perspectives.\nThe majority of re-id methods in use today\n[1 ###reference_1###, 2 ###reference_2###]\nassume general scenarios without clothing changes.\nThis is a limitation since most individuals change their attire on a daily basis.\nConsequently, the efficacy of these methods remains confined to the short-term re-identification task.\nThis constraint has sparked a burgeoning research interest in long-term person re-id, particularly concerning variations in clothes [3 ###reference_3###, 4 ###reference_4###]. However, the collection and annotation of personal identity labels is extremely difficult under conditions of unconstrained clothing change.\nAs shown in Figure 1 ###reference_###, due to the diversity of pedestrian appearance, the largest and most realistic clothes-change dataset, DeepChange [5 ###reference_5###], was created at great expense.\nRecognizing the importance of long-term person re-identification and the substantial cost of dataset annotation, our study centers on the unsupervised long-term person re-identification problem, which obviates the need for arduous personal identity labeling.\nUnsupervised long-term re-id is more challenging because different people may have similar appearances, while the same person wearing different clothes can look very different.\nConsequently, current methodologies [6 ###reference_6###, 7 ###reference_7###] relying on pseudo labels confront formidable difficulties, ultimately converging to suboptimal solutions.\n###figure_1### To address these obstacles,\nwe introduce a novel Curriculum Person Clustering (CPC) method.\nTo reduce the accumulation of labeling errors throughout the training process, we introduce a pseudo-label generation strategy, called curriculum learning clustering.\nSpecifically, to regulate the labeling process, we formulate a confidence metric based on the correlation between samples within each cluster.\nDuring training, only a fraction of the samples are involved in each iteration, selected by this confidence index, meaning that these samples are currently confident enough to provide correct information for training the model.\nThe confidence index is updated as training progresses. In this way, the damage caused by incorrectly labeled samples is greatly reduced and the accuracy of the samples used for model training is improved.\nThe contributions of our CPC are:\n(1) To solve the unsupervised long-term person re-id challenge,\nwe propose Curriculum Person Clustering (CPC), which limits the propagation of labeling errors during training.\n(2) Extensive experiments demonstrate that CPC surpasses existing unsupervised methods by a substantial margin, rivaling the performance of fully supervised models on DeepChange, the largest long-term re-identification benchmark available to date.\n###figure_2###"
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": ""
+ },
+ {
+ "section_id": "2.1",
+ "parent_section_id": "2",
+ "section_name": "Long-Term Person Re-id",
+ "text": "Some person re-id methods use fully supervised training to tackle the effects of changing clothes [4 ###reference_4###]. These articles essentially try to find supervisory information beyond the general identity labels, such as the person silhouette, to prompt the model to learn clothes-independent features.\nTo make use of the potential information in body shape,\nHong [8 ###reference_8###] extracts pose-specific features and estimates the person\u2019s silhouette to effectively utilize fine-grained features.\nThese clothes-changing re-id methods provide inspiring ideas under supervised training but inevitably rely on auxiliary information.\nHence, in this paper, our primary emphasis lies in surmounting the hurdles posed by clothing change in an unsupervised setting."
+ },
+ {
+ "section_id": "2.2",
+ "parent_section_id": "2",
+ "section_name": "Curriculum Learning",
+ "text": "Curriculum Learning (CL) [9 ###reference_9###] is inspired by the human learning curriculum: it allows the model to gain a better understanding of the samples by learning from easy to hard.\nWith CL, the model can gain better generalization ability.\nThus, the curriculum learning technique has been widely adopted in the training of deep networks [10 ###reference_10###].\n\nWe propose a CL strategy with unsupervised training for the long-term re-id task."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Methodology",
+ "text": "We present a novel method, Curriculum Person Clustering (CPC), to address the challenge of unsupervised long-term person re-identification.\nAs illustrated in Figure 2 ###reference_###, our CPC framework comprises two modules: (1) the feature representation learning module, and (2) the curriculum-learning-based person clustering module.\nWithin the representation learning module, the outcomes of person clustering through curriculum learning serve as the supervision for network training.\nAs training continues, the encoder\u2019s image representation grows in power. In the curriculum person clustering module, we propose an adaptive CL training strategy to automatically optimize the clustering process, so that the stage-specific clustering selects samples based on dynamic criteria.\nIn CPC, we use ResNet50 [11 ###reference_11###] as the encoder network.\nWe use the penultimate network layer of the model to extract the person image features.\nOnce we have obtained the clustering results, we use only the clustered samples in the subsequent stage of representation learning.\nBy leveraging this approach, the errors stemming from pseudo labels can be kept to a minimum."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Clustering-based Unsupervised Re-id",
+ "text": "In the absence of supervision information, our focus lies upon training a model using an unlabeled dataset $X=\\{x_i\\}_{i=1}^{N}$.\nIn the beginning, we use the network to extract the features $\\{f_i\\}_{i=1}^{N}$, where $f_i=\\phi(x_i;\\theta)\\in\\mathbb{R}^{d}$, where $\\phi$ is the feature encoder with parameters $\\theta$ and $d$ is the feature dimension.\nThen, we cluster $\\{f_i\\}$ by DBSCAN [12 ###reference_12###] to generate pseudo labels.\nAfter clustering, there will be some single samples that do not fall in any cluster.\nAssuming that $N'$ of the samples are clustered together,\nwe denote their pseudo labels as $Y=\\{y_i\\}_{i=1}^{N'}$,\nand the unclustered samples will be excluded from this training iteration.\nAccording to the pseudo labels $Y$, we construct the cluster center bank $C=\\{c_k\\}_{k=1}^{K}$, where $K$ is the cluster number and $c_k$ is defined as:\n$c_k=\\frac{1}{|\\mathcal{X}_k|}\\sum_{f_i\\in\\mathcal{X}_k}f_i$,\nwhere $|\\mathcal{X}_k|$ is the size of cluster $k$, and $\\mathcal{X}_k$ is the set of sample features in cluster $k$. In the training process, we update the encoder parameters $\\theta$ through the cross-entropy loss function:\n$\\mathcal{L}=-\\log\\frac{\\exp(f_i\\cdot c_{y_i}/\\tau)}{\\sum_{k=1}^{K}\\exp(f_i\\cdot c_k/\\tau)}$,\nwhere $y_i$ is the pseudo-label index of image $x_i$, and $\\tau$ is the temperature parameter. Subsequently, the cluster center bank undergoes an update during the $t$-th iteration in the following manner:\n$c_k^{t}=\\mu c_k^{t-1}+(1-\\mu)f_i$,\nwhere $\\mu$ is the update parameter that controls the memory updating.\nThen, we perform the clustering algorithm on all samples to generate new pseudo labels for the next training iteration:\n$Y=\\mathrm{DBSCAN}(\\{f_i\\};\\epsilon)$,\nwhere $\\epsilon$ denotes the image clustering algorithm parameter, and the pseudo label $y_i\\in\\{1,\\dots,K\\}$ is generated by the clustering algorithm.\nAgain, the clustering method will produce a proportion of unclustered images; these samples are not included in the representation learning of the current iteration."
+ },
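As a concrete reading of the Sec. 3.1 pipeline above, a minimal PyTorch-style sketch of one training round might look as follows; the function names, the DBSCAN settings, and the direction of the momentum update are assumptions, not the authors' released code.

```python
# Hedged sketch of the Sec. 3.1 baseline; names and defaults are assumptions.
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN


def make_pseudo_labels(features: torch.Tensor, eps: float, min_samples: int = 4):
    """DBSCAN over L2-normalized features; label -1 marks unclustered
    samples, which are excluded from the current training iteration."""
    feats = F.normalize(features, dim=1).cpu().numpy()
    return torch.as_tensor(
        DBSCAN(eps=eps, min_samples=min_samples, metric="cosine").fit_predict(feats)
    )


def build_center_bank(features: torch.Tensor, labels: torch.Tensor):
    """Cluster-center bank: mean feature of each cluster (Eq. (1))."""
    centers = [features[labels == k].mean(0) for k in range(int(labels.max()) + 1)]
    return F.normalize(torch.stack(centers), dim=1)


def cluster_contrast_loss(f, y, centers, tau=0.05):
    """Cross-entropy over similarities to all K cluster centers (Eq. (2))."""
    return F.cross_entropy(F.normalize(f, dim=1) @ centers.t() / tau, y)


@torch.no_grad()
def update_center_bank(centers, f, y, mu=0.2):
    """Momentum update of the bank at iteration t (Eq. (3)); whether mu
    weights the old center or the new feature is an assumption."""
    for feat, k in zip(F.normalize(f, dim=1), y):
        centers[k] = F.normalize(mu * centers[k] + (1 - mu) * feat, dim=0)
```

Unclustered samples (label -1) would be masked out before computing the loss, matching the paper's policy of training only on confidently clustered images in each iteration.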
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Curriculum Person Clustering",
+ "text": "The inherent limitation of the baseline approach lies in its inflexibility to accommodate the diverse range of clothing variations inherent to individual identities: the fixed clustering criterion fails to capture these nuanced differences.\nTo surmount this constraint, we present a dynamic clustering strategy that modifies the clustering criterion based on each cluster\u2019s internal properties.\nWe use a cluster density index to quantify the clustering confidence, serving as the measure of curriculum difficulty, called the Relaxing Index (RI):\n$RI_k=\\frac{1}{|\\mathcal{X}_k|}\\sum_{f_i\\in\\mathcal{X}_k}s(f_i,c_k)$,\nwhere the similarity score $s(f_i,c_k)$ measures the similarity between sample $f_i$ and its cluster center $c_k$,\nwhich is defined as:\n$s(f_i,c_k)=\\frac{\\sum_{j=1}^{d}(f_i^{j}-\\bar{f}_i)(c_k^{j}-\\bar{c}_k)}{\\sqrt{\\sum_{j=1}^{d}(f_i^{j}-\\bar{f}_i)^{2}}\\sqrt{\\sum_{j=1}^{d}(c_k^{j}-\\bar{c}_k)^{2}}}$,\nwhere $\\bar{f}_i$ denotes the dimension mean value, and $f_i^{j}$ is the $j$-th dimension of feature $f_i$.\n$\\bar{f}_i$ and $\\bar{c}_k$ are respectively defined as:\n$\\bar{f}_i=\\frac{1}{d}\\sum_{j=1}^{d}f_i^{j}$ and $\\bar{c}_k=\\frac{1}{d}\\sum_{j=1}^{d}c_k^{j}$.\nAccording to Eq. (6 ###reference_###), a higher cluster density will lead to a larger $RI_k$, and a significantly pronounced value of $RI_k$ indicates that the current cluster consists of only a solitary outfit or a small number of clothes. This also implies that the present cluster possesses a promising potential for expansion, enabling it to accommodate and incorporate a greater number of samples.\nWe use $RI$ as an indicator to schedule model training; the training scheduler in curriculum learning is defined as:\n$g(t)=\\mathbb{1}\\left[\\frac{1}{K}\\sum_{k=1}^{K}RI_k>\\lambda\\right]$,\nwhere $\\lambda$ is a threshold value.\nThen, we introduce an update scheme for the parameters of person image clustering:\n$\\epsilon^{t+1}=\\epsilon^{t}+g(t)\\,\\delta$,\nwhere $\\delta$ is a hyper-parameter.\nBased on Eq. (10 ###reference_###), the model can gradually expand the clustering scope $\\epsilon$, progressing from simpler instances such as identical clothes to more challenging scenarios involving diverse clothing items, by gradually perceiving the latent clothes-independent patterns. Please see our supplementary materials for a visualization evaluation.\nCurriculum Person Clustering (CPC) is summarized in Algorithm 1 ###reference_###."
+ },
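The Relaxing Index and its scheduler could be computed along the following lines; since the equations above are reconstructed, the Pearson-style similarity and the single-step increment of the DBSCAN radius are assumptions to be checked against the original paper.

```python
# Hedged sketch of the Sec. 3.2 curriculum scheduler; exact forms assumed.
import torch


def similarity(f: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Pearson-style score between a feature and its cluster center,
    using per-dimension mean values as in the RI definition above."""
    fc, cc = f - f.mean(), c - c.mean()
    return (fc * cc).sum() / (fc.norm() * cc.norm() + 1e-12)


def relaxing_index(cluster_feats: torch.Tensor) -> float:
    """RI of one cluster: mean sample-to-center similarity. Dense,
    single-outfit clusters score high and leave room for relaxation."""
    center = cluster_feats.mean(0)
    return torch.stack([similarity(f, center) for f in cluster_feats]).mean().item()


def schedule_eps(eps: float, clusters, threshold: float = 0.8, delta: float = 0.01):
    """Enlarge the DBSCAN radius when the mean RI exceeds the threshold,
    so the next round can merge harder positives (same person, new clothes)."""
    mean_ri = sum(relaxing_index(c) for c in clusters) / max(len(clusters), 1)
    return eps + delta if mean_ri > threshold else eps
```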
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Experiment",
+ "text": ""
+ },
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Experimental Setting",
+ "text": "Datasets\n\nWithin this section, we assess the performance of CPC on DeepChange [5 ###reference_5###], which currently stands as the largest long-term person re-identification dataset.\nIt contains person images of a large number of identities from multiple camera views, collected over many months.\nProtocols and metrics\n\nTo evaluate the performance of the model, we employ two commonly used retrieval-accuracy metrics: CMC and mAP.\nNevertheless, in contrast to short-term re-id, long-term re-id requires a more intricate consideration: the true matches for a given probe image should originate from the same camera but be captured at distinct time points, featuring individuals wearing dissimilar clothes.\nCompetitors\n\nCertain approaches have shown promising results on unsupervised short-term re-id, often using clustering-based models.\nTo the best of our knowledge, no existing methods have been specifically designed for the unsupervised scenario on DeepChange.\nIn particular, we select two commonly used short-term methods as our main competitors: self-paced contrastive learning (SpCL) [13 ###reference_13###] and cluster contrast (CC) [14 ###reference_14###].\nFurthermore, we compare our approach with several supervised baseline methods, including MobileNet [15 ###reference_15###], OSNet [16 ###reference_16###], DenseNet [17 ###reference_17###], ReIDCaps [18 ###reference_18###], DeiT [19 ###reference_19###], BNNeck Re-ID [20 ###reference_20###] and Vision Transformer [21 ###reference_21###].\nImplementation details\n\nAll experiments were carried out with the PyTorch framework on Nvidia 1080Ti GPU(s).\nFor CPC, we employ ResNet50 as the backbone, initialized with ImageNet pre-trained weights.\nTraining runs for 50 epochs in total. The learning rate is initially set to 0.00035 and multiplied by a factor of 0.1 every 20 epochs. The batch size is set to 128.\nThe temperature parameter $\\tau$ is set to 0.05, the update factor $\\mu$ to 0.2, the threshold $\\lambda$ to 0.8, and $\\delta$ to 0.01."
+ },
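For reference, the stated hyper-parameters map onto a schedule like the sketch below; the optimizer type is an assumption, as the paper specifies only the learning rate, its decay, the epoch count, and the batch size.

```python
# Illustrative schedule for the reported hyper-parameters; optimizer assumed.
import torch

encoder = torch.nn.Linear(2048, 2048)  # stand-in for the ResNet50 backbone
optimizer = torch.optim.Adam(encoder.parameters(), lr=3.5e-4)
# lr is multiplied by 0.1 every 20 epochs over 50 epochs, batch size 128
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
TAU, MU, LAMBDA, DELTA = 0.05, 0.2, 0.8, 0.01  # tau, mu, threshold, eps step
```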
+ {
+ "section_id": "4.2",
+ "parent_section_id": "4",
+ "section_name": "Comparison",
+ "text": "Comparison with unsupervised methods\nWe retested SpCL [13 ###reference_13###] and CC [14 ###reference_14###] on DeepChange and compared them with our method, as shown in Table 1 ###reference_###.\nOur approach demonstrates strong performance, achieving 14.6% mAP and 45.9% rank-1 on the DeepChange dataset.\nBoth SpCL and CC rely on training the model with high-quality pseudo labels, an efficient strategy for unsupervised methods in the absence of clothes changes [23 ###reference_23###]. However,\nthis also means that both methods heavily depend on color features. Our method effectively improves this situation without using any auxiliary information.\nIn particular, CPC retains clear advantages over SpCL even when the latter uses a stronger ViT encoder. This further proves that our method is effective in solving the clothes-change challenge.\nComparison with supervised methods\nTo further highlight the strengths of our approach in clothes-change scenarios, we conducted a comprehensive comparison with numerous supervised competitors.\nAs shown in Table 2 ###reference_###, even the supervised baselines struggle to address long-term re-identification, with low retrieval accuracies.\nThis shows that clothes changing is very challenging even for supervised training. Nevertheless, our unsupervised model outperforms almost all of the above supervised methods and leaves only a minimal gap to the strongest baseline.\nAblative studies\nTo further demonstrate the superiority of CPC, we conducted a series of ablation studies, with results shown in Table 3 ###reference_###.\nModel (#17), which excludes CPC, achieves only 12.4% mAP and 41.8% rank-1. These results fall short of the performance demonstrated by CPC in Model (#18).\nThis gap demonstrates the effectiveness of CPC for the clothes-change challenge."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusion",
+ "text": "We address the formidable challenge of unsupervised long-term person re-id in the presence of diverse clothing patterns: different individuals may possess similar attire, while the same person can exhibit a wide range of outfits that look clearly distinct. To surmount this intricate obstacle, we propose a novel method called Curriculum Person Clustering (CPC). This method dynamically adjusts the unsupervised clustering criterion based on the density of the clusters, which can effectively merge images of individuals undergoing clothing changes into the same cohesive clusters. Experiments on the most recent and largest long-term person re-id dataset demonstrate the significant superiority of CPC, which is even comparable to supervised re-id models.\nAcknowledgment. This paper is supported by the National Natural Science Foundation of China under Grant 61876022."
+ }
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.11.1.1\">Table 1</span>: </span>Comparison with the state-of-the-art unsupervised re-identification models on the DeepChange dataset.\n denotes no fine tuning on DeepChange.\n\u201cClustering Base\u201d means the use orignial clustering result.\nThe best results are indicated in <span class=\"ltx_text\" id=\"S4.T1.12.2\" style=\"color:#FF0000;\">red</span>.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.4\" style=\"width:433.6pt;height:261.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(82.5pt,-49.8pt) scale(1.61438607249084,1.61438607249084) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.4.2\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.3.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.4.2.3.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1.1.1\" style=\"font-size:90%;\">Model</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.3.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1.2.1\" style=\"font-size:90%;\">Rank-1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.3.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1.3.1\" style=\"font-size:90%;\">Rank-5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.3.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1.4.1\" style=\"font-size:90%;\">mAP</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.4.2.3.1.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.3.1.5.1\" style=\"font-size:90%;\">Backbone</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.3.1.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.3.1.1.1.1\" style=\"font-size:90%;\">#1 ResNet50\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">11</a><span class=\"ltx_text\" id=\"S4.T1.3.1.1.1.3.2\" style=\"font-size:90%;\">]</span></cite><span class=\"ltx_text\" id=\"S4.T1.3.1.1.1.4\" style=\"font-size:90%;\"> </span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.1.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1.2.1\" style=\"font-size:90%;\">15.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.1.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1.3.1\" style=\"font-size:90%;\">27.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.1.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1.4.1\" style=\"font-size:90%;\">02.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.3.1.1.5\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.3.1.1.5.1\" style=\"font-size:90%;\">ResNet50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.4.2.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.4.2.2.1.1\" style=\"font-size:90%;\">#2 ViT\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib21\" title=\"\">21</a><span class=\"ltx_text\" id=\"S4.T1.4.2.2.1.3.2\" style=\"font-size:90%;\">]</span></cite><span class=\"ltx_text\" id=\"S4.T1.4.2.2.1.4\" style=\"font-size:90%;\"> </span>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.2.1\" style=\"font-size:90%;\">11.1</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.3.1\" style=\"font-size:90%;\">21.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.4.1\" style=\"font-size:90%;\">01.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.2.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.2.5.1\" style=\"font-size:90%;\">ViT</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.4.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.4.2.4.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.2.1.1\" style=\"font-size:90%;\">#3 Clustering Base</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.4.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.2.2.1\" style=\"font-size:90%;\">35.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.4.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.2.3.1\" style=\"font-size:90%;\">45.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.4.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.2.4.1\" style=\"font-size:90%;\">09.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.4.2.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.4.2.5.1\" style=\"font-size:90%;\">ResNet50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.5.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.4.2.5.3.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.3.1.1\" style=\"font-size:90%;\">#4 Clustering Base</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.5.3.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.3.2.1\" style=\"font-size:90%;\">38.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.5.3.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.3.3.1\" style=\"font-size:90%;\">47.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.5.3.4\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.3.4.1\" style=\"font-size:90%;\">10.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.5.3.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.5.3.5.1\" style=\"font-size:90%;\">ViT</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.6.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T1.4.2.6.4.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.1.1\" style=\"font-size:90%;\">#5 SpCL\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib13\" title=\"\">13</a><span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.6.4.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.2.1\" style=\"font-size:90%;\">32.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.6.4.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.3.1\" style=\"font-size:90%;\">42.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.6.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.4.1\" style=\"font-size:90%;\">08.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T1.4.2.6.4.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.6.4.5.1\" style=\"font-size:90%;\">ResNet50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.7.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.4.2.7.5.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.1.1\" style=\"font-size:90%;\">#6 CC\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib14\" title=\"\">14</a><span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.7.5.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.2.1\" style=\"font-size:90%;\">37.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.7.5.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.3.1\" style=\"font-size:90%;\">45.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.7.5.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.4.1\" style=\"font-size:90%;\">10.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.7.5.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.7.5.5.1\" style=\"font-size:90%;\">ResNet50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.8.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.4.2.8.6.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.8.6.1.1\" style=\"font-size:90%;\">#7 SpCL</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.8.6.2\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.8.6.2.1\" style=\"font-size:90%;\">37.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.8.6.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.8.6.3.1\" style=\"font-size:90%;\">46.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.8.6.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.8.6.4.1\" style=\"font-size:90%;\">10.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.4.2.8.6.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.8.6.5.1\" style=\"font-size:90%;\">ViT</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.2.9.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_tt\" id=\"S4.T1.4.2.9.7.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.4.2.9.7.1.1\" style=\"font-size:90%;\">#8 </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.4.2.9.7.1.2\" style=\"font-size:90%;\">CPC</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S4.T1.4.2.9.7.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.9.7.2.1\" style=\"font-size:90%;\">45.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S4.T1.4.2.9.7.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.9.7.3.1\" style=\"font-size:90%;\">54.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S4.T1.4.2.9.7.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.9.7.4.1\" style=\"font-size:90%;color:#FF0000;\">14.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_tt\" id=\"S4.T1.4.2.9.7.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.2.9.7.5.1\" style=\"font-size:90%;\">ResNet50</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 1: Comparison with state-of-the-art unsupervised re-identification models on the DeepChange dataset.\n denotes no fine-tuning on DeepChange.\n\u201cClustering Base\u201d means using the original clustering result.\nThe best results are indicated in red."
+ },
+ "2": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.6.1.1\">Table 2</span>: </span>\nComparison with <span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.7.2\">supervised</span> method on DeepChange. \n</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.8\" style=\"width:433.6pt;height:272.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(73.5pt,-46.2pt) scale(1.51299276303376,1.51299276303376) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.8.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.8.1.1.1.1\" rowspan=\"2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.1.1.1.1\" style=\"font-size:90%;\">Network/Model</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S4.T2.8.1.1.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.1.1.2.1\" style=\"font-size:90%;\">Rank</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.1.1.3\" rowspan=\"2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.1.1.3.1\" style=\"font-size:90%;\">mAP</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.8.1.2.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.2.2.1.1\" style=\"font-size:90%;\">@1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.8.1.2.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.2.2.2.1\" style=\"font-size:90%;\">@5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.8.1.2.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.2.2.3.1\" style=\"font-size:90%;\">@10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.8.1.2.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.2.2.4.1\" style=\"font-size:90%;\">@20</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.8.1.3.3.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.1.1\" style=\"font-size:90%;\">#9 ResNet50\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">11</a><span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.3.3.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.2.1\" style=\"font-size:90%;\">36.6</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.3.3.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.3.1\" style=\"font-size:90%;\">49.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.3.3.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span 
class=\"ltx_text\" id=\"S4.T2.8.1.3.3.4.1\" style=\"font-size:90%;\">55.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.3.3.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.5.1\" style=\"font-size:90%;\">61.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.3.3.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.3.3.6.1\" style=\"font-size:90%;\">09.6</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.8.1.4.4.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.1.1\" style=\"font-size:90%;\">#10 MobileNetv2\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib15\" title=\"\">15</a><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.4.4.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.2.1\" style=\"font-size:90%;\">33.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.4.4.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.3.1\" style=\"font-size:90%;\">46.5</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.4.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.4.1\" style=\"font-size:90%;\">52.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.4.4.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.5.1\" style=\"font-size:90%;\">59.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.4.4.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.4.4.6.1\" style=\"font-size:90%;\">07.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.8.1.5.5.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.1.1\" style=\"font-size:90%;\">#11\nDenseNet121\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib17\" title=\"\">17</a><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.5.5.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.2.1\" style=\"font-size:90%;\">38.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.5.5.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.3.1\" style=\"font-size:90%;\">50.2</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.5.5.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.4.1\" style=\"font-size:90%;\">55.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.5.5.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.5.1\" style=\"font-size:90%;\">62.4</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.5.5.6\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.5.5.6.1\" style=\"font-size:90%;\">09.1</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.8.1.6.6.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.1.1\" style=\"font-size:90%;\">#12\nInceptionv3\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib22\" title=\"\">22</a><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.6.6.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.2.1\" style=\"font-size:90%;\">35.0</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.6.6.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.3.1\" style=\"font-size:90%;\">47.7</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.6.6.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.4.1\" style=\"font-size:90%;\">53.9</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.6.6.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.5.1\" style=\"font-size:90%;\">60.6</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.8.1.6.6.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.6.6.6.1\" style=\"font-size:90%;\">08.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.8.1.7.7.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.1.1\" style=\"font-size:90%;\">#13 BNNeck re-id ResNet50\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib20\" title=\"\">20</a><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.7.7.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.2.1\" style=\"font-size:90%;\">47.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.7.7.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.3.1\" style=\"font-size:90%;\">59.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.7.7.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.4.1\" style=\"font-size:90%;\">65.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.7.7.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.5.1\" style=\"font-size:90%;\">71.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.7.7.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.7.7.6.1\" style=\"font-size:90%;\">12.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.8.1.8.8.1\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.1.1\" style=\"font-size:90%;\">#14 ReIDCaps\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib18\" title=\"\">18</a><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.1.3.2\" style=\"font-size:90%;\">]</span></cite><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.1.4\" style=\"font-size:90%;\"> (ResNet50)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.8.8.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.2.1\" style=\"font-size:90%;\">39.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.8.8.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.3.1\" style=\"font-size:90%;\">52.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.8.8.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.4.1\" style=\"font-size:90%;\">58.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.8.8.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.5.1\" style=\"font-size:90%;\">64.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.8.8.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.8.8.6.1\" style=\"font-size:90%;\">11.3</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S4.T2.8.1.9.9.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.1.1\" style=\"font-size:90%;\">#15 ViT B16\u00a0</span><cite class=\"ltx_cite ltx_citemacro_cite\"><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.1.2.1\" style=\"font-size:90%;\">[</span><a class=\"ltx_ref\" href=\"#bib.bib21\" title=\"\">21</a><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.1.3.2\" style=\"font-size:90%;\">]</span></cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.9.9.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.2.1\" style=\"font-size:90%;\">49.7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.9.9.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.3.1\" style=\"font-size:90%;\">61.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.9.9.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.4.1\" style=\"font-size:90%;\">67.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.9.9.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.5.1\" style=\"font-size:90%;\">72.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.8.1.9.9.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.9.9.6.1\" style=\"font-size:90%;color:#FF0000;\">14.9</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.8.1.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_tt ltx_border_tt\" id=\"S4.T2.8.1.10.10.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text\" id=\"S4.T2.8.1.10.10.1.1\" 
style=\"font-size:90%;\">#16 </span><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.8.1.10.10.1.2\" style=\"font-size:90%;\">CPC (Ours, unsupervised)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_tt ltx_border_tt\" id=\"S4.T2.8.1.10.10.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.10.10.2.1\" style=\"font-size:90%;\">45.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_tt ltx_border_tt\" id=\"S4.T2.8.1.10.10.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.10.10.3.1\" style=\"font-size:90%;\">54.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_tt ltx_border_tt\" id=\"S4.T2.8.1.10.10.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.10.10.4.1\" style=\"font-size:90%;\">58.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_tt ltx_border_tt\" id=\"S4.T2.8.1.10.10.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.10.10.5.1\" style=\"font-size:90%;\">63.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_tt ltx_border_tt\" id=\"S4.T2.8.1.10.10.6\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text\" id=\"S4.T2.8.1.10.10.6.1\" style=\"font-size:90%;\">14.6</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
+ "capture": "Table 2: Comparison with supervised methods on DeepChange."
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.6.1.1\">Table 3</span>: </span> Ablation study on DeepChange.\n</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.2\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.2.3.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T3.2.3.1.1\" rowspan=\"2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.3.1.1.1\" style=\"font-size:90%;\">CPC</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" colspan=\"4\" id=\"S4.T3.2.3.1.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.3.1.2.1\" style=\"font-size:90%;\">Rank</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.3.1.3\" rowspan=\"2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.3.1.3.1\" style=\"font-size:90%;\">mAP</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.4.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.4.2.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.4.2.1.1\" style=\"font-size:90%;\">@1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.4.2.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.4.2.2.1\" style=\"font-size:90%;\">@5</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.4.2.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.4.2.3.1\" style=\"font-size:90%;\">@10</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.2.4.2.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.2.4.2.4.1\" style=\"font-size:90%;\">@20</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_tt\" id=\"S4.T3.1.1.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.1.1.1.1\" style=\"font-size:90%;\">#17\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.1.2.1\" style=\"font-size:90%;\">41.8</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.1.3.1\" style=\"font-size:90%;\">51.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.1.4.1\" style=\"font-size:90%;\">54.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" id=\"S4.T3.1.1.5.1\" style=\"font-size:90%;\">59.2</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T3.1.1.6\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text\" 
id=\"S4.T3.1.1.6.1\" style=\"font-size:90%;\">12.4</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.2.2\">\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_row ltx_border_b\" id=\"S4.T3.2.2.1\" style=\"padding-left:8.5pt;padding-right:8.5pt;\">\n<span class=\"ltx_text\" id=\"S4.T3.2.2.1.1\" style=\"font-size:90%;\">#18\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.2.2.2\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.2.1\" style=\"font-size:90%;\">45.9</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.2.2.3\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.3.1\" style=\"font-size:90%;\">54.0</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.2.2.4\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.4.1\" style=\"font-size:90%;\">58.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.2.2.5\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.5.1\" style=\"font-size:90%;\">63.3</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T3.2.2.6\" style=\"padding-left:8.5pt;padding-right:8.5pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.2.2.6.1\" style=\"font-size:90%;\">14.6</span></td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 3: Ablation study on DeepChange."
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2202.03087v3_figure_1.png",
+ "caption": "Fig. 1: Visualizing the inherent challenges of long-term person re-identification.\nAll the images belong to a single person. The differences in appearance across different clothing are clear.",
+ "url": "http://arxiv.org/html/2202.03087v3/x1.png"
+ },
+ "2": {
+ "figure_path": "2202.03087v3_figure_2.png",
+ "caption": "Fig. 2: The flowchart of Curriculum Person Clustering (CPC).",
+ "url": "http://arxiv.org/html/2202.03087v3/x2.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "\u201cUnsupervised tracklet person re-identification,\u201d",
+ "author": "Minxian Li, Xiatian Zhu, and Shaogang Gong,",
+ "venue": "TPAMI, 2019.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "\u201cLearning shape representations for clothing variations in person re-identification,\u201d",
+ "author": "Yu-Jhe Li, Zhengyi Luo, Xinshuo Weng, and Kris M Kitani,",
+ "venue": "arXiv preprint arXiv:2003.07340, 2020.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "\u201cPerson re-identification by video ranking,\u201d",
+ "author": "Taiqing Wang, Shaogang Gong, Xiatian Zhu, and Shengjin Wang,",
+ "venue": "in ECCV, 2014.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "\u201cPerson re-identification by contour sketch under moderate clothing change,\u201d",
+ "author": "Qize Yang, Ancong Wu, and Wei-Shi Zheng,",
+ "venue": "TPAMI, 2019.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "\u201cDeepchange: A long-term person re-identification benchmark,\u201d",
+ "author": "Peng Xu and Xiatian Zhu,",
+ "venue": "arXiv preprint arXiv:2105.14685, 2021.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "\u201cUnsupervised person re-identification via softened similarity learning,\u201d",
+ "author": "Yutian Lin, Lingxi Xie, Yu Wu, Chenggang Yan, and Qi Tian,",
+ "venue": "in CVPR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "\u201cCluster-guided asymmetric contrastive learning for unsupervised person re-identification,\u201d",
+ "author": "Mingkun Li, Chun-Guang Li, and Jun Guo,",
+ "venue": "arXiv preprint arXiv:2106.07846, 2021.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "\u201cFine-grained shape-appearance mutual learning for cloth-changing person re-identification,\u201d",
+ "author": "Peixian Hong, Tao Wu, Ancong Wu, Xintong Han, and Wei-Shi Zheng,",
+ "venue": "in CVPR, 2021.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "\u201cAutomatic curriculum learning for deep RL: A short survey,\u201d",
+ "author": "R\u00e9my Portelas, C\u00e9dric Colas, Lilian Weng, Katja Hofmann, and Pierre-Yves Oudeyer,",
+ "venue": "arXiv preprint arXiv:2003.04664, 2020.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "\u201cOn the power of curriculum learning in training deep networks,\u201d",
+ "author": "Guy Hacohen and Daphna Weinshall,",
+ "venue": "in ICML, 2019.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "\u201cDeep residual learning for image recognition,\u201d",
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun,",
+ "venue": "in CVPR, 2016.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "\u201cA density-based algorithm for discovering clusters in large spatial databases with noise,\u201d",
+ "author": "Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, and Xiaowei Xu,",
+ "venue": "in SIGKDD, 1996.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "\u201cSelf-paced contrastive learning with hybrid memory for domain adaptive object re-id,\u201d",
+ "author": "Yixiao Ge, Feng Zhu, Dapeng Chen, Rui Zhao, and Hongsheng Li,",
+ "venue": "in NeurIPS, 2020.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "\u201cCluster contrast for unsupervised person re-identification,\u201d",
+ "author": "Zuozhuo Dai, Guangyuan Wang, Siyu Zhu, Weihao Yuan, and Ping Tan,",
+ "venue": "arXiv preprint arXiv:2103.11568, 2021.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "\u201cMobilenetv2: Inverted residuals and linear bottlenecks,\u201d",
+ "author": "Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen,",
+ "venue": "in CVPR, 2018.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "\u201cLearning generalisable omni-scale representations for person re-identification,\u201d",
+ "author": "Kaiyang Zhou, Yongxin Yang, Andrea Cavallaro, and Tao Xiang,",
+ "venue": "TPAMI, 2021.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "\u201cDensely connected convolutional networks,\u201d",
+ "author": "Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger,",
+ "venue": "in CVPR, 2017.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "\u201cBeyond scalar neuron: Adopting vector-neuron capsules for long-term person re-identification,\u201d",
+ "author": "Yan Huang, Jingsong Xu, Qiang Wu, Yi Zhong, Peng Zhang, and Zhaoxiang Zhang,",
+ "venue": "TCSVT, 2019.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "\u201cTraining data-efficient image transformers & distillation through attention,\u201d",
+ "author": "Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv\u00e9 J\u00e9gou,",
+ "venue": "in ICML, 2021.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "\u201cA strong baseline and batch normalization neck for deep person re-identification,\u201d",
+ "author": "Hao Luo, Wei Jiang, Youzhi Gu, Fuxu Liu, Xingyu Liao, Shenqi Lai, and Jianyang Gu,",
+ "venue": "TMM, 2020.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "\u201cAn image is worth 16x16 words: Transformers for image recognition at scale,\u201d",
+ "author": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.,",
265
+ "venue": "arXiv preprint arXiv:2010.11929, 2020.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "22": {
271
+ "title": "\u201cRethinking the inception architecture for computer vision,\u201d",
272
+ "author": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew\nWojna,",
273
+ "venue": "in CVPR, 2016.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "23": {
279
+ "title": "\u201cCluster-guided asymmetric contrastive learning for unsupervised\nperson re-identification,\u201d",
280
+ "author": "Mingkun Li, Chun-Guang Li, and Jun Guo,",
281
+ "venue": "IEEE Transactions on Image Processing, vol. 31, pp. 3606\u20133617,\n2022.",
282
+ "url": null
283
+ }
284
+ }
285
+ ],
286
+ "url": "http://arxiv.org/html/2202.03087v3"
287
+ }
20240123/2202.12312v2.json ADDED
@@ -0,0 +1,485 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "title": "Oolong: Investigating What Makes Transfer Learning Hard with Controlled Studies",
3
+ "abstract": "When we transfer a pretrained language model to a new language, there are many axes of variation that change at once. To disentangle the impact of different factors like syntactic similarity and vocabulary similarity, we propose a set of controlled transfer studies: we systematically transform the language of the GLUE benchmark, altering one axis of crosslingual variation at a time, and then measure the resulting drops in a pretrained model\u2019s downstream performance. We find that models can largely recover from syntactic-style shifts, but cannot recover from vocabulary misalignment and embedding matrix re-initialization, even with continued pretraining on 15 million tokens. Moreover, good-quality tokenizers in the transfer language do not make vocabulary alignment easier. Our experiments provide insights into the factors of cross-lingual transfer that researchers should most focus on when designing language transfer scenarios.\n\u2020\u2020In Chinese, \u201cOolong\u201d can refer to an unexpected change or development. Equal contribution. Corresponding author.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "What makes it hard for neural networks to learn new languages? Large language models (LLMs) require vast datasets for pretraining, making it challenging to train LLMs from scratch for low-resource languages Devlin et al. (2018 ###reference_8###); Liu et al. (2019 ###reference_18###); Lacoste et al. (2019 ###reference_14###); Clark et al. (2020 ###reference_5###). For such languages, an appealing approach is to transfer knowledge from an LLM trained for a high-resource language, especially since pretrained models can transfer knowledge across even extreme shifts Papadimitriou and Jurafsky (2020 ###reference_26###); Tamkin et al. (2020 ###reference_37###).\nA range of methods have been explored to enable such crosslingual transfer of English LLMs, using techniques such as adaptive pretraining Reimers and Gurevych (2020 ###reference_33###), and embedding retraining Artetxe et al. (2020 ###reference_3###); Tran (2020 ###reference_38###). To better understand the factors affecting successful transfer, we present a set of controlled transfer studies to compare the effects of different aspects of a cross-lingual shift.\n###figure_1### Our controlled studies consist of transferring an English model to a language that is transformed from English on just one axis of variation. Realistic transfer scenarios involve languages that differ across multiple axes of variation at one time. Our experiments serve to disentangle these effects, and identify the issues that practitioners should most focus on when doing cross-lingual transfer learning. We examine three factors that are salient in a transfer learning context:\nWord-order syntactic differences: Languages vary greatly in the ways that their syntax orders words. Syntactic topological similarities are generally considered an important factor when deciding transfer language pairs. We test the effects of different levels of word-order perturbation in transfer learning.\nWord identity alignments: Transferring to a new language requires learning the meaning, or word embeddings, of new words, and how their layer 0 embeddings correspond to the old language. We experiment with the effect of re-initializing or shuffling the rows of the layer 0 word embedding matrix before transfer.\nTokenizer quality We test the effect of bad tokenizer quality by reinitializing the word embedding matrix and transferring to English data tokenized with French and Dutch tokenizers that are suboptimal quality for English tokenization.\nWe test the effect of these factors on transfer learning both by 1) directly fine-tuning on t-English versions of the GLUE benchmark, as well as 2) continuing masked language model pre-training on 15 million tokens of t-English wikitext. In all cases, we find that word identity alignment provides the greatest stumbling block for transfer learning. Re-initializing or shuffling the rows of the embedding matrix has a very negative effect on downstream learning which we cannot reverse in the low-data regime that we are simulating. If the embedding matrix is reinitialized and a new tokenizer is used, the effect of reinitialization overshadows any effect that the quality of the new tokenizer might cause. 
In the case of syntactic word-order transformations, we find that even in the low-data transfer learning regime, the models we test can adapt to word order shifts as long as vocabulary information is kept.\nWe run experiments on RoBERTa, DeBERTa, and XLM-R in order to test transfer learning beyond the training set languages for both monolingual and multilingual models. Our method allows us to disentangle the effects of correlated factors by inspecting them one at a time.111Our code is available publicly at https://github.com/frankaging/oolong-crosslingual ###reference_lingual###."
10
+ },
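As an illustration of the word-identity interventions described in this introduction (shuffling versus re-initializing the layer-0 embedding matrix), a minimal PyTorch sketch follows. It assumes the HuggingFace transformers library and a roberta-base checkpoint; it is an illustration, not the authors' released code.

```python
# Minimal sketch of the two word-identity interventions (illustrative only).
import torch
from transformers import RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained("roberta-base")
emb = model.get_input_embeddings()  # nn.Embedding of shape (vocab_size, hidden)

# (a) Shuffle: permute the rows so each token id points at some other token's
#     pretrained vector; embedding statistics are preserved, alignment is not.
perm = torch.randperm(emb.weight.size(0))
with torch.no_grad():
    emb.weight.copy_(emb.weight[perm])

# (b) Reinitialize: discard the pretrained vectors entirely and draw fresh
#     ones from the model's standard initializer (an alternative to (a)).
with torch.no_grad():
    emb.weight.normal_(mean=0.0, std=model.config.initializer_range)
```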
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "As self-supervised pretraining advances the state of NLP in high-resource languages, research into widening these successes beyond high-resource languages has become widespread and important. Methodologies for best transferring a monolingual or multilingual model to an unseen language are widely explored. Ogueji et al. (2021 ###reference_21###) and Ogunremi et al. (2023 ###reference_22###), showcase the positive effects of pretraining on closer and related languages to the target language, even if this is less data than larger pretrained models, in part because of the possibility of shared vocabulary (Oladipo et al., 2022 ###reference_23###). Our experiments build off previous efforts that try to enable crosslingual transfer from pretrained monolingual LLMs to new languages Artetxe et al. (2018 ###reference_2###, 2020 ###reference_3###); Tran (2020 ###reference_38###); Reimers and Gurevych (2020 ###reference_33###); Gogoulou et al. (2021 ###reference_11###).\nWith respect to vocabulary sharing and adaptation, Liang et al. (2023 ###reference_17###) show that training a multilingual model with a massive vocabulary that separates out languages outweighs the benefits of vocabulary sharing between language Patil et al. (2022 ###reference_27###), while in the transfer regime Chronopoulou et al. (2020 ###reference_4###) showcase the importance of maintaining vocabulary overlap. Techniques mapping subword embeddings to their new synonyms, or keeping subwords in the same script across languages, prove effective for cross-lingual transfer (Vernikos and Popescu-Belis, 2021 ###reference_39###; Pfeiffer et al., 2021 ###reference_29###, 2020 ###reference_28###; Muller et al., 2021 ###reference_20###). The importance of embedding intialization statistics is discussed in (Raghu et al., 2019 ###reference_32###).\nResults on the importance of syntactic shifts remain broad, with work on multilingual training suggesting that syntactic shifts are significant compared to vocabulary effects (K et al., 2020 ###reference_12###), and that syntactic structure plays a role in developing parallel multilingual encodings (Dufter and Sch\u00fctze, 2020 ###reference_9###), while Deshpande et al. (2022 ###reference_7###) show intersecting effects of vocabulary and word order shifts.\nUnderstanding the direct relationship between the effect of syntactic shifts and the effect of vocabulary and tokenizer shifts remains an important problem in understanding transfer learning. Our work creates a framework for decomposing and disentangling the difficulties of transfer in controlled studies, giving researchers pointers for what aspects of language variation make transfer difficult."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Methods",
21
+ "text": "Our methodology consists of taking a pretrained model, and transferring to a t-English: a systematically transformed version of English data that differs from English on one axis of variation. The different t-Englishes that we use are described and motivated below, and examples are in Table 1 ###reference_###. We consider two low-data transfer environments: Direct Fine-tuning, where we transfer the English pretrained model directly to t-GLUE, transformed GLUE datasets (Wang et al., 2018 ###reference_40###), and Continued Pretraining, where we first do masked language modeling training on 15 million tokens of the WikiText-103M corpus Merity et al. (2016 ###reference_19###) transformed to t-English. 222For comparison, the pretraining data for RoBERTa contains 3.3B tokens, so 15M tokens is about 0.45% of its pretraining data. This is comparable to the size of the OSCAR corpus for Yiddish Ortiz Su\u00e1rez et al. (2019 ###reference_24###)."
22
+ },
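The Continued Pretraining setting described above can be reproduced in outline with standard tooling. Below is a minimal sketch assuming the HuggingFace datasets and transformers libraries, with the reverse-order transformation standing in for the t-English transforms (the `to_t_english` helper is a hypothetical stand-in, not the authors' script):

```python
# Minimal sketch of continued MLM pretraining on transformed WikiText
# before fine-tuning on t-GLUE (illustrative, not the released pipeline).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

def to_t_english(text):  # stand-in: the reverse word-order transformation
    return " ".join(reversed(text.split()))

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

wiki = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")
wiki = wiki.map(
    lambda batch: tok([to_t_english(t) for t in batch["text"]], truncation=True),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt", max_steps=10_000),
    train_dataset=wiki,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()
```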
23
+ {
24
+ "section_id": "3.1",
25
+ "parent_section_id": "3",
26
+ "section_name": "Transformed English (t-Englishes)",
27
+ "text": "While syntax is a crucial aspect of language (Garrett, 1976 ###reference_10###), how sensitive or invariant lanugage models are to syntactic information is a complex topic (Pham et al., 2021 ###reference_30###; Sinha et al., 2021 ###reference_36###; Papadimitriou et al., 2022 ###reference_25###; Abdou et al., 2022 ###reference_1###). In the domain of transfer learning, we investigate a set of syntactic transformations that isolate syntactic word-order shifts from the other factors that differ between languages. We bound our syntactic transformation experiments with a random shuffle control, where no word order information from the original language can be used to decode the new language. We also do the simple, but drastic baseline of reversing the order of all of the words in the input. In order to test the effect of more realistic syntactic changes, we transform the English data into t-Englishes that follow the word-order statistics of other language. Using the Galactic Dependencies package (Wang and Eisner, 2016 ###reference_41###) with Stanza Qi et al. (2020 ###reference_31###) to transform our corpora to match the ordering of words in noun phrases and verb phrases of French () and Japanese () and also perform a mixed transformation with French noun order and Japanese verb order ().\nPrevious works have consistently found that good embeddings are crucial for enabling effective crosslingual transfer Tran (2020 ###reference_38###); Artetxe et al. (2020 ###reference_3###). However, these gains may due to several factors, including better initialization statistics (Raghu et al., 2019 ###reference_32###), or to a learned alignment between the learned embeddings and the pretrained transformer layers (Wu et al., 2021 ###reference_43###). Here, we test the baseline effect of reinitializing the embedding layer while transferring to the same language that the model was pretrained. We compare this to a scenario where the rows of the embedding matrix are shuffled, meaning that vector statistics are broadly similar but each word has been swapped with another and the model needs to find the mapping during fine-tuning.\nHow much does tokenizer quality matter, if the price of a better tokenizer is having to reinitialize the whole word embedding matrix? Though quality tokenizers undoubtedly play an important role in multilingual NLP (Rust et al., 2020 ###reference_34###), we wish to compare the effect of tokenizer quality when the word identity alignment problem remains constant. While re-initializing the embedding matrix, we compare the effects of the original RoBERTa tokenizer, to two tokenizers that produce low-quality tokenizations for English text: the French FlauBERT Le et al. (2020 ###reference_16###) and the Dutch DutchBERT de Vries et al. (2019 ###reference_6###). The non-English tokenizers used to tokenize English text simulate the effect of having a bad, non-language-specific tokenizer in the low data regime (see Appendix B ###reference_### for statistics on how the different tokenizers work on English).\n###figure_2### ###figure_3### ###figure_4### ###figure_5###"
28
+ },
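The two bounding word-order transformations described above (full reversal and random shuffle) amount to simple string operations; a minimal sketch follows, using the SST-2 example sentence from Table 1. The French/Japanese transformations instead rely on the Galactic Dependencies package and are not reproduced here.

```python
# Minimal sketch of the reverse and random word-order baselines.
import random

def reverse_order(sentence: str) -> str:
    # Reverse Order: flip the sequence of all words in the input.
    return " ".join(reversed(sentence.split()))

def random_order(sentence: str, seed: int = 0) -> str:
    # Random Order: shuffle words so no original order information survives.
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

s = "the film unfolds with all the mounting tension of an expert thriller"
print(reverse_order(s))  # "thriller expert an of tension mounting the all ..."
print(random_order(s))
```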
29
+ {
30
+ "section_id": "4",
31
+ "parent_section_id": null,
32
+ "section_name": "Results",
33
+ "text": "We present the main results of our transfer experiments. Our experimental details (e.g. hyperparameter choices) with a per-task breakdown of t-GLUE performance as well as additional results on DeBERTa and XLM-R are included in Appendix A ###reference_###."
34
+ },
35
+ {
36
+ "section_id": "4.1",
37
+ "parent_section_id": "4",
38
+ "section_name": "Syntax matters, but training can mostly recover",
39
+ "text": "Word order permutations have an effect on model performance, but the models that we test can recover relatively well from linguistic word order permutations when there are no vocabulary confounders. As shown in Figure 2 ###reference_###, simply by fine-tuning on GLUE RoBERTa can recover from linguistic-style syntactic shifts relatively well, though this is significantly worse for random word order permutations that have no consistency or syntactic backing. These differences are all lessened with continued pretraining on 15M tokens of the transformed t-English data. These results suggest that syntactic shifts have real but limited impact on crosslingual transfer when disentangled from vocabulary learning effects.\n###figure_6###"
40
+ },
41
+ {
42
+ "section_id": "4.2",
43
+ "parent_section_id": "4",
44
+ "section_name": "Good embeddings matter most, bad embeddings can ruin a good tokenizer",
45
+ "text": "Looking at the isolated effect of vocabulary, we find that in the low-data transfer regime the model has a hard time reconstructing a reinitialized embedding matrix. As shown in Figure 3 ###reference_###, reinitializing the embedding matrix causes huge failures for the direct fine-tune case, and the quality of the tokenizer (language-bespoke versus not) do not have an effect beyond this. Our results suggest that tokenization may thus be a \u201clower-order bit\u201d for crosslingual transfer, which has little impact until good word embeddings are learned. In the direct fine-tuning case, shuffling the word embedding matrix is significantly better than reinitializing the embeddings, though this difference disappears with continued pretraining."
46
+ },
47
+ {
48
+ "section_id": "5",
49
+ "parent_section_id": null,
50
+ "section_name": "Conclusions",
51
+ "text": "In this paper, we propose a paradigm to study crosslingual transfer through transformations which simulate and disentangle the linguistic changes across languages. Our results suggest that solving the embedding alignment problem is the \"high-order bit\" for crosslingual transfer: it has the largest impact on finetuning performance and is the least improved by continued pretraining. Thus, future progress on solving this problem in large-scale transformers may have outsized impact."
52
+ }
53
+ ],
54
+ "appendix": [
55
+ {
56
+ "section_id": "Appendix 1",
57
+ "parent_section_id": null,
58
+ "section_name": "Appendix A Results on other models",
59
+ "text": "We present the results in Figures 2 ###reference_### and 3 ###reference_### for two more models: DeBERTa and the cross-lingual model XLM-R:\n###figure_7### ###figure_8###"
60
+ },
61
+ {
62
+ "section_id": "Appendix 2",
63
+ "parent_section_id": null,
64
+ "section_name": "Appendix B Sequence Length Distribution",
65
+ "text": "As described in Section 3.1 ###reference_SSS0.Px3###, we try four different tokenizers to substitute for our RoBERTa Liu et al. (2019 ###reference_18###) model that uses the Byte-Pair Encoding (BPE) (Sennrich et al., 2015 ###reference_35###) tokenizer. Specifically, we substitue with the WordPiece tokenizer Wu et al. (2016 ###reference_42###) used by BERT Devlin et al. (2018 ###reference_8###) (i.e., BERT Tokenizer in Table 1 ###reference_###) and the SentencePiece tokenizer Kudo and Richardson (2018 ###reference_13###) used by Albert Lan et al. (2019 ###reference_15###) (i.e., Albert Tokenizer in Table 1 ###reference_###). Additionally, we substitute with two new non-English tokenizers including the French FlauBERT Le et al. (2020 ###reference_16###) (FlauBERT Tokenizer in Table 1 ###reference_###) and the Dutch DutchBERT de Vries et al. (2019 ###reference_6###) (DutchBERT Tokenizer in Table 1 ###reference_###). As shown in Figure 7 ###reference_###, we plot the distributions of sequence lengths as a measure of the heterogeneity introduced by new tokenizers to ensure variences across tokenized sequence lengths. Specifically, we see there are inferior tokenizers such as FlauBERT Tokenizer with a 22.15% increase in sequence length. Our results are consistent with previous findings Rust et al. (2020 ###reference_34###) where sequence length distributions are closer."
66
+ },
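The sequence-length comparison in this appendix can be reproduced by tokenizing the same English text with each tokenizer and counting tokens; a minimal sketch follows. The HuggingFace checkpoint ids are plausible assumptions, not necessarily the exact checkpoints used.

```python
# Minimal sketch: compare tokenized sequence lengths across tokenizers.
from transformers import AutoTokenizer

tokenizers = {
    "RoBERTa":   "roberta-base",
    "BERT":      "bert-base-uncased",
    "FlauBERT":  "flaubert/flaubert_base_cased",   # assumed checkpoint id
    "DutchBERT": "GroNLP/bert-base-dutch-cased",   # assumed checkpoint id
}

text = ("the film unfolds with all the mounting tension of an expert "
        "thriller , until the tragedy beneath it all gradually reveals itself .")

for name, ckpt in tokenizers.items():
    tok = AutoTokenizer.from_pretrained(ckpt)
    n = len(tok(text)["input_ids"])  # includes special tokens
    print(f"{name:10s} {n:3d} tokens")
```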
67
+ {
68
+ "section_id": "Appendix 3",
69
+ "parent_section_id": null,
70
+ "section_name": "Appendix C Training Set-up Details",
71
+ "text": ""
72
+ },
73
+ {
74
+ "section_id": "Appendix 4",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix D Detailed GLUE Task Performance",
77
+ "text": "Table 2 ###reference_### shows performance break-down for individual GLUE task under different transformations as described in Section 3.1 ###reference_###. The individual t-GLUE and GLUE results are included in Table 2 ###reference_###. We find a consistent picture across most of the tasks, with some interesting effects like CoLA (which is more syntax-sensitive) being impacted more by syntactic shifts."
78
+ }
79
+ ],
80
+ "tables": {
81
+ "1": {
82
+ "table_html": "<figure class=\"ltx_table\" id=\"S2.T1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S2.T1.8\" style=\"width:433.6pt;height:277.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-52.1pt,33.3pt) scale(0.806358519360442,0.806358519360442) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S2.T1.8.8\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_tt\" id=\"S2.T1.8.8.9.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.8.8.9.1.1.1\">Transformation Type</span></th>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S2.T1.8.8.9.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S2.T1.8.8.9.1.2.1\">Sentence / Sequence</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.10.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S2.T1.8.8.10.2.1\" style=\"padding-bottom:8.61108pt;\">Original English</th>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S2.T1.8.8.10.2.2\" style=\"padding-bottom:8.61108pt;\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.8.8.10.2.2.1\">\u201cthe film unfolds with all the mounting tension of an expert thriller , until the tragedy beneath it all gradually reveals itself .\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.11.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.8.8.11.3.1\">Random Order</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.8.8.11.3.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.8.8.11.3.2.1\">\u201can all all gradually beneath thriller with reveals . until tension tragedy mounting the it of the the expert , unfolds itself film\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.12.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.8.8.12.4.1\" style=\"padding-bottom:4.30554pt;\">Reverse Order</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.8.8.12.4.2\" style=\"padding-bottom:4.30554pt;\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.8.8.12.4.2.1\">\u201c. itself reveals gradually all it beneath tragedy the until , thriller expert an of tension mounting the all with unfolds film the\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.1.1.1.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.1.1.1.2.1\">\u201cthe film with all the of an expert , until the beneath all gradually . itself reveals it tragedy thriller tension mounting unfolds\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.2.2.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.2.2.2.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.2.2.2.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.2.2.2.2.1\">\u201cthe film unfolds with all the tension of an thriller , until the tragedy beneath it all gradually itself . reveals expert mounting\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.3.3.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.3.3.3.1\" style=\"padding-bottom:8.61108pt;\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.3.3.3.2\" style=\"padding-bottom:8.61108pt;\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.3.3.3.2.1\">\u201cthe film unfolds with all the of an expert , until the beneath all gradually . 
itself reveals it tragedy thriller tension mounting\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.4.4.4.1\">\n<span class=\"ltx_text ltx_markedasmath ltx_font_typewriter\" id=\"S2.T1.4.4.4.1.1\">RoBERTa</span> Tokenizer</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.4.4.4.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.4.4.4.2.1\">\u201cthe film unfolds with all the mounting tension of an expert thriller , until the tragedy beneath it all gradually reveals itself .\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.5.5.5.1\">\n<span class=\"ltx_text ltx_markedasmath ltx_font_typewriter\" id=\"S2.T1.5.5.5.1.1\">BERT</span> Tokenizer</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.5.5.5.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.5.5.5.2.1\">\u201cthe film un fold s with all the mounting tension of an expert thriller , until the tragedy beneath it all gradually reveals itself .\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.6.6.6.1\">\n<span class=\"ltx_text ltx_markedasmath ltx_font_typewriter\" id=\"S2.T1.6.6.6.1.1\">Albert</span> Tokenizer</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.6.6.6.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.6.6.6.2.1\">\u201cthe film unfold s with all the mounting tension of an expert thriller , until the tragedy beneath it all gradually reveals itself .\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.7.7.7.1\">\n<span class=\"ltx_text ltx_markedasmath ltx_font_typewriter\" id=\"S2.T1.7.7.7.1.1\">FlauBERT</span> Tokenizer</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.7.7.7.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.7.7.7.2.1\">\u201cthe film un fol ds with all the mou n ting tension of an expert thriller , un til the tr age dy bene ath it all gradu ally re ve als</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.13.5\">\n<th class=\"ltx_td ltx_th ltx_th_row\" id=\"S2.T1.8.8.13.5.1\"></th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.8.8.13.5.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.8.8.13.5.2.1\">it self .\u201d</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S2.T1.8.8.8.1\">\n<span class=\"ltx_text ltx_markedasmath ltx_font_typewriter\" id=\"S2.T1.8.8.8.1.1\">DutchBERT</span> Tokenizer</th>\n<td class=\"ltx_td ltx_align_left\" id=\"S2.T1.8.8.8.2\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.8.8.8.2.1\">\u201cthe film u n f old s with all the mo unt ing te n sion of a n expert thriller , u n til the trage d y ben e ath i t all gra d u ally</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S2.T1.8.8.14.6\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_bb\" id=\"S2.T1.8.8.14.6.1\" style=\"padding-bottom:4.30554pt;\"></th>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S2.T1.8.8.14.6.2\" style=\"padding-bottom:4.30554pt;\"><span class=\"ltx_text ltx_font_slanted\" id=\"S2.T1.8.8.14.6.2.1\">rev e als i t sel f .\u201d</span></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>An example from the SST-2 dataset and its t-English variants. 
Tokenizer pre-fixes and post-fixes such as , , and are not shown for simplicity.</figcaption>\n</figure>",
83
+ "capture": "Table 1: An example from the SST-2 dataset and its t-English variants. Tokenizer pre-fixes and post-fixes such as , , and are not shown for simplicity."
84
+ },
85
+ "2": {
86
+ "table_html": "<figure class=\"ltx_table\" id=\"A4.T2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"A4.T2.12\" style=\"width:433.6pt;height:255.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-73.4pt,43.3pt) scale(0.746993739837428,0.746993739837428) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A4.T2.12.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A4.T2.3.3.3\">\n<th class=\"ltx_td ltx_th ltx_th_row ltx_border_tt\" id=\"A4.T2.3.3.3.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.3.3.3.5.1\">Original</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.3.3.3.6.1\">Token Swap</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.3.3.3.7.1\">Word Swap</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.8\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.3.3.3.8.1\">Reinit(Emb)</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.9\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold\" id=\"A4.T2.3.3.3.9.1\">Bert</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.10\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold\" id=\"A4.T2.3.3.3.10.1\">Albert</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.11\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold\" id=\"A4.T2.3.3.3.11.1\">FlauBERT</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.12\"><span class=\"ltx_text ltx_font_typewriter ltx_font_bold\" id=\"A4.T2.3.3.3.12.1\">DutchBERT</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.13\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.3.3.3.13.1\">Random</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.14\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.3.3.3.14.1\">Reverse</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.3.3.3.3\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.13.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"A4.T2.12.12.13.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.13.1.1.1\">CoLA</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.2\">.58(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.3\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.4\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.5\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.6\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" 
id=\"A4.T2.12.12.13.1.7\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.8\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.9\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.10\">.04(.05)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.11\">.01(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.12\">.16(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.13\">.21(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.12.12.13.1.14\">.12(.01)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.4.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.2\">.59(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.3\">.05(.07)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.4\">.02(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.5\">.06(.05)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.6\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.7\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.8\">.01(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.9\">.00(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.10\">.22(.04)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.11\">.35(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.12\">.45(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.13\">.47(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.4.4.4.14\">.44(.01)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.14.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.14.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.14.2.1.1\">MNLI</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.2\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.3\">.34(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.4\">.50(.08)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.5\">.53(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.6\">.54(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.7\">.53(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.8\">.67(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.9\">.68(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.10\">.82(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.11\">.85(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.12\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.13\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.14.2.14\">.85(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.5.5.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.5.5.5.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.2\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.3\">.72(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.4\">.72(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.5\">.73(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.6\">.73(.01)</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.7\">.71(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.8\">.71(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.9\">.69(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.10\">.82(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.11\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.12\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.13\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.5.5.5.14\">.86(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.15.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.15.3.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.15.3.1.1\">MRPC</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.2\">.88(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.3\">.68(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.4\">.68(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.5\">.68(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.6\">.68(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.7\">.68(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.8\">.76(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.9\">.77(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.10\">.77(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.11\">.85(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.12\">.85(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.13\">.86(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.15.3.14\">.83(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.6.6.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.6.6.6.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.2\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.3\">.83(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.4\">.80(.04)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.5\">.79(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.6\">.82(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.7\">.80(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.8\">.83(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.9\">.78(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.10\">.81(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.11\">.87(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.12\">.87(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.13\">.87(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.6.6.6.14\">.86(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.16.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.16.4.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.16.4.1.1\">QNLI</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.2\">.93(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.3\">.60(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.4\">.54(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.5\">.54(.04)</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"A4.T2.12.12.16.4.6\">.55(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.7\">.52(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.8\">.79(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.9\">.79(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.10\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.11\">.89(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.12\">.90(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.13\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.16.4.14\">.90(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.7.7.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.7.7.7.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.2\">.93(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.3\">.83(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.4\">.82(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.5\">.82(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.6\">.83(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.7\">.82(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.8\">.82(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.9\">.81(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.10\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.11\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.12\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.13\">.92(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.7.7.7.14\">.91(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.17.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.17.5.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.17.5.1.1\">QQP</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.2\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.3\">.77(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.4\">.77(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.5\">.77(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.6\">.76(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.7\">.75(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.8\">.85(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.9\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.10\">.90(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.11\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.12\">.90(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.13\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.17.5.14\">.90(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.8.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.8.8.8.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.2\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.3\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.4\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.5\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.6\">.87(.00)</td>\n<td class=\"ltx_td 
ltx_align_center\" id=\"A4.T2.8.8.8.7\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.8\">.86(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.9\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.10\">.90(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.11\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.12\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.13\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.8.8.8.14\">.91(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.18.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.18.6.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.18.6.1.1\">RTE</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.2\">.65(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.3\">.51(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.4\">.51(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.5\">.53(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.6\">.53(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.7\">.53(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.8\">.54(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.9\">.56(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.10\">.57(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.11\">.60(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.12\">.60(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.13\">.61(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.18.6.14\">.59(.05)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.9.9.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.9.9.9.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.2\">.67(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.3\">.56(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.4\">.53(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.5\">.54(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.6\">.57(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.7\">.59(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.8\">.57(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.9\">.57(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.10\">.59(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.11\">.58(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.12\">.69(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.13\">.64(.05)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.9.9.9.14\">.65(.03)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.19.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.19.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.19.7.1.1\">SST-2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.2\">.94(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.3\">.79(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.4\">.75(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.5\">.79(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.6\">.73(.04)</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.7\">.68(.05)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.8\">.77(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.9\">.78(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.10\">.86(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.11\">.91(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.12\">.92(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.13\">.92(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.19.7.14\">.92(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.10.10.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.10.10.10.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.2\">.94(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.3\">.83(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.4\">.85(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.5\">.85(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.6\">.83(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.7\">.82(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.8\">.82(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.9\">.81(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.10\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.11\">.93(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.12\">.93(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.13\">.93(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.10.10.10.14\">.92(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.20.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.20.8.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.20.8.1.1\">STS-B</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.2\">.89(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.3\">.06(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.4\">.06(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.5\">.06(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.6\">.09(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.7\">.08(.02)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.8\">.74(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.9\">.77(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.10\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.11\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.12\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.13\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.20.8.14\">.88(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.11.11.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.11.11.11.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.2\">.89(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.3\">.76(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.4\">.73(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.5\">.77(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.6\">.79(.01)</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.7\">.78(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.8\">.77(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.9\">.79(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.10\">.88(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.11\">.87(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.12\">.89(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.13\">.89(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.11.11.11.14\">.89(.00)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.21.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"A4.T2.12.12.21.9.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A4.T2.12.12.21.9.1.1\">WNLI</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.2\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.3\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.4\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.5\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.6\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.7\">.58(.03)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.8\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.9\">.56(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.10\">.55(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.11\">.56(.01)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.12\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.13\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.12.12.21.9.14\">.56(.01)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.12.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"A4.T2.12.12.12.1\"></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.2\">.56(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.3\">.52(.06)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.4\">.53(.05)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.5\">.53(.03)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.6\">.55(.02)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.7\">.51(.07)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.8\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.9\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.10\">.55(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.11\">.51(.07)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.12\">.56(.01)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.13\">.56(.00)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.12.12.12.14\">.53(.05)</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>GLUE scores for t-English with different types of interventions including scrambled word identities, syntactic shifts, and tokenizer substitutions with standard deviation (SD) for all tasks across 3 distinct runs with 
different random seeds. The scores with original English sentences are included for comparison. <span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"A4.T2.16.1\">c.p.</span> indicates finetuning results with continued pretrained models.</figcaption>\n</figure>",
87
+ "capture": "Table 2: GLUE scores for t-English with different types of interventions including scrambled word identities, syntactic shifts, and tokenizer substitutions with standard deviation (SD) for all tasks across 3 distinct runs with different random seeds. The scores with original English sentences are included for comparison. c.p. indicates finetuning results with continued pretrained models."
88
+ }
89
+ },
90
+ "image_paths": {
91
+ "1": {
92
+ "figure_path": "2202.12312v2_figure_1.png",
93
+ "caption": "Figure 1: Controlled transfer studies paradigm. We systematically transform GLUE tasks (t-GLUE) to target one linguistic factor, then finetune a pretrained language model on that dataset. The resulting drop in performance indicates the importance of that factor to crosslingual transfer. See Table 1 for the list of transformations.",
94
+ "url": "http://arxiv.org/html/2202.12312v2/x1.png"
95
+ },
96
+ "2(a)": {
97
+ "figure_path": "2202.12312v2_figure_2(a).png",
98
+ "caption": "Figure 2: Models are largely able to adapt to syntactic shifts with minor drops in performance. Averaged GLUE scores for t-Englishes with syntactic shifts. Realistic syntactic shifts slightly impact downstream performance, while reverse and random order impact performance more significantly. Error bars represent 95% confidence intervals over 3 random seeds. Results are depicted for RoBERTa, but are consistent for all 3 models that we tested: RoBERTa, DeBERTa, and XLM-R (all results in Figure 5 in Appendix A).",
99
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/cr_syntax_direct.png"
100
+ },
101
+ "2(b)": {
102
+ "figure_path": "2202.12312v2_figure_2(b).png",
103
+ "caption": "Figure 2: Models are largely able to adapt to syntactic shifts with minor drops in performance. Averaged GLUE scores for t-Englishes with syntactic shifts. Realistic syntactic shifts slightly impact downstream performance, while reverse and random order impact performance more significantly. Error bars represent 95% confidence intervals over 3 random seeds. Results are depicted for RoBERTa, but are consistent for all 3 models that we tested: RoBERTa, DeBERTa, and XLM-R (all results in Figure 5 in Appendix A).",
104
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/syntax_continue.png"
105
+ },
106
+ "3(a)": {
107
+ "figure_path": "2202.12312v2_figure_3(a).png",
108
+ "caption": "Figure 3: Token embedding transformations are hard to recover from, regardless of tokenizer. Averaged GLUE scores for t-Englishes with word identity perturbations. Any embedding reinitialization or shuffling, regardless of the tokenizer ultimately used, has a drastic effect on downstream performance. Error bars represent 95% confidence intervals over 3 random seeds. Results are depicted for RoBERTa, but are consistent for all 3 models that we tested: RoBERTa, DeBERTa, and XLM-R(all results in Figure 6 in Appendix A).",
109
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/cr_tok_direct.png"
110
+ },
111
+ "3(b)": {
112
+ "figure_path": "2202.12312v2_figure_3(b).png",
113
+ "caption": "Figure 3: Token embedding transformations are hard to recover from, regardless of tokenizer. Averaged GLUE scores for t-Englishes with word identity perturbations. Any embedding reinitialization or shuffling, regardless of the tokenizer ultimately used, has a drastic effect on downstream performance. Error bars represent 95% confidence intervals over 3 random seeds. Results are depicted for RoBERTa, but are consistent for all 3 models that we tested: RoBERTa, DeBERTa, and XLM-R(all results in Figure 6 in Appendix A).",
114
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/cr_tok_continue.png"
115
+ },
116
+ "4": {
117
+ "figure_path": "2202.12312v2_figure_4.png",
118
+ "caption": "Figure 4: Our findings generalize to fine-tuning on non-English datasets. Fine-tuning on three different XNLI datasets yields similar findings the English GLUE findings: models can recover from the most extreme syntactic case (random ordering) much more effectively than from any of the embeddings-related perturbations. This indicates that our findings are not related to properties specific to the English language.",
119
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/cr_xnli.png"
120
+ },
121
+ "5": {
122
+ "figure_path": "2202.12312v2_figure_5.png",
123
+ "caption": "Figure 5: Models are largely able to adapt to syntactic shifts with minor drops in performance. Results for the embedding transformations shown for RoBERTa in Figure 2, for all models that we tested: RoBERTa, DeBERTa, and XLM-R.",
124
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/cr_all-models_syntax-direct.png"
125
+ },
126
+ "6": {
127
+ "figure_path": "2202.12312v2_figure_6.png",
128
+ "caption": "Figure 6: Token embedding transformations are hard to recover from. Results for the embedding transformations shown for RoBERTa in Figure 3, for all models that we tested: RoBERTa, DeBERTa, and XLM-R.",
129
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/cr_all-models_tok-direct.png"
130
+ },
131
+ "7": {
132
+ "figure_path": "2202.12312v2_figure_7.png",
133
+ "caption": "Figure 7: Distributions of sequence lengths by different tokenizers.",
134
+ "url": "http://arxiv.org/html/2202.12312v2/extracted/5365144/figures/tokenizer-seq-len.png"
135
+ }
136
+ },
137
+ "validation": true,
138
+ "references": [
139
+ {
140
+ "1": {
141
+ "title": "Word order does matter and shuffled language models know it.",
142
+ "author": "Mostafa Abdou, Vinit Ravishankar, Artur Kulmizev, and Anders S\u00f8gaard. 2022.",
143
+ "venue": "In Proceedings of the 60th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 6907\u20136919.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "2": {
149
+ "title": "Generalizing and improving bilingual word embedding mappings with a\nmulti-step framework of linear transformations.",
150
+ "author": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018.",
151
+ "venue": "In Thirty-second AAAI conference on artificial intelligence.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "3": {
157
+ "title": "On the cross-lingual transferability of monolingual representations.",
158
+ "author": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020.",
159
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics, pages 4623\u20134637.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "4": {
165
+ "title": "Reusing a pretrained language model on languages with limited corpora\nfor unsupervised nmt.",
166
+ "author": "Alexandra Chronopoulou, Dario Stojanovski, and Alexander Fraser. 2020.",
167
+ "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 2703\u20132711.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "5": {
173
+ "title": "Electra: Pre-training text encoders as discriminators rather than\ngenerators.",
174
+ "author": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020.",
175
+ "venue": "arXiv preprint arXiv:2003.10555.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "6": {
181
+ "title": "BERTje: A Dutch BERT Model.",
182
+ "author": "Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli,\nGertjan van Noord, and Malvina Nissim. 2019.",
183
+ "venue": null,
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "7": {
189
+ "title": "When is\nBERT multilingual? isolating crucial ingredients for cross-lingual\ntransfer.",
190
+ "author": "Ameet Deshpande, Partha Talukdar, and Karthik Narasimhan. 2022.",
191
+ "venue": "In Proceedings of the 2022 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pages 3610\u20133623, Seattle, United States. Association for\nComputational Linguistics.",
192
+ "url": "https://doi.org/10.18653/v1/2022.naacl-main.264"
193
+ }
194
+ },
195
+ {
196
+ "8": {
197
+ "title": "Bert: Pre-training of deep bidirectional transformers for language\nunderstanding.",
198
+ "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.",
199
+ "venue": "arXiv preprint arXiv:1810.04805.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "9": {
205
+ "title": "Identifying necessary elements for bert\u2019s multilinguality.",
206
+ "author": "Philipp Dufter and Hinrich Sch\u00fctze. 2020.",
207
+ "venue": "arXiv preprint arXiv:2005.00396.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "10": {
213
+ "title": "Syntactic processes in sentence production.",
214
+ "author": "Merrill F Garrett. 1976.",
215
+ "venue": "New approaches to language mechanisms, 30:231\u2013256.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "11": {
221
+ "title": "Cross-lingual transfer of monolingual models.",
222
+ "author": "Evangelia Gogoulou, Ariel Ekgren, Tim Isbister, and Magnus Sahlgren. 2021.",
223
+ "venue": "arXiv preprint arXiv:2109.07348.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "12": {
229
+ "title": "Cross-lingual\nability of multilingual bert: An empirical study.",
230
+ "author": "Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020.",
231
+ "venue": "In International Conference on Learning Representations.",
232
+ "url": "https://openreview.net/forum?id=HJeT3yrtDr"
233
+ }
234
+ },
235
+ {
236
+ "13": {
237
+ "title": "Sentencepiece: A simple and language independent subword tokenizer\nand detokenizer for neural text processing.",
238
+ "author": "Taku Kudo and John Richardson. 2018.",
239
+ "venue": "arXiv preprint arXiv:1808.06226.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "14": {
245
+ "title": "Quantifying the carbon emissions of machine learning.",
246
+ "author": "Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres.\n2019.",
247
+ "venue": "arXiv preprint arXiv:1910.09700.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "15": {
253
+ "title": "Albert: A lite bert for self-supervised learning of language\nrepresentations.",
254
+ "author": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and\nRadu Soricut. 2019.",
255
+ "venue": "arXiv preprint arXiv:1909.11942.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "16": {
261
+ "title": "Flaubert: Unsupervised language model pre-training for french.",
262
+ "author": "Hang Le, Lo\u00efc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux,\nBenjamin Lecouteux, Alexandre Allauzen, Benoit Crabb\u00e9, Laurent Besacier,\nand Didier Schwab. 2020.",
263
+ "venue": "In Proceedings of the 12th Language Resources and Evaluation\nConference, pages 2479\u20132490.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "17": {
269
+ "title": "Xlm-v: Overcoming the vocabulary bottleneck in multilingual masked\nlanguage models.",
270
+ "author": "Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan\nGhazvininejad, Luke Zettlemoyer, and Madian Khabsa. 2023.",
271
+ "venue": "arXiv preprint arXiv:2301.10472.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "18": {
277
+ "title": "ROBERTa: A robustly\noptimized BERT pretraining approach.",
278
+ "author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer\nLevy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019.",
279
+ "venue": "arXiv preprint arXiv:1907.11692.",
280
+ "url": "https://arxiv.org/abs/1907.11692"
281
+ }
282
+ },
283
+ {
284
+ "19": {
285
+ "title": "Pointer sentinel mixture models.",
286
+ "author": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016.",
287
+ "venue": "arXiv preprint arXiv:1609.07843.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "20": {
293
+ "title": "When being unseen from mbert is just the beginning: Handling new\nlanguages with multilingual language models.",
294
+ "author": "Benjamin Muller, Antonios Anastasopoulos, Beno\u00eet Sagot, and Djam\u00e9\nSeddah. 2021.",
295
+ "venue": "In NAACL-HLT 2021-2021 Conference of the North American Chapter\nof the Association for Computational Linguistics: Human Language\nTechnologies.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "21": {
301
+ "title": "Small data? no\nproblem! exploring the viability of pretrained multilingual language models\nfor low-resourced languages.",
302
+ "author": "Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021.",
303
+ "venue": "In Proceedings of the 1st Workshop on Multilingual\nRepresentation Learning, pages 116\u2013126, Punta Cana, Dominican Republic.\nAssociation for Computational Linguistics.",
304
+ "url": "https://doi.org/10.18653/v1/2021.mrl-1.11"
305
+ }
306
+ },
307
+ {
308
+ "22": {
309
+ "title": "Mini but\nmighty: Efficient multilingual pretraining with linguistically-informed data\nselection.",
310
+ "author": "Tolulope Ogunremi, Dan Jurafsky, and Christopher Manning. 2023.",
311
+ "venue": "In Findings of the Association for Computational Linguistics:\nEACL 2023, pages 1251\u20131266, Dubrovnik, Croatia. Association for\nComputational Linguistics.",
312
+ "url": "https://aclanthology.org/2023.findings-eacl.93"
313
+ }
314
+ },
315
+ {
316
+ "23": {
317
+ "title": "An exploration\nof vocabulary size and transfer effects in multilingual language models for\nafrican languages.",
318
+ "author": "Akintunde Oladipo, Odunayo Ogundepo, Kelechi Ogueji, and Jimmy Lin. 2022.",
319
+ "venue": "In 3rd Workshop on African Natural Language Processing.",
320
+ "url": "https://openreview.net/forum?id=HOZmF9MV8Wc"
321
+ }
322
+ },
323
+ {
324
+ "24": {
325
+ "title": "Asynchronous pipelines\nfor processing huge corpora on medium to low resource infrastructures.",
326
+ "author": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019.",
327
+ "venue": "Proceedings of the Workshop on Challenges in the Management of Large\nCorpora (CMLC-7) 2019. Cardiff, 22nd July 2019, pages 9 \u2013 16, Mannheim.\nLeibniz-Institut f\u00fcr Deutsche Sprache.",
328
+ "url": "https://doi.org/10.14618/ids-pub-9021"
329
+ }
330
+ },
331
+ {
332
+ "25": {
333
+ "title": "When classifying\narguments, BERT doesn\u2019t care about word order\u2026except when it\nmatters.",
334
+ "author": "Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald. 2022.",
335
+ "venue": "In Proceedings of the Society for Computation in Linguistics\n2022, pages 203\u2013205, online. Association for Computational Linguistics.",
336
+ "url": "https://aclanthology.org/2022.scil-1.17"
337
+ }
338
+ },
339
+ {
340
+ "26": {
341
+ "title": "Learning music helps you read: Using transfer to study linguistic\nstructure in language models.",
342
+ "author": "Isabel Papadimitriou and Dan Jurafsky. 2020.",
343
+ "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 6829\u20136839.",
344
+ "url": null
345
+ }
346
+ },
347
+ {
348
+ "27": {
349
+ "title": "Overlap-based\nvocabulary generation improves cross-lingual transfer among related\nlanguages.",
350
+ "author": "Vaidehi Patil, Partha Talukdar, and Sunita Sarawagi. 2022.",
351
+ "venue": "In Proceedings of the 60th Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Papers), pages 219\u2013233,\nDublin, Ireland. Association for Computational Linguistics.",
352
+ "url": "https://doi.org/10.18653/v1/2022.acl-long.18"
353
+ }
354
+ },
355
+ {
356
+ "28": {
357
+ "title": "MAD-X:\nAn Adapter-Based Framework for Multi-Task Cross-Lingual\nTransfer.",
358
+ "author": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Sebastian Ruder. 2020.",
359
+ "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 7654\u20137673, Online. Association\nfor Computational Linguistics.",
360
+ "url": "https://doi.org/10.18653/v1/2020.emnlp-main.617"
361
+ }
362
+ },
363
+ {
364
+ "29": {
365
+ "title": "UNKs\neverywhere: Adapting multilingual language models to new scripts.",
366
+ "author": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Sebastian Ruder. 2021.",
367
+ "venue": "In Proceedings of the 2021 Conference on Empirical Methods in\nNatural Language Processing, pages 10186\u201310203, Online and Punta Cana,\nDominican Republic. Association for Computational Linguistics.",
368
+ "url": "https://doi.org/10.18653/v1/2021.emnlp-main.800"
369
+ }
370
+ },
371
+ {
372
+ "30": {
373
+ "title": "Out of\norder: How important is the sequential order of words in a sentence in\nnatural language understanding tasks?",
374
+ "author": "Thang Pham, Trung Bui, Long Mai, and Anh Nguyen. 2021.",
375
+ "venue": "In Findings of the Association for Computational Linguistics:\nACL-IJCNLP 2021, pages 1145\u20131160, Online. Association for Computational\nLinguistics.",
376
+ "url": "https://doi.org/10.18653/v1/2021.findings-acl.98"
377
+ }
378
+ },
379
+ {
380
+ "31": {
381
+ "title": "Stanza: A Python natural language processing toolkit for many human\nlanguages.",
382
+ "author": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning.\n2020.",
383
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association\nfor Computational Linguistics: System Demonstrations.",
384
+ "url": null
385
+ }
386
+ },
387
+ {
388
+ "32": {
389
+ "title": "Transfusion: Understanding transfer learning for medical imaging.",
390
+ "author": "Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. 2019.",
391
+ "venue": "Advances in neural information processing systems, 32.",
392
+ "url": null
393
+ }
394
+ },
395
+ {
396
+ "33": {
397
+ "title": "Making monolingual sentence embeddings multilingual using knowledge\ndistillation.",
398
+ "author": "Nils Reimers and Iryna Gurevych. 2020.",
399
+ "venue": "In Proceedings of the 2020 Conference on Empirical Methods in\nNatural Language Processing (EMNLP), pages 4512\u20134525.",
400
+ "url": null
401
+ }
402
+ },
403
+ {
404
+ "34": {
405
+ "title": "How good is your tokenizer? on the monolingual performance of\nmultilingual language models.",
406
+ "author": "Phillip Rust, Jonas Pfeiffer, Ivan Vuli\u0107, Sebastian Ruder, and Iryna\nGurevych. 2020.",
407
+ "venue": "arXiv preprint arXiv:2012.15613.",
408
+ "url": null
409
+ }
410
+ },
411
+ {
412
+ "35": {
413
+ "title": "Neural machine translation of rare words with subword units.",
414
+ "author": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015.",
415
+ "venue": "arXiv preprint arXiv:1508.07909.",
416
+ "url": null
417
+ }
418
+ },
419
+ {
420
+ "36": {
421
+ "title": "Masked language modeling and the distributional hypothesis: Order\nword matters pre-training for little.",
422
+ "author": "Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and\nDouwe Kiela. 2021.",
423
+ "venue": "arXiv preprint arXiv:2104.06644.",
424
+ "url": null
425
+ }
426
+ },
427
+ {
428
+ "37": {
429
+ "title": "Investigating transferability in pretrained language models.",
430
+ "author": "Alex Tamkin, Trisha Singh, Davide Giovanardi, and Noah Goodman. 2020.",
431
+ "venue": "Findings of the Association for Computational Linguistics:\nEMNLP 2020.",
432
+ "url": "https://doi.org/10.18653/v1/2020.findings-emnlp.125"
433
+ }
434
+ },
435
+ {
436
+ "38": {
437
+ "title": "From english to foreign languages: Transferring pre-trained language\nmodels.",
438
+ "author": "Ke Tran. 2020.",
439
+ "venue": "arXiv preprint arXiv:2002.07306.",
440
+ "url": null
441
+ }
442
+ },
443
+ {
444
+ "39": {
445
+ "title": "Subword mapping and anchoring across languages.",
446
+ "author": "Giorgos Vernikos and Andrei Popescu-Belis. 2021.",
447
+ "venue": "In Findings of the Association for Computational Linguistics:\nEMNLP 2021, pages 2633\u20132647.",
448
+ "url": null
449
+ }
450
+ },
451
+ {
452
+ "40": {
453
+ "title": "Glue: A multi-task benchmark and analysis platform for natural\nlanguage understanding.",
454
+ "author": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel\nBowman. 2018.",
455
+ "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP:\nAnalyzing and Interpreting Neural Networks for NLP, pages 353\u2013355.",
456
+ "url": null
457
+ }
458
+ },
459
+ {
460
+ "41": {
461
+ "title": "The galactic dependencies treebanks: Getting more data by\nsynthesizing new languages.",
462
+ "author": "Dingquan Wang and Jason Eisner. 2016.",
463
+ "venue": "Transactions of the Association for Computational Linguistics,\n4:491\u2013505.",
464
+ "url": null
465
+ }
466
+ },
467
+ {
468
+ "42": {
469
+ "title": "Google\u2019s neural machine translation system: Bridging the gap between\nhuman and machine translation.",
470
+ "author": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang\nMacherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016.",
471
+ "venue": "arXiv preprint arXiv:1609.08144.",
472
+ "url": null
473
+ }
474
+ },
475
+ {
476
+ "43": {
477
+ "title": "Identifying the limits of cross-domain knowledge transfer for\npretrained models.",
478
+ "author": "Zhengxuan Wu, Nelson F Liu, and Christopher Potts. 2021.",
479
+ "venue": "arXiv preprint arXiv:2104.08410.",
480
+ "url": null
481
+ }
482
+ }
483
+ ],
484
+ "url": "http://arxiv.org/html/2202.12312v2"
485
+ }
20240123/2204.13209v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2205.05173v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2205.05587v3.json ADDED
@@ -0,0 +1,421 @@
1
+ {
2
+ "title": "Choice of training label matters: how to best use deep learning for quantitative MRI parameter estimation",
3
+ "abstract": "Deep learning (DL) is gaining popularity as a parameter estimation method for quantitative MRI. A range of competing implementations have been proposed, relying on either supervised or self-supervised learning. Self-supervised approaches, sometimes referred to as unsupervised, have been loosely based on auto-encoders, whereas supervised methods have, to date, been trained on groundtruth labels. These two learning paradigms have been shown to have distinct strengths. Notably, self-supervised approaches offer lower-bias parameter estimates than their supervised alternatives. This result is counterintuitive \u2013 incorporating prior knowledge with supervised labels should, in theory, lead to improved accuracy. In this work, we show that this apparent limitation of supervised approaches stems from the na\u00efve choice of groundtruth training labels. By using intentionally-non-groundtruth training labels, pre-computed via independent maximum likelihood estimation, we show that the low-bias parameter estimation previously associated with self-supervised methods can be replicated \u2013 and improved on \u2013 within a supervised learning framework. This approach sets the stage for a single, unifying, deep learning parameter estimation framework, based on supervised learning, where trade-offs between bias and variance are made by careful adjustment of training label.\nOur code is available at https://github.com/seancepstein/training_labels.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Magnetic resonance imaging (MRI) is widely regarded as the premier clinical imaging modality, in large part due to the unparalleled range of contrast mechanisms available to it. Conventional MRI exploits this contrast in a purely qualitative manner: images provide only relative information, such that voxel intensities are only meaningful in the context of their neighbours. In contrast, quantitative MRI (qMRI) provides quantitative images, where voxel intensities are directly, and meaningfully, related to underlying tissue properties. Compared to conventional MRI, this approach promises increased reproducibility, interpretability, and tissue insight, at the cost of time-intensive image acquisition and post-processing (Cercignani et al., 2018 ###reference_5###).\nOne of the biggest time and resource bottlenecks in post-processing is parameter estimation, whereby a signal model is fit to the intensity variation across multiple MR images acquired at different experimental settings. Each voxel requires its own independent model fit: solving for the signal model parameters that best described the single voxel\u2019s data. The computational cost of this curve-fitting process, which scales with both voxel number and model complexity, has become a bottleneck for modern qMRI experiments.\nAccelerating curve fittings with deep learning (DL) was first proposed more than 30 years ago (Bishop and Roach, 1992 ###reference_4###), but has only recently gained popularity within the qMRI community (Golkov et al., 2016 ###reference_9###; Bertleff et al., 2017 ###reference_3###; Liu et al., 2020 ###reference_16###; Barbieri et al., 2020 ###reference_2###; Palombo et al., 2020 ###reference_19###). Just like traditional methods, DL relies on model fitting, but the model being fit is a fundamentally different one. Instead of fitting a qMRI signal model to a single voxel of interest (i.e. curve fitting), DL methods fit (\u201ctrain\u201d) a deep neural network (DNN) model to an ensemble of training voxels. This model maps a single voxel\u2019s signal to its corresponding qMRI parameters; the unknowns in its fitting are network weights, rather than qMRI parameters. Once this DNN model has been fit to (\u201ctrained on\u201d) the training data, parameter estimation is reduced to simply applying it to new unseen data, one voxel at a time. This approach offers two broad advantages over traditional fitting: (1) computational cost is amortised: despite being more computationally expensive than one-voxel signal model fitting, DL training only needs to be performed once, for any number of voxels; once trained, networks provide near-instantaneous parameter estimates on new data, and (2) computational cost is front-loaded: model training can be performed away from the clinic, before patient data is acquired.\nTo date, most DL qMRI fitting methods have been implemented within a supervised learning framework (Golkov et al., 2016 ###reference_9###; Bertleff et al., 2017 ###reference_3###; Yoon et al., 2018 ###reference_22###; Liu et al., 2020 ###reference_16###; Palombo et al., 2020 ###reference_19###; Aliotta et al., 2021 ###reference_1###; Yu et al., 2021 ###reference_23###; Gyori et al., 2022 ###reference_12###). This approach trains DNNs to predict groundtruth qMRI model parameters from noisy qMRI signals. 
When compared to conventional fitting, this approach has been found to produce high bias, low variance parameter estimates (Grussu et al., 2021 ###reference_10###; Gyori et al., 2022 ###reference_12###).\nAn alternative class of DL methods has also been proposed, sometimes referred to as unsupervised learning (Barbieri et al., 2020 ###reference_2###; Mpt et al., 2021 ###reference_18###), but more accurately described as self-supervised. In this framework, training labels are not explicitly provided, but are instead extracted by the network from its training input. This label generation is designed such that the network learns to predict signal model parameters corresponding to noise-free signals that most-closely approximate noisy inputs. This self-supervised approach has been found to produce similar results to conventional non-DL fitting, i.e. lower bias and higher variance than its groundtruth-labelled supervised alternative (Barbieri et al., 2020 ###reference_2###; Grussu et al., 2021 ###reference_10###).\nFrom an information theoretic standpoint, the comparison between supervised and self-supervised performance raises an obvious unanswered question. How can it be that supervised methods, which provide strictly more information during training than their self-supervised counterparts, produce more biased parameter estimates?\nIn this work we answer this question by showing that this apparent limitation of supervised approaches stems purely from the selection of groundtruth training labels. By using intentionally-non-groundtruth training labels, pre-computed via independent maximum likelihood estimation, we show that the low-bias parameter estimation previously associated with self-supervised methods can be replicated \u2013 and improved on \u2013 within a supervised learning framework.\nThis approach sets the stage for a single, unifying, deep learning parameter estimation framework, based on supervised learning, where trade-offs between bias and variance can be made, on an application-specific basis, by careful adjustment of training label.\nThe rest of the paper is organized as follows: Section 2 ###reference_### describes existing DL parameter estimation approaches, our proposed method, and how they relate to each other; Section 3 ###reference_### describes the evaluation of our method and its comparison to the state of the art; Section 4 ###reference_### contains our findings; and Section 5 ###reference_### summarizes the contribution and discusses future work."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Theory",
15
+ "text": "Quantitative MRI extracts biomarkers from MR data , producing quantitative spatial maps. We here describe existing voxelwise approaches to this problem (conventional fitting and DL alternatives) as well as our proposed novel method."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Conventional iterative fitting",
21
+ "text": "This method, which relies on maximum likelihood estimation (MLE), extracts biomarkers by performing a voxelwise model fit every time new data is acquired. An appropriate signal model is required, parameterised by parameters of interest; for each combination of , the probability of observing the acquired data is known as the likelihood of those parameters:\nfor acquisitions from sampling scheme and noise model . The model parameters which maximise the likelihood are assumed to best represent the tissue contained within the voxel of interest:\nUnder a Gaussian noise model, this likelihood maximization reduces to the commonly-used non-linear least squares (NLLS):\nunder the assumption of signal model associated with groundtruth biomarkers , sampling scheme , and noise :\nEach of these optimisations has unknowns, which are solved for independently across different voxels; the computational cost scales linearly with the number of voxels .\nDevelopments in qMRI acquisition and analysis have led to increased (i) image spatial resolution (i.e. greater ) and (ii) model complexity (i.e. greater ), such that conventional MLE fitting has become increasingly computationally expensive."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Existing deep learning methods",
27
+ "text": "Deep learning approaches address this by reframing independent problems into a single global model fit: learning the function that maps any to its corresponding :\nDeep neural networks aim to approximate this function by composing a large but finite number of building-block functions, parametrised by network parameters (\u201cweights\u201d):\nIn this context, model fitting (\u201ctraining\u201d), is performed over network parameters and involves maximising \u2019s mean performance over a large set of training examples; the trained network is defined by the best-fit parameters . This fitting problem, whilst more computationally expensive to solve than any individual voxel () MLE, is only tackled once; once is learnt, it can be applied at negligible cost to new, unseen, data. This promise of rapid, zero-cost parameter estimation has led to the development of two broad classes of DL-based parameter estimation methods.\nSupervisedGT methods approximate by minimising the difference between a large number of noise-free training labels (groundtruth parameter values) and corresponding network outputs (noise-free parameter estimates); training loss is calculated in the parameter space :\nwhere is the number of training samples and is a tunable weight matrix which accounts for magnitude differences in signal model parameters. is generally a diagonal matrix, with each diagonal element corresponding to the relative weighting of qMRI parameter ; setting as the identity matrix equally weights all parameters in the training loss.\nThese methods produce higher bias, lower variance parameter estimation than conventional MLE fitting (Grussu et al., 2021 ###reference_10###; Gyori et al., 2022 ###reference_12###) and, by adjusting , can be tailored to selectively boost estimation performance on a subset of the parameter space .\nIn contrast, Self-supervised methods compute training loss within the signal space , by minimising the difference between network inputs (noisy signals) and a filtered representation of network outputs (noise-free signal estimates):\nThese methods, which perform similarly to conventional MLE fitting, produce lower bias, higher variance parameter estimation than SupervisedGT(Grussu et al., 2021 ###reference_10###; Barbieri et al., 2020 ###reference_2###). Unlike SupervisedGT, the relative loss weighting of different signal model parameters is dictated by sampling scheme .\nUnder Gaussian noise conditions, single-voxel Self-supervised loss (i.e. minimising the sum of squared differences between a noisy signal and its noise-free signal estimate) is indistinguishable from the corresponding objective function in conventional fitting.\nIn contrast, under the Rician noise conditions encountered in MRI acquisition(Gudbjartsson and Patz, 1995 ###reference_11###), Self-supervised training loss no longer matches conventional fitting. Indeed, the sum of squared errors between noisy signals and noise-free estimates is not an accurate difference metric in the presence of Rician noise.\nTo summarise: existing supervised DL techniques are associated by high estimation bias, low variance, and end-user flexibility; in contrast, self-supervised methods have lower bias, higher variance, but are limited by the fact their loss is calculated in the signal space ."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Proposed deep learning method",
33
+ "text": "In light of this, we propose SupervisedMLE, a novel parameter estimation method which combines the advantages of SupervisedGT and Self-supervised methods. This method is contrasted to existing techniques in Fig 1 ###reference_###.\nThis method mimics Self-supervised\u2019s low-bias performance by learning a regularised form of conventional MLE, but does so in the parameter space , within a supervised learning framework. This addresses the limitations of Self-supervised: Rician noise modelling is incorporated, and parameter loss weighting is not limited by sampling scheme .\nOur method learns by training on noisy signals paired with conventional MLE labels. These labels act as proxies for the groundtruth parameters we wish to estimate:\nwhere is the maximum likelihood estimate associated with the training sample.\nOur method offers one final advantage over Self-supervised approaches. In addition to the parameter estimation improvements relating to noise model correction and parameter loss weighting, it naturally interfaces with SupervisedGT. In so doing, it presents the opportunity to combine low-bias and low-variance methods into a single, tunable hybrid approach, by a simple weighted sum of each method\u2019s loss function:\n###figure_1###"
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Experimental evaluation",
39
+ "text": "Three classes of network were investigated and compared: SupervisedGT, Self-supervised, and SupervisedMLE, as described in Fig 1 ###reference_###. Additionally, to control for differences in loss function weighting between supervised and unsupervised methods, Self-supervised was converted into supervised form by training SupervisedMLE on Gaussian-model based MLE labels. All models are summarised in Table 1 ###reference_###.\n\nGroundtruth\nN/A\n\nN/A\nN/A\n\nMLE\nRician\n\nMLE\nGaussian\nAll networks were trained and tested on the same datasets; differences in performance can be attributed solely to differences in loss function formulation and training label selection."
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Signal model",
45
+ "text": "The intravoxel incoherent motion (IVIM) model (Le Bihan et al., 1986 ###reference_14###) was investigated as an exemplar 4-parameter non-linear qMRI model which poses a non-trivial model fitting problem and is well-represented in the DL qMRI literature (Bertleff et al., 2017 ###reference_3###; Barbieri et al., 2020 ###reference_2###; Mpt et al., 2021 ###reference_18###; Mastropietro et al., 2022 ###reference_17###; Rozowski et al., 2022 ###reference_20###):\nwhere corresponds to the signal model , corresponds to the sampling scheme , and corresponds to the parameter-vector . In physical terms, IVIM is a two-compartment diffusion model, wherein signal decay arises from both molecular self-diffusion (described by ) and perfusion-induced \u2018pseudo-diffusion\u2019 (described by ). In Equation 11 ###reference_###, is an intensity normalisation factor and denotes the signal fraction corresponding to the perfusing compartment."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Network architecture",
51
+ "text": "Network architecture was harmonised across all network variants, and represents a common choice in the existing qMRI literature (Barbieri et al., 2020 ###reference_2###): 3 fully connected hidden layers, each with a number of nodes matching the number of signal samples (i.e. b-values), and an output layer with a number of nodes matching the number of model parameters. Wider (150 nodes per layer) and deeper (10 hidden layers) networks were investigated and found to have equivalent performance, during both training and testing, at the cost of increased training time. All networks were implemented in Pytorch 1.9.0 with exponential linear unit activation functions (Clevert et al., 2015 ###reference_6###); ELU performance is similar to ReLU, but is more robust to poor network weight initialisation."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Training data",
57
+ "text": "Training datasets were generated at SNR to investigate parameter estimation performance at both high and low noise levels. At each SNR, 100,000 noise-free signals were generated from uniform IVIM parameter distributions (, , , , representing realistic tissue values), sampling them with a real-world acquisition protocol (Zhao et al., 2015 ###reference_24###) ( ), and adding Rician noise. Training data generative parameters were drawn from uniform, rather than in-vivo, parameter distributions to minimise bias in network parameter estimation(Gyori et al., 2022 ###reference_12###). Data were split 80/20 between training and validation. MLE labels were calculated using a bound-constrained non-linear fitting algorithm, implemented with scipy.optimize.minimize, using either Rician log-likelihood (for SupervisedMLE, Rician) or sum of squared errors (for SupervisedMLE, Gaussian) as fitting objective function. This algorithm was initialised with groundtruth values (i.e. generative ) to improve fitting robustness and avoid local minima. Training/validation samples associated with \u2018poor\u2019 MLE labels (defined as lying on the boundary of the bound-constrained estimation space) were held out during training and ignored during validation."
58
+ },
59
+ {
60
+ "section_id": "3.4",
61
+ "parent_section_id": "3",
62
+ "section_name": "Network training",
63
+ "text": "Network training was performed using an Adam optimizer (learning rate = 0.001, betas = (0.9, 0.999), weight decay=0) as follows: SupervisedGT (at SNR 30) was trained 16 times on the same data, each time initialising with different network weights, to improve robustness to local minima during training. From this set of trained networks, a single SupervisedGT network was selected on the basis of validation loss. The trained weights of this selected network were subsequently used to initialise all other networks; in this way, any differences in network performance could be solely attributed to differences in training label selection and training loss formulation. In the case of supervised loss formulations, the inter-parameter weight vector was chosen as the inverse of each parameter\u2019s mean value over the training set, to obtain equal loss weighting across all four IVIM parameters."
64
+ },
65
+ {
66
+ "section_id": "3.5",
67
+ "parent_section_id": "3",
68
+ "section_name": "Testing data",
69
+ "text": "Networks were tested on both synthetic and real qMRI data. The synthetic approach offers (i) known parameter groundtruths to assess estimation against, (ii) arbitrarily large datasets, and (iii) tunable data distributions, but is based on possibly simplified qMRI signals. This approach was used to assess parameter estimation performance in a controlled, rigorous manner; real data was subsequently used to validate the trends observed in silico.\nSynthetic data was generated with sampling, parameter distributions, and noise levels matching those used in network training. The IVIM parameter space in which the networks were trained was uniformly sub-divided 10 times in each dimension, to analyse estimation performance as a function of parameter value. At each point in the parameter space, 500 corresponding noisy signals were generated and used to test network performance, accounting for variation under noise repetition.\nReal data was acquired from the pelvis of a healthy volunteer, who gave informed consent, on a wide-bore 3.0T clinical system (Ingenia, Philips, Amsterdam, Netherlands), 5 slices, 224 x 224 matrix, voxel size = 1.56 x 1.56 x 7mm, TE = 76ms, TR = 516ms, scan time = 44s per 10 b-values listed in subsection 3.3 ###reference_###. For the purposes of assessing parameter estimation methods, we obtained gold standard voxelwise IVIM parameter estimates from a supersampled dataset (16-fold repetition of the above acquisition, within a single scanning session, generating 160 b-values, total scan time = 11m44s). Conventional MLE was performed on this supersampled data to produce best-guess \u201cgroundtruth\u201d parameters. During testing, the supersampled dataset was split into 16 distinct 10 b-value acquisitions, each corresponding to a single realistic clinical acquisition. All images were visually confirmed to be free from motion artefacts.\nThe mismatch in parameter distributions between this in-vivo data (highly non-uniform) and the previously-described synthetic data (uniform by construction) limited the scope for validating our in-silico results. To address this, a final synthetic testing dataset was generated from the in-vivo MLE-derived \u201cgroundtruth\u201d parameters, and was used for direct comparison between real and simulated data."
70
+ },
71
+ {
72
+ "section_id": "3.6",
73
+ "parent_section_id": "3",
74
+ "section_name": "Evaluation metrics",
75
+ "text": "Parameter estimation performance was evaluated using 3 key metrics: (i) mean bias with respect to groundtruth, (ii) mean standard deviation under noise repetition, and (iii) root mean squared error (RMSE) with respect to groundtruth. RMSE is the most commonly used metric to evaluate estimation performance (Barbieri et al., 2020 ###reference_2###; Bertleff et al., 2017 ###reference_3###), but is limited in its ability to disentangle accuracy and precision; to this end, mean bias and standard deviation were used as more specific measures of network performance.\nIt is important to note that all methods were assessed with respect to groundtruth qMRI parameters, even those trained on MLE labels. For these methods, the training and validation loss (MLE-based) differed from the reported testing loss (groundtruth-based)."
76
+ },
77
+ {
78
+ "section_id": "4",
79
+ "parent_section_id": null,
80
+ "section_name": "Results & discussion",
81
+ "text": "This section summarises our main findings and discusses the advantages offered by the parameter estimation method we propose."
82
+ },
83
+ {
84
+ "section_id": "4.1",
85
+ "parent_section_id": "4",
86
+ "section_name": "Comparison of parameter estimation methods",
87
+ "text": "The relative performance of all previously-discussed parameter estimation methods is summarised in Figures 2 ###reference_### and 3 ###reference_###. These figures show the bias, variance (represented by its square root: standard deviation), and RMSE of parameter estimates with respect to groundtruth values, reported for each model parameter as a function of its value over the synthetic test dataset; each plotted point represents an average over 500 noise instantiations and a marginalisation over all non-visualised parameters. Marginalisation was required for visualisation of a 4-dimensional parameter space, but was confirmed to be representative of the entire, non-marginalised space, as discussed in \u00a74.5 ###reference_###.\n###figure_2### ###figure_3### In keeping with previously reported results, we show a bias/variance trade-off between different parameter estimation methods. Conventional MLE fitting is provided as a reference (plotted in black). Approaches which, on a theoretical level, approximate conventional MLE (Self-supervised and SupervisedMLE, plotted in red), are generally associated with low bias, high variance, and high RMSE, whereas groundtruth-labelled supervised methods (plotted in blue) exhibit lower variance and RMSE at the cost of increased bias.\nIncreases in bias, if consistent across parameter space , do not necessarily reduce sensitivity to differences in underlying tissue properties. However, we show that is associated with bias that varies significantly as a function of groundtruth parameter values. This results in a reduction in information content, visualised as the gradient of the bias plots (top row) in Fig 2 ###reference_###. The more negative the gradient, the more parameter estimates are concentrated in the centre of the parameter estimation space , and the lower the ability of the method to distinguish differences in tissue properties. This information loss can be seen in Fig 4 ###reference_###, which compares to conventional MLE fitting, and shows the compression in over the groundtruth parameter-space .\n###figure_4###"
88
+ },
89
+ {
90
+ "section_id": "4.2",
91
+ "parent_section_id": "4",
92
+ "section_name": "Validation against clinical data",
93
+ "text": "The above trends, found in simulation, were also observed in real-world data. Fig 5 ###reference_### shows the bias, variance,\nand RMSE of parameter estimates with respect to \u201cgroundtruth\u201d values (obtained from the supersampled dataset described in \u00a73.5 ###reference_###). The axes of these plots correspond to these reference values. To aid visualisation, 10 uniform bins were constructed along each parameter dimension, into which clinical voxels were assigned based on their \u201cgroundtruth\u201d parameter values. Fig 5 ###reference_### plots the mean bias, standard deviation, and RMSE associated with each bin as a function of the bin\u2019s central value, together with the distribution of voxels across the 10 bins.\nBy calculating the variance of the 16 images, the SNR of this clinical dataset was found to be 15; Fig 2 ###reference_### is therefore the relevant point of comparison. It can be readily seen that the trends observed in simulated data, described in \u00a74.1 ###reference_###, are replicated for , , and the entire range of , namely the regions of parameter-space which are well-represented in the real-world data. Fig 6 ###reference_### confirms that divergence outside of these ranges is due to under-representation in the in vivo test data; the apparent divergences can be replicated in-silico by matching real-world parameter distributions.\nFig 7 ###reference_### contains exemplar parameter maps from the clinical test data, and shows the real-world implications of the trends summarised in Figures 2 ###reference_### and 5 ###reference_###: \u2019s low-variance, low-RMSE parameter estimation results in artificially smooth IVIM maps biased towards mean parameter values.\n###figure_5### ###figure_6### ###figure_7###"
94
+ },
95
+ {
96
+ "section_id": "4.3",
97
+ "parent_section_id": "4",
98
+ "section_name": "Advantages offered by our method",
99
+ "text": "Our proposed method occupies the low-bias side of the bias-variance trade-off discussed in \u00a73.5 ###reference_###, and offers four broad advantages over the competing method in this space (Self-supervised): (i) flexibility in choosing inter-parameter loss weighting , (ii) incorporation of non-Gaussian (e.g. Rician) noise models, (iii) compatibility with complex, non-differentiable signal models , and (iv) ability to interface with low-variance methods, to produce a hybrid approach tunable to the needs of the task at hand. These advantages are analysed in turn."
100
+ },
101
+ {
102
+ "section_id": "4.3.1",
103
+ "parent_section_id": "4.3",
104
+ "section_name": "4.3.1 Choice of inter-parameter loss weighting",
105
+ "text": "By computing loss in parameter-space , our method has total flexibility in adjusting the relative contribution of different in the training loss function. In contrast, since Self-supervised calculates training loss in , the relative weighting depends on the acquisition protocol . Fig 8 ###reference_### compares our method - weighted so as to not discriminate between different model parameters - with variants designed to overweight single parameters by a factor of . The potential advantages offered by this selective weighting are seen in the estimation , where this approach leads to a small increase in both precision and accuracy. This parameter-specific weighting is not accessible within a Self-supervised framework.\nIn light of the differences arising from inter-parameter loss weighting, for subsequent analysis we use SupervisedMLE, Gaussian as a proxy for Self-supervised; both methods encode the same regularised MLE fitting, but differ in their inter-parameter weighting.\n###figure_8###"
106
+ },
107
+ {
108
+ "section_id": "4.3.2",
109
+ "parent_section_id": "4.3",
110
+ "section_name": "4.3.2 Incorporation of Rician noise modelling",
111
+ "text": "By pre-computing MLE labels using conventional parameter estimation methods, we are able to incorporate accurate Rician noise modelling. Comparison between SupervisedMLE, Rician and SupervisedMLE, Gaussian shows the effect of the choice of noise model; these differences are most pronounced at low SNR (Fig 2 ###reference_###) and high , when the Gaussian approximation of Rician noise is known to break down. In this regime, our method gives less biased, more informative estimates, replicating conventional MLE performance at a fraction of the computational cost. At high , our method has a flatter, more information-rich, bias curve than all other DL methods. This information loss is further visualised in Fig 9 ###reference_###, which shows the compression in estimates over the groundtruth parameter-space . As expected, this compression is most apparent at high values of , when the signal is more likely to approach the Rician noise floor.\n###figure_9###"
112
+ },
113
+ {
114
+ "section_id": "4.3.3",
115
+ "parent_section_id": "4.3",
116
+ "section_name": "4.3.3 Compatibility with complex signal models",
117
+ "text": "An additional advantage of computing training loss in parameter-space is that DNN networks are signal model agnostic: network training does not require explicit calculation of . This approach is advantageous when working with complex signal models, as made clear by comparison with Self-supervised methods. In contrast with our proposed approach, Self-supervised methods embed between network output and training loss (see Fig 1 ###reference_###); this poses two practical limitations.\nThe first relates to efficient implementation of mini-batch loss, which requires a vectorised representation (and calculation) of predicted signals. This may pose a non-trivial challenge in the case of complex signal models. The second limitation relates to how training loss is minimised: network parameters are updated by computing partial derivatives of the training loss. This process requires the loss to be expressed in a differentiable form; embedding in the loss formulation limits Self-supervised methods to signal models that can be expressed in an explicitly differentiable form.\nOur method sidesteps both limitations by not requiring explicit calculation of during training, and is therefore compatible with a wider range of complex qMRI signal models."
118
+ },
119
+ {
120
+ "section_id": "4.3.4",
121
+ "parent_section_id": "4.3",
122
+ "section_name": "4.3.4 Tunable network approach",
123
+ "text": "As discussed above, we show a clear bias/variance trade-off between different parameter estimation methods. The optimal choice of method depends on the task at hand (Epstein et al., 2021 ###reference_7###), and may not lie at either extreme of this trade-off. Therefore, it would be advantageous to be able to combine low-bias and low-variance methods into a single, hybrid approach, with performance tunable by the relative contribution of each constituent method. Our proposed method, which interfaces naturally with , offers exactly that. An example of this approach is shown in Fig 10 ###reference_###: training loss has been weighted equally () between groundtruth and MLE labels, and, as expected, the resulting network performance lies in the middle ground between these two extremes.\n###figure_10###"
124
+ },
125
+ {
126
+ "section_id": "4.3.5",
127
+ "parent_section_id": "4.3",
128
+ "section_name": "4.3.5 Comparison with conventional fitting",
129
+ "text": "Comparison between our proposed method (SupervisedMLE, Rician) and conventional fitting (MLE, Rician) highlights additional advantages offered by our approach. Firstly, Figs 2 ###reference_### and 3 ###reference_### demonstrate qualitatively similar performance between these methods across the entire parameter space. The fact that our method, which offers near-instantaneous parameter estimation, produces similar parameter estimates to well-understood conventional MLE methods justifies its adoption in and of itself. However, our method not only mimics but indeed in many cases outperforms (lower bias, variance, and RMSE) the very same method used to compute those labels. This result not only motivates its use, but also confirms that DL methods are able to exploit information shared between training samples beyond what would be possible by considering each sample in isolation."
130
+ },
131
+ {
132
+ "section_id": "4.4",
133
+ "parent_section_id": "4",
134
+ "section_name": "A note on RMSE",
135
+ "text": "We note that RMSE is a poor summary measure of network performance. RMSE is heavily skewed by outliers, and thus favours methods which give parameter estimates consistently close to mean parameter values. Such estimates, as in the case of , may contain very little information (Fig 4 ###reference_###) despite being associated with low RMSE. Accordingly, we strongly recommend that RMSE be discontinued as a single summary metric for parameter estimation performance: it must always be accompanied by bias, variance, and ideally an analysis of information content.\nRMSE\u2019s limitations as a performance metric during testing may also call into question its suitability as a loss metric during training. This work, much like the rest of the DL qMRI literature, employs a training loss (MSE, described in Sections 2.2 ###reference_### and 2.3 ###reference_###) which is monotonically related to RMSE. Whilst outside the scope of this work, implementing a non-RMSE-derived training loss (such as mean absolute error) may be worth of future investigation."
136
+ },
137
+ {
138
+ "section_id": "4.5",
139
+ "parent_section_id": "4",
140
+ "section_name": "Justification of parameter marginalisation",
141
+ "text": "The above analysis has been largely based on Figs 2 ###reference_### and 3 ###reference_###, which show parameter estimation performance marginalised over 3 dimensions of . This choice, made to aid visualisation, was validated against higher dimensional representations of the same data.\nFig 11 ###reference_### compares SupervisedMLE, Rician and SupervisedGroundtruth performance across the entire qMRI parameter space. It can be seen that trends observed in Fig 2 ###reference_### are replicated here; we draw attention to two such examples. Firstly, Fig 2 ###reference_### suggests SupervisedGroundtruth produces lower standard deviation than SupervisedMLE, Rician; Fig 11 ###reference_### confirms this to be the case across all test data. In contrast, Fig 2 ###reference_### suggests that SupervisedGroundtruth produces higher bias at low and lower bias at high ; Fig 11 ###reference_### confirms a spread of bias differences across the test data: some favouring one method, and others the other. This effect is explored in Fig 12 ###reference_###, which compares estimation performance as a function of and at two specific (non-marginalised) groundtruth values (. As expected from the marginalised representation in Fig 2 ###reference_###, at low SupervisedGroundtruth produces higher bias across the entire - parameter space, whereas at high the opposite is true.\nDespite this, it is important to note the limitations of marginalisation. Fig 12 ###reference_### also shows that the relative performance of SupervisedMLE, Rician and SupervisedGroundtruth varies across all parameter-space dimensions. Consider , where 2 ###reference_### shows similar marginalised RMSE for these methods. In fact, by visualising this difference as a function of and , we reveal two distinct regions: high /low (where SupervisedMLE, Rician produces lower RMSE), and elsewhere (where it produces higher RMSE). This highlights (i) the potential pitfalls of producing summary results by marginalising across entire parameter spaces and (ii) the need to choose parameter-estimation methods appropriate for the specific parameter combinations relevant to the tissues being investigated (Epstein et al., 2021 ###reference_7###).\n###figure_11### ###figure_12###"
142
+ },
143
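As an illustrative aside (not from the paper), the marginalisation pitfall this section warns about can be reproduced in a few lines: a bias map whose sign varies across one parameter dimension can marginalise to exactly zero, hiding two regions of genuinely opposite behaviour.

```python
import numpy as np

# Hypothetical bias-difference map between two estimators over a 2-D
# parameter grid (rows: parameter 1, columns: parameter 2). The sign of
# the difference flips across the second dimension.
p1 = np.linspace(0.0, 1.0, 50)
p2 = np.linspace(0.0, 1.0, 60)
bias_diff = np.outer(p1 + 0.5, np.sin(2.0 * np.pi * p2))

# Marginalising over p2 cancels the sign structure almost entirely ...
marginal = bias_diff.mean(axis=1)
print(f"max |marginalised bias difference|: {np.abs(marginal).max():.2e}")

# ... whereas the full (non-marginalised) map reveals two regions of
# opposite behaviour, as in Fig 12.
print(f"mean |non-marginalised bias difference|: {np.abs(bias_diff).mean():.3f}")
print(f"fraction of grid where one method is better: {(bias_diff < 0).mean():.2f}")
```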
+ {
+ "section_id": "4.6",
+ "parent_section_id": "4",
+ "section_name": "Non-voxelwise approaches",
+ "text": "This work has focused on voxelwise DL parameter estimation methods: networks which map one signal curve to its corresponding parameter estimate. There are, however, alternatives: convolutional neural network methods which map spatially related clusters (\u201cpatches\u201d) of qMRI signals to corresponding clusters of parameter estimates (Fang et al., 2017; Ulas et al., 2019; Li et al., 2022). Our MLE training label approach could be incorporated into such methods, and we leave it to future work to investigate the effect this would have on parameter estimation performance."
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Conclusions",
+ "text": "In this work we draw inspiration from state-of-the-art supervised and self-supervised qMRI parameter estimation methods to propose a novel DNN approach which combines their respective strengths. In keeping with previous work, we demonstrate the presence of a bias/variance trade-off between existing methods; supervised training produces low variance under noise, whereas self-supervised training leads to low bias with respect to groundtruth.\nThe increased bias of supervised DNNs is counter-intuitive: when labels are available, these methods have access to more information, and should therefore outperform non-labelled alternatives. In light of this, we infer that the high bias associated with these supervised methods stems from the nature of the additional information they receive: groundtruth training labels. By substituting these labels with independently-computed maximum likelihood estimates, we show that the low-bias performance previously limited to self-supervised approaches can be achieved within a supervised learning framework.\nThis framework forms the basis of a novel low-bias supervised learning approach to qMRI parameter estimation: training on conventionally-derived maximum likelihood parameter estimates. This method offers four clear advantages over competing non-supervised low-bias DNN approaches: (i) flexibility in choosing inter-parameter loss weighting, which enables network performance to be boosted for qMRI parameters of interest; (ii) incorporation of Rician noise modelling, which improves parameter estimation at low SNR; (iii) separation between signal model and training loss, which enables the estimation of non-differentiable qMRI signal models; and, crucially, (iv) the ability to interface with existing supervised low-variance approaches, to produce a tunable hybrid parameter estimation method.\nThis final point underpins the key contribution of this work: unifying low-bias and low-variance parameter estimation under a single supervised learning umbrella. When faced with a parameter estimation problem, we no longer need to choose between extremes of the bias/variance trade-off; we can now tune DNN parameter estimation performance to the specific needs of the task at hand. This sets the stage for future work, where this tuning constant is optimised as part of a computational, task-driven, experimental design framework (Epstein et al., 2021).\nAcknowledgments\nSCE is supported by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1). TJPB is supported by an NIHR Clinical Lectureship (CL-2019-18-001) and, together with MHC, is supported by the National Institute for Health Research (NIHR) Biomedical Research Centre (BRC). This work was undertaken at UCLH/UCL, which receives funding from the UK Department of Health\u2019s NIHR BRC funding scheme.\nEthical Standards\nThe work follows appropriate ethical standards in conducting research and writing the manuscript, following all applicable laws and regulations regarding treatment of animals or human subjects.\nConflicts of Interest\nThe authors confirm they have no conflict of interest to disclose.\nData availability\nData and code are available at https://github.com/seancepstein/training_labels."
+ }
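The "tunable hybrid" highlighted in the conclusions suggests a simple loss formulation. The sketch below is an assumption based on the description of Fig 10 (an equally-weighted sum of the SupervisedMLE, Rician and SupervisedGT losses with alpha = 0.5), not the authors' actual code; in particular, which label source alpha weights is a choice made here for illustration.

```python
import numpy as np

def hybrid_loss(y_pred, y_mle, y_gt, alpha=0.5):
    """Convex combination of the two supervised label sources.

    alpha = 1.0 -> pure MLE-label training (low bias);
    alpha = 0.0 -> pure groundtruth-label training (low variance);
    alpha = 0.5 -> the equally-weighted hybrid described for Fig 10.
    (Assigning alpha to the MLE term is an assumption of this sketch.)
    """
    mse_mle = np.mean((y_pred - y_mle) ** 2)  # SupervisedMLE-style term
    mse_gt = np.mean((y_pred - y_gt) ** 2)    # SupervisedGT-style term
    return alpha * mse_mle + (1.0 - alpha) * mse_gt

# Toy usage with hypothetical per-voxel parameter estimates:
rng = np.random.default_rng(1)
y_gt = rng.uniform(0.5, 3.0, size=100)
y_mle = y_gt + rng.normal(0.0, 0.2, size=100)   # stand-in for MLE labels
y_pred = y_gt + rng.normal(0.0, 0.1, size=100)  # stand-in for network output
print(f"hybrid loss: {hybrid_loss(y_pred, y_mle, y_gt, alpha=0.5):.4f}")
```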
+ ],
+ "appendix": [],
+ "tables": {
+ "1": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Summary of evaluated parameter estimation networks. denotes parameter space; denotes signal space.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.8.5.1\">\n<th class=\"ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.8.5.1.1\" style=\"width:100.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S3.T1.8.5.1.1.1\">Network name</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.8.5.1.2\" style=\"width:70.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S3.T1.8.5.1.2.1\">Loss space</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.8.5.1.3\" style=\"width:60.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S3.T1.8.5.1.3.1\">Label</span></th>\n<th class=\"ltx_td ltx_align_justify ltx_align_middle ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S3.T1.8.5.1.4\" style=\"width:100.0pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"S3.T1.8.5.1.4.1\">Label noise model</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.5.1\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.2\" style=\"width:100.0pt;\"><span class=\"ltx_text ltx_font_italic ltx_align_top\" id=\"S3.T1.5.1.2.1\">Supervised<sub class=\"ltx_sub\" id=\"S3.T1.5.1.2.1.1\">GT</sub></span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.1\" style=\"width:70.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.5.1.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.3\" style=\"width:60.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.5.1.3.1\">Groundtruth</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.5.1.4\" style=\"width:100.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.5.1.4.1\">N/A</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.6.2\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.2\" style=\"width:100.0pt;\"><span class=\"ltx_text ltx_font_italic ltx_align_top\" id=\"S3.T1.6.2.2.1\">Self-supervised</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.1\" style=\"width:70.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.6.2.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.3\" style=\"width:60.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.6.2.3.1\">N/A</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.6.2.4\" style=\"width:100.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.6.2.4.1\">N/A</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.7.3\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.7.3.2\" style=\"width:100.0pt;\"><span class=\"ltx_text ltx_font_italic ltx_align_top\" 
id=\"S3.T1.7.3.2.1\">Supervised<sub class=\"ltx_sub\" id=\"S3.T1.7.3.2.1.1\">MLE, Rician</sub></span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.7.3.1\" style=\"width:70.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.7.3.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.7.3.3\" style=\"width:60.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.7.3.3.1\">MLE</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_r ltx_border_t\" id=\"S3.T1.7.3.4\" style=\"width:100.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.7.3.4.1\">Rician</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.8.4\">\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S3.T1.8.4.2\" style=\"width:100.0pt;\"><span class=\"ltx_text ltx_font_italic ltx_align_top\" id=\"S3.T1.8.4.2.1\">Supervised<sub class=\"ltx_sub\" id=\"S3.T1.8.4.2.1.1\">MLE, Gaussian</sub></span></td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.8.4.1\" style=\"width:70.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.8.4.1.1.1\"></p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.8.4.3\" style=\"width:60.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.8.4.3.1\">MLE</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_align_middle ltx_border_b ltx_border_r ltx_border_t\" id=\"S3.T1.8.4.4\" style=\"width:100.0pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"S3.T1.8.4.4.1\">Gaussian</p>\n</td>\n</tr>\n</tbody>\n</table>\n</figure>",
+ "capture": "Table 1: Summary of evaluated parameter estimation networks. denotes parameter space; denotes signal space."
+ }
+ },
+ "image_paths": {
164
+ "1": {
165
+ "figure_path": "2205.05587v3_figure_1.png",
166
+ "caption": "Figure 1: Comparison between our proposed method (SupervisedMLE) and existing supervised and self-supervised approaches.",
167
+ "url": "http://arxiv.org/html/2205.05587v3/x1.png"
168
+ },
169
+ "2": {
170
+ "figure_path": "2205.05587v3_figure_2.png",
171
+ "caption": "Figure 2: Parameter estimation performance at low SNR (15) as a function of groundtruth parameter Y\ud835\udc4cYitalic_Y. Performance summarised by bias & RMSE with respect to groundtruth and standard deviation with respect to noise repetition. Conventional MLE fitting is provided as a non-DNN reference standard. For the sake of visualisation, each plotted point represents marginalisation over all non-specified Y\ud835\udc4c{Y}italic_Y dimensions.",
172
+ "url": "http://arxiv.org/html/2205.05587v3/x2.png"
173
+ },
174
+ "3": {
175
+ "figure_path": "2205.05587v3_figure_3.png",
176
+ "caption": "Figure 3: Parameter estimation performance, visualised as in Figure 2, but for high SNR (30) data.",
177
+ "url": "http://arxiv.org/html/2205.05587v3/x3.png"
178
+ },
179
+ "4": {
180
+ "figure_path": "2205.05587v3_figure_4.png",
181
+ "caption": "Figure 4: Comparison between SupervisedGT and reference conventional MLE fitting, expressed in terms of estimation bias and information compression at low SNR (15). Arrows represent the mean mapping from Y\ud835\udc4cYitalic_Y to Y^^\ud835\udc4c\\hat{Y}over^ start_ARG italic_Y end_ARG, averaged over noise, as a function of parameter space Y\ud835\udc4cYitalic_Y. For the sake of visualisation, each plotted point represents marginalisation over all non-specified Y\ud835\udc4c{Y}italic_Y dimensions.",
182
+ "url": "http://arxiv.org/html/2205.05587v3/x4.png"
183
+ },
184
+ "5": {
185
+ "figure_path": "2205.05587v3_figure_5.png",
186
+ "caption": "Figure 5: In vivo parameter estimation performance of networks trained on low SNR (15) synthetic data, as a function of supersampling-derived reference parameter values. The first three rows summarise performance by showing bias & RMSE with respect to reference value and standard deviation with respect to noise repetition, marginalised over all non-specified Y\ud835\udc4c{Y}italic_Y dimensions. The bottom row shows the distribution of reference parameter values across the parameter range being visualised.",
187
+ "url": "http://arxiv.org/html/2205.05587v3/x5.png"
188
+ },
189
+ "6": {
190
+ "figure_path": "2205.05587v3_figure_6.png",
191
+ "caption": "Figure 6: Parameter estimation performance of networks trained on low SNR (15) synthetic data, tested on a synthetic dataset matching the distribution of in vivo reference parameter values. The first three rows summarise performance by showing bias & RMSE with respect to groundtruth value and standard deviation with respect to noise repetition, marginalised over all non-specified Y\ud835\udc4c{Y}italic_Y dimensions. The bottom row shows the distribution of groundtruth parameter values across the parameter range, which matches the in vivo dataset by construction.",
192
+ "url": "http://arxiv.org/html/2205.05587v3/x6.png"
193
+ },
194
+ "7": {
195
+ "figure_path": "2205.05587v3_figure_7.png",
196
+ "caption": "Figure 7: Parameter estimation performance of networks on real-world test data, visualised as spatial maps. Groundtruth maps are taken as the maximum likelihood parameter estimates associated with the complete 160 b-value dataset, whereas network predictions are obtained from a single 10 b-value subsample.",
197
+ "url": "http://arxiv.org/html/2205.05587v3/x7.png"
198
+ },
199
+ "8": {
200
+ "figure_path": "2205.05587v3_figure_8.png",
201
+ "caption": "Figure 8: Comparison between SupervisedMLE, Rician, as described above, and variants which differ in their inter-parameter loss weighting W\ud835\udc4aWitalic_W, at low SNR (15). Each column compares SupervisedMLE, Rician to a different network variant, uniquely trained to overweight the single relevant signal model parameter. For the sake of visualisation, each plotted point represents marginalisation over all non-specified Y\ud835\udc4c{Y}italic_Y dimensions.",
202
+ "url": "http://arxiv.org/html/2205.05587v3/x8.png"
203
+ },
204
+ "9": {
205
+ "figure_path": "2205.05587v3_figure_9.png",
206
+ "caption": "Figure 9: Comparison of the information content captured by SupervisedMLE methods, as a function of the noise model used in computing MLE labels, at low SNR (15). Arrows represent the mean mapping from Y\ud835\udc4cYitalic_Y to Y^^\ud835\udc4c\\hat{Y}over^ start_ARG italic_Y end_ARG, averaged over noise, as a function of parameter space Y\ud835\udc4cYitalic_Y. For the sake of visualisation, each plotted point represents marginalisation over all non-specified Y\ud835\udc4cYitalic_Y dimensions.",
207
+ "url": "http://arxiv.org/html/2205.05587v3/x9.png"
208
+ },
209
+ "10": {
210
+ "figure_path": "2205.05587v3_figure_10.png",
211
+ "caption": "Figure 10: Proof of concept of a hybrid parameter estimation method, formed by training a supervised network with an equally-weighted sum of SupervisedMLE, Rician and SupervisedGT loss functions (\u03b1=0.5\ud835\udefc0.5\\alpha=0.5italic_\u03b1 = 0.5), at low SNR (15). For the sake of visualisation, each plotted point represents marginalisation over all non-specified Y\ud835\udc4cYitalic_Y dimensions.",
212
+ "url": "http://arxiv.org/html/2205.05587v3/x10.png"
213
+ },
214
+ "11": {
215
+ "figure_path": "2205.05587v3_figure_11.png",
216
+ "caption": "Figure 11: Non-marginalised comparison of parameter estimation performance between SupervisedMLE, Rician and SupervisedGroundtruth at low SNR (15). Colour intensity represents density of distribution across all X\ud835\udc4bXitalic_X and all noise repetitions.",
217
+ "url": "http://arxiv.org/html/2205.05587v3/x11.png"
218
+ },
219
+ "12": {
220
+ "figure_path": "2205.05587v3_figure_12.png",
221
+ "caption": "Figure 12: Differences in performance (bias, standard deviation, RMSE) between SupervisedMLE, Rician and SupervisedGroundtruth for two groundtruth values of Ds\u2062l\u2062o\u2062wsubscript\ud835\udc37\ud835\udc60\ud835\udc59\ud835\udc5c\ud835\udc64D_{slow}italic_D start_POSTSUBSCRIPT italic_s italic_l italic_o italic_w end_POSTSUBSCRIPT at low SNR (15). The outermost columns (left and right) correspond to Ds\u2062l\u2062o\u2062w=0.69subscript\ud835\udc37\ud835\udc60\ud835\udc59\ud835\udc5c\ud835\udc640.69D_{slow}=0.69italic_D start_POSTSUBSCRIPT italic_s italic_l italic_o italic_w end_POSTSUBSCRIPT = 0.69 and Ds\u2062l\u2062o\u2062w=2.71subscript\ud835\udc37\ud835\udc60\ud835\udc59\ud835\udc5c\ud835\udc642.71D_{slow}=2.71italic_D start_POSTSUBSCRIPT italic_s italic_l italic_o italic_w end_POSTSUBSCRIPT = 2.71 respectively, and show mean performance under noise repetition, without marginalisation. The central column reproduces the corresponding marginalised representation from Fig 2.",
222
+ "url": "http://arxiv.org/html/2205.05587v3/x12.png"
223
+ }
224
+ },
225
+ "validation": true,
226
+ "references": [
227
+ {
+ "1": {
+ "title": "Extracting diffusion tensor fractional anisotropy and mean diffusivity from 3-direction DWI scans using deep learning.",
+ "author": "E. Aliotta, H. Nourzadeh, and S. H. Patel.",
+ "venue": "Magnetic Resonance in Medicine, 85(2):845\u2013854, 2021.",
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "Deep learning how to fit an intravoxel incoherent motion model to diffusion-weighted MRI.",
+ "author": "S. Barbieri, O. J. Gurney-Champion, R. Klaassen, and H. C. Thoeny.",
+ "venue": "Magnetic Resonance in Medicine, 83(1):312\u2013321, 2020.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Diffusion parameter mapping with the combined intravoxel incoherent motion and kurtosis model using artificial neural networks at 3 T.",
+ "author": "Marco Bertleff, Sebastian Domsch, Sebastian Weing\u00e4rtner, Jascha Zapp, Kieran O\u2019Brien, Markus Barth, and Lothar R. Schad.",
+ "venue": "NMR in Biomedicine, 30(12), dec 2017.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Fast curve fitting using neural networks.",
+ "author": "C. M. Bishop and C. M. Roach.",
+ "venue": "Review of Scientific Instruments, 63(10):4450\u20134456, oct 1992.",
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "Quantitative MRI of the Brain.",
+ "author": "Mara Cercignani, Nicholas G. Dowell, and Paul S. Tofts.",
+ "venue": "CRC Press, jan 2018.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).",
+ "author": "Djork Arn\u00e9 Clevert, Thomas Unterthiner, and Sepp Hochreiter.",
+ "venue": "4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, nov 2015.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "Task-driven assessment of experimental designs in diffusion MRI: A computational framework.",
+ "author": "Sean C. Epstein, Timothy J.P. Bray, Margaret A. Hall-Craggs, and Hui Zhang.",
+ "venue": "PLOS ONE, 16(10):e0258442, oct 2021.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Quantification of relaxation times in MR Fingerprinting using deep learning.",
+ "author": "Zhenghan Fang, Yong Chen, Weili Lin, and Dinggang Shen.",
+ "venue": "Proceedings of the International Society for Magnetic Resonance in Medicine \u2026 Scientific Meeting and Exhibition. International Society for Magnetic Resonance in Medicine. Scientific Meeting and Exhibition, 25, apr 2017.",
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.",
+ "author": "Vladimir Golkov, Alexey Dosovitskiy, Jonathan I. Sperl, Marion I. Menzel, Michael Czisch, Philipp S\u00e4mann, Thomas Brox, and Daniel Cremers.",
+ "venue": "IEEE transactions on medical imaging, 35(5):1344\u20131351, may 2016.",
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Deep Learning Model Fitting for Diffusion-Relaxometry: A Comparative Study.",
+ "author": "F. Grussu, M. Battiston, M. Palombo, T. Schneider, C. A. M. G. Wheeler-Kingshott, and D. C. Alexander.",
+ "venue": "Mathematics and Visualization, pages 159\u2013172, 2021.",
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "The Rician distribution of noisy MRI data.",
+ "author": "H\u00e1kon Gudbjartsson and Samuel Patz.",
+ "venue": "Magnetic Resonance in Medicine, 34(6):910\u2013914, 1995.",
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "Training data distribution significantly impacts the estimation of tissue microstructure with machine learning.",
+ "author": "N. G. Gyori, M. Palombo, C. A. Clark, H. Zhang, and D. C. Alexander.",
+ "venue": "Magnetic Resonance in Medicine, 87(2):932\u2013947, 2022.",
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Probabilistic Machine Learning: An Introduction.",
+ "author": "K. P. Murphy.",
+ "venue": "MIT Press, 2022. Available from probml.ai.",
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "MR imaging of intravoxel incoherent motions: application to diffusion and perfusion in neurologic disorders.",
+ "author": "D. Le Bihan, E. Breton, D. Lallemand, P. Grenier, E. Cabanis, and M. Laval-Jeantet.",
+ "venue": "Radiology, 161(2):401\u2013407, 1986.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "A simultaneous multi\u2010slice T 2 mapping framework based on overlapping\u2010echo detachment planar imaging and deep learning reconstruction.",
+ "author": "Simin Li, Jian Wu, Lingceng Ma, Shuhui Cai, and Congbo Cai.",
+ "venue": "Magnetic Resonance in Medicine, 87(5):2239\u20132253, may 2022.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Myelin water imaging data analysis in less than one minute.",
+ "author": "Hanwen Liu, Qing San Xiang, Roger Tam, Adam V. Dvorak, Alex L. MacKay, Shannon H. Kolind, Anthony Traboulsee, Irene M. Vavasour, David K.B. Li, John K. Kramer, and Cornelia Laule.",
+ "venue": "NeuroImage, 210:116551, apr 2020.",
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "A supervised deep neural network approach with standardized targets for enhanced accuracy of IVIM parameter estimation from multi-SNR images.",
+ "author": "A. Mastropietro, D. Procissi, E. Scalco, G. Rizzo, and N. Bertolino.",
+ "venue": "NMR in Biomedicine, 35(10), 2022.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "Improved unsupervised physics-informed deep learning for intravoxel incoherent motion modeling and evaluation in pancreatic cancer patients.",
+ "author": "M. P. T. Kaandorp, S. Barbieri, R. Klaassen, H. W. M. van Laarhoven, H. Crezee, P. T. While, et al.",
+ "venue": "Magnetic Resonance in Medicine, 86(4):2250\u20132265, 2021.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "SANDI: A compartment-based model for non-invasive apparent soma and neurite imaging by diffusion MRI.",
+ "author": "M. Palombo, A. Ianus, M. Guerreri, D. Nunes, D. C. Alexander, N. Shemesh, et al.",
+ "venue": "NeuroImage, 215:116835, 2020.",
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Input layer regularization for magnetic resonance relaxometry biexponential parameter estimation.",
+ "author": "M. Rozowski, J. Palumbo, J. Bisen, C. Bi, M. Bouhrara, W. Czaja, et al.",
+ "venue": "Magnetic Resonance in Chemistry, 2022.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Convolutional Neural Networks for Direct Inference of Pharmacokinetic Parameters: Application to Stroke Dynamic Contrast-Enhanced MRI.",
+ "author": "Cagdas Ulas, Dhritiman Das, Michael J. Thrippleton, Maria del C. Vald\u00e9s Hern\u00e1ndez, Paul A. Armitage, Stephen D. Makin, Joanna M. Wardlaw, and Bjoern H. Menze.",
+ "venue": "Frontiers in Neurology, 9(JAN):1147, jan 2019.",
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Quantitative susceptibility mapping using deep neural network: QSMnet.",
+ "author": "J. Yoon, E. Gong, I. Chatnuntawech, B. Bilgic, J. Lee, W. Jung, et al.",
+ "venue": "NeuroImage, 179:199\u2013206, 2018.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Model-informed machine learning for multi-component T2 relaxometry.",
+ "author": "T. Yu, E. J. Canales-Rodr\u00edguez, M. Pizzolato, G. F. Piredda, T. Hilbert, E. Fischi-Gomez, et al.",
+ "venue": "Medical Image Analysis, 69:10194, 2021.",
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "Detection of Active Sacroiliitis with Ankylosing Spondylitis through Intravoxel Incoherent Motion Diffusion-Weighted MR Imaging.",
+ "author": "Ying-hua Zhao, Shao-lin Li, Zai-yi Liu, Xin Chen, Xiang-cheng Zhao, Shao-yong Hu, Zhen-hua Liu, Ying-jie Mei, Queenie Chan, and Chang-hong Liang.",
+ "venue": "European Radiology, 25(9):2754\u20132763, sep 2015.",
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2205.05587v3"
+ }
20240123/2205.13743v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2206.02059v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2206.14359v5.json ADDED
@@ -0,0 +1,247 @@
+ {
+ "title": "TE2Rules: Explaining Tree Ensembles using Rules",
+ "abstract": "Tree Ensemble (TE) models, such as Gradient Boosted Trees, often achieve optimal performance on tabular datasets, yet their lack of transparency poses challenges for comprehending their decision logic. This paper introduces TE2Rules (Tree Ensemble to Rules), a novel approach for explaining binary classification tree ensemble models through a list of rules, particularly focusing on explaining the minority class. Many state-of-the-art explainers struggle with minority class explanations, making TE2Rules valuable in such cases. The rules generated by TE2Rules closely approximate the original model, ensuring high fidelity, providing an accurate and interpretable means to understand decision-making. Experimental results demonstrate that TE2Rules scales effectively to tree ensembles with hundreds of trees, achieving higher fidelity within runtimes comparable to baselines. TE2Rules allows for a trade-off between runtime and fidelity, enhancing its practical applicability. The implementation is available here: https://github.com/linkedin/TE2Rules.",
+ "sections": [
+ {
+ "section_id": "1",
+ "parent_section_id": null,
+ "section_name": "Introduction",
+ "text": "In recent years, many decision support systems have been constructed as black box models using machine learning such as Tree Ensembles (TE) and Deep Neural Networks. Lack of understanding of the internal logic of decision systems constitutes both a practical and an ethical issue, especially for critical tasks that directly affect people\u2019s lives, such as health care, credit approval, and criminal justice. In such use cases, there is a possibility of making wrong decisions, learned from spurious correlations in the training data. The cost of making wrong decisions in these domains is very high. Hence, having some explanations (like what part of the input the model is focusing on, or under what conditions satisfied by the input the model behaves similarly) is important for building trust in these decision systems. Moreover, recent legal regulations like the General Data Protection Regulation (GDPR, 2018) enable all individuals to obtain \u201cmeaningful explanations of the logic involved\u201d when automated decision making takes place.\nIn this work, we focus our attention on explaining tree ensemble (TE) models, which are popular in many use cases involving tabular data (Grinsztajn, Oyallon, and Varoquaux 2022; Qin et al. 2021). Our focus is exclusively on binary classification models, as they are prevalent in many critical decision-making systems such as disease diagnosis and spam detection. In the realm of binary classification, providing explanations for one class can be more crucial than the other. Consider healthcare or fraud detection, where explaining why a model identifies a data point as positive (e.g., detecting a tumor or predicting a scammer) is vital. Interestingly, the positive class often represents the minority class in the training/test data.\nA popular method to explain any model is to learn an interpretable surrogate model that closely approximates the original model. The accuracy of the surrogate model with respect to the original model is called fidelity. A good explainer needs to have high fidelity on test data. Additionally, for effective explanation of the minority class, the explainer must demonstrate good fidelity specifically on the part of the test data where the model predicts the minority class. Therefore, the explainer needs to maintain high fidelity overall and also high fidelity on the minority class predictions. However, many state-of-the-art rule-based explainers excel in overall fidelity but struggle to explain the minority class accurately. This is problematic, especially when the minority class corresponds to the positive class. This limitation hinders their utility in explaining predictions for the important positive class.\nIn this work, we introduce a novel algorithm, TE2Rules (Tree Ensemble to Rules), designed to mine rules only for the (minority) positive class. Each individual rule takes the form \u201cif a small conjunction of feature conditions holds, then model prediction = 1\u201d, where the label 1 signifies the positive class. The rules mined by TE2Rules are short, with only a few features, and each rule has a high precision, i.e., of all the data points that satisfy the rule, a high fraction (default of 95%) receive a positive class prediction from the model. Besides high precision, each rule has a decent coverage on positives. We post-process these rules by selecting a small number of rules that cover most of the positives in the data. This small collection of rules can explain the model at a global level with high overall fidelity as well as high fidelity on positive class predictions. TE2Rules can mine these rules in a runtime that is comparable to other state-of-the-art model explainers, making the algorithm scale to tree ensembles with hundreds of trees.\nTE2Rules achieves its capabilities by leveraging the Apriori Algorithm. The Apriori algorithm is a data mining algorithm that identifies items that are frequently found together in a collection of itemsets. In the context of TE2Rules, positive class predictions are explained by identifying sets of internal tree nodes that are commonly active with positive class predictions, but not with negative class predictions. These sets of tree nodes are then converted into rules using the necessary conditions that an input data point must satisfy to traverse that particular combination of tree nodes within the tree ensemble.\nIn this work, we show that 1) many existing state-of-the-art rule-based explainers for tree ensemble (TE) models have poor fidelity on positives (the minority class) though they may have good overall fidelity. 2) To solve this problem, we propose a novel method, TE2Rules (Tree Ensemble to Rules), that can generate rules corresponding to a single class of interest (say, the positive class) by merging decision paths from multiple trees in the tree ensemble. Since all the rules are mined for a single class, there is no conflict among the labels predicted by different rules. 3) We show that the resulting rules have high overall fidelity and high fidelity on positives (the class of interest) even if the positives happen to be a minority class in the dataset. TE2Rules can achieve such high performance with a comparable number of rules relative to existing baselines, at the cost of slightly higher runtime. 4) By stopping the algorithm in its early stages, we can trade off fidelity for runtime."
+ },
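The precision and coverage criteria defined in the introduction are straightforward to compute. The sketch below is illustrative only: the feature names mirror the compas rules shown later in the paper, but the comparison operators in the mask are assumptions (the operators in the printed rules were lost in extraction), and this is not the TE2Rules API.

```python
import numpy as np
import pandas as pd

def score_rule(rule_mask: np.ndarray, y_model: np.ndarray):
    """Precision and positive-class coverage of one candidate rule.

    rule_mask: boolean array, True where a row satisfies the rule.
    y_model:   the tree ensemble's predicted labels (1 = positive class).
    """
    covered = rule_mask.sum()
    # Precision: of the rows satisfying the rule, the fraction the model
    # labels positive (TE2Rules keeps rules with precision >= 0.95 by default).
    precision = (y_model[rule_mask] == 1).mean() if covered else 0.0
    # Coverage: the fraction of all positive model predictions explained.
    coverage = rule_mask[y_model == 1].mean() if (y_model == 1).any() else 0.0
    return precision, coverage

# Hypothetical data resembling the compas features used in Table 1.
df = pd.DataFrame({"priors_count": [0, 3, 7, 1, 9, 2],
                   "age": [45, 30, 25, 50, 22, 33]})
y_model = np.array([0, 1, 1, 0, 1, 0])  # model predictions, not true labels
mask = (df["priors_count"] > 2.5) & (df["age"] < 36.5)  # assumed operators
print(score_rule(mask.to_numpy(), y_model))  # -> (1.0, 1.0) on this toy data
```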
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Related Work",
+ "text": "In the past, several methods have been proposed to explain a tree ensemble model using rules or decision trees that can be better understood by a human. Some of these approaches work at a global level by approximating the tree ensemble with a set of rules or a decision tree. Some other approaches work at a local (instance) level by finding a rule or a decision tree that best explains the decisions of the tree ensemble for data points sampled from the neighborhood of that instance.\nRule-based explainers: inTrees (Deng 2019) generates rules from decisions made by individual trees in the tree ensemble and selects high precision rules among them. Another closely related method, ruleFit (Friedman and Popescu 2008), runs a sparse linear regression on rules generated from individual trees to select the most important rules. However, a sparse ensemble of rules is not as interpretable as a list of if-then rules. Both inTrees and ruleFit generate rules from nodes of individual trees and do not consider node combinations from multiple trees. Hence, their search space of rules is limited to rules from individual trees, resulting in low fidelity. deFragTrees (Hara and Hayashi 2018) identifies fragmented regions in the input space defined by the splits made by the tree ensemble and tries to simplify them into a short set of rules that are almost equivalent to the tree ensemble using bayesian inference. deFragTrees works on simplifying rules obtained from all possible node combinations from multiple trees in the ensemble and can achieve higher fidelity than methods like inTrees. However, most rule-based explainers are not targeted to explain any one single class. They often end up mining a lot of rules for the majority class and miss out on the minority class. In such cases, they are not very effective in explaining the minority class prediction.\nTree-based explainers: BATrees (Vidal, Pacheco, and Schiffer 2020) uses the tree ensemble to generate more labeled data points that can be used to fit a decision tree on the data. However, trying to obtain a single decision tree to represent a tree ensemble can result in a very deep tree, making it harder to interpret. Node Harvest (Meinshausen 2010) selects a few nodes from the shallow parts of the trees in the ensemble and creates an ensemble of shallow trees. The simplified model is easier to interpret than the original model. But, it is still not as easy to interpret as a decision tree or rule list.\nLocal instance-level explainers: Some explanation methods are model-agnostic and can be applied to models beyond tree ensembles. Anchors (Ribeiro, Singh, and Guestrin 2018) finds high precision, if-then rules satisfied by the instance, using multi-armed bandit and beam search algorithms. LoRE (Local Rule-based Explanations) (Guidotti et al. 2018) constructs interpretable models (decision trees) based on local samples. For each input data point, rules are generated that are locally accurate. A local search is conducted for every incoming data point to be explained. Hence, these methods can be prohibitively expensive if the objective is to explain a very large set of data points for a single model.\nInterpretable models: Instead of explaining existing machine learning models, an alternative approach is to directly learn interpretable models from data. Some methods, like Falling Rule List (Wang and Rudin 2015), Interpretable Decision Sets (Lakkaraju, Bach, and Leskovec 2016), and Bayesian Rule List (Yang, Rudin, and Seltzer 2017; Wang et al. 2017), learn rule lists directly from data using association rule mining algorithms such as Apriori (Agrawal, Srikant et al. 1994) or its variants like FP-Growth (Borgelt 2005).\nThe Apriori Algorithm is a data mining algorithm designed to identify subsets of items that frequently co-occur in a collection of itemsets. Apriori is particularly valuable in analyzing e-commerce transactions. For instance, in a database of customer transactions where each transaction includes a set of items bought together (like milk, bread, jam, eggs, bread, jam, etc.), Apriori can identify frequently co-occurring subsets of items (such as bread, jam).\nSome other approaches like SkopeRules (Gautier, Jafre, and Ndiaye 2020) learn rule lists by first fitting a tree ensemble model on the data and then extracting rules from it. Unlike the methods that explain a trained tree ensemble model, SkopeRules learns a tree ensemble model internally and doesn\u2019t explain an existing trained tree ensemble model.\nIn this work, our focus is on explainers that take a tree ensemble model and some data as input, producing rules that explain model prediction, especially for the minority (positive) class. The explainer operates globally, producing different possible rules under which the model gives positive predictions. For this purpose, we utilize the Apriori Algorithm to identify frequently co-occurring decisions made within the nodes of the tree ensemble that result in positive class predictions. While the Apriori Algorithm has been employed in recent times to learn interpretable models directly from data, our work represents the first application of association rule mining for explaining a trained tree ensemble model. We choose inTrees and deFragTrees as our baselines since they are also global explainers that generate a rule list from the tree ensemble model and a slice of training data."
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Method",
+ "text": "Apriori Algorithm: Before describing TE2Rules, it is important to understand the Apriori Algorithm. Apriori is a data mining algorithm designed to identify subsets of items that frequently appear together in a collection of itemsets. It uses two user-defined parameters: num_stages and min_support. Apriori looks for subsets containing up to num_stages items, where the items within each subset occur together more than min_support times in the overall collection of itemsets. For example, Table 4 shows a dataset with a collection of 7 itemsets. The itemset {bread, jam} appears as a subset in 4 different itemsets out of 7 in the dataset. Apriori tries to find such frequently occurring subsets. The Apriori algorithm runs in stages. In the k-th stage, it tries to find subsets of size k that occur more than min_support times. Here\u2019s a brief description of how Apriori works.\nIn stage 1, Apriori examines individual items, or 1-item sets, that have a frequency of more than min_support in the collection. Such itemsets are considered to be frequent enough in stage 1.\nIn stage k (with k > 1), it identifies itemsets of size k that occur more than min_support times in the dataset. To generate candidate itemsets for the current stage k, the Apriori algorithm looks at pairs of itemsets of size k-1 from the previous stage. Combining an arbitrary such pair could result in an itemset of any size up to 2(k-1). However, Apriori only considers pairs of itemsets that have k-2 items in common. Combining such a pair results in an itemset with k items.\nThe algorithm leverages the \u201cAnti-monotone property,\u201d which states that if an itemset is frequent, then all of its subsets must also be frequent. For each candidate itemset generated above, Apriori checks if all its subsets of size k-1 have been found to be frequent in the previous stage.\nFor a candidate itemset that has passed the test of the \u201cAnti-monotone property,\u201d Apriori counts its number of occurrences in the dataset and keeps only those candidate itemsets that occur more than min_support times in the dataset. These itemsets are considered to be frequent enough in stage k.\nExample: Let\u2019s apply Apriori to discover subsets of items that appear more than min_support = 1 times in Table 4 by running Apriori till num_stages = 3 stages.\nStage 1: In stage 1, Apriori considers individual items, or sets of 1 item, that have a frequency exceeding min_support in the collection. The counts of these items in the collection are presented in Table 4. Among these items, {jam}, {bread}, {milk}, {butter} are identified as frequent since they occur more than min_support = 1 times in the collection.\nStage 2: In stage 2, Apriori combines pairs of itemsets found to be frequent in stage 1 to form itemsets of 2 items. This ensures that itemsets containing {eggs} are excluded from exploration in stage 2, since the itemset {eggs} was not frequent enough in stage 1. The counts of the 2-item itemsets formed in this manner are presented in Table 4. Among these itemsets, {jam, bread}, {jam, milk}, {bread, milk}, {jam, butter} occur more than min_support = 1 times in the collection.\nStage 3: In stage 3, Apriori combines pairs of itemsets that were identified as frequent in stage 2. If a pair of itemsets shares one item in common, Apriori merges them to create a new itemset with three items. For instance, it combines {jam, bread} and {jam, milk} to produce {jam, bread, milk}, and similarly for other combinations.\nAfter generating these 3-item itemsets, Apriori verifies that all 2-item subsets derived from each itemset were found to be frequent enough in stage 2. For example, the itemset {jam, bread, butter} is formed from {jam, bread} and {jam, butter}. But the itemset {jam, bread, butter} also contains the subset {bread, butter}, which was not frequent in stage 2. Thus, Apriori removes this 3-item itemset. Following this approach, only one itemset remains: {jam, bread, milk}, with all its subsets already identified as frequent in stage 2. This itemset also meets the frequency threshold by appearing more than min_support = 1 times in the collection (as illustrated in Table 4).\nThus, the itemsets {jam}, {bread}, {milk}, {butter}, {jam, bread}, {jam, milk}, {bread, milk}, {jam, butter}, {jam, bread, milk} occur frequently (more than min_support = 1 times) in the collection.\nTE2Rules Algorithm: TE2Rules takes a trained tree ensemble (TE) model with hundreds of decision trees and a slice of training data as input and gives a list of rules for the positive class as output. Consider a tree ensemble containing T trees, each with a maximum depth of d. These trees are made of internal nodes, and each internal node decides whether to move left or right based on a threshold test on a single feature. When we input a data point into this tree ensemble model, it traverses all trees, navigating internal nodes based on feature conditions. This journey involves passing through up to d internal nodes plus one leaf node in each tree. Consequently, the data point traverses on the order of T(d+1) tree nodes across the entire ensemble. It is important to note that any other data point satisfying the same set of node conditions would receive the same model prediction. Figure 5 shows a small tree ensemble model and a slice of training data. The model predicts whether a fruit is edible based on 3 features: color, odor, variety. Here\u2019s a simplified overview of how TE2Rules operates on this tree ensemble.\nStep 1: Pre-Processing: In this step, TE2Rules transforms each data point from the training data slice into an itemset of tree nodes obtained from its journey through the tree ensemble, treating each node as a separate item. For example, in Figure 5, consider the first data point in the training dataset, described by (color=red, odor=sweet, variety=native). This data point is transformed into the itemset of all the nodes it traverses within the tree ensemble. Subsequently, the original training data is depicted as a collection of itemsets, with each data point having a corresponding itemset.\nStep 2: Apriori: In this step, TE2Rules finds itemsets that frequently appear alongside positive predictions among all the itemsets found in step 1. To achieve this, TE2Rules retains only the itemsets associated with positive model predictions and applies the Apriori algorithm to find sets of tree nodes that appear frequently. This step involves running multiple stages of Apriori, and in stage k, it identifies itemsets with k nodes. The Apriori algorithm is run with the user-defined parameters min_support and num_stages.\nStep 3: Itemset-Rule: In this step, TE2Rules converts each itemset found in step 2 into an itemset-rule. An itemset consists of a set of nodes. For a node to be visited by a data point, all decisions along the path from the root to that particular node (i.e. all its ancestors up to the root node) must be satisfied. Each node is represented by the corresponding rule needed to reach it. For example, to reach a node two levels deep in Figure 5, the data point must satisfy a rule such as \u201ccolor=red and odor=sweet\u201d. Similarly, to visit that node\u2019s parent, the data point must satisfy the rule \u201ccolor=red\u201d. The root node is represented by the empty rule, since all data points start their journey from the root node.\nLikewise, each itemset can be expressed as a rule formed by combining (via conjunction) the rules for each of its constituent nodes. For example, an itemset containing three such nodes would be represented by \u201c((color=red) and (color=red and odor=sweet) and (odor=sweet))\u201d = \u201c(color=red and odor=sweet)\u201d. In this way, each itemset found by Apriori is converted into an itemset-rule.\nStep 4: High Precision Rules: In this step, TE2Rules transforms every itemset-rule identified in step 3 into the rule: \u201cIf itemset-rule Then model prediction = positive\u201d and retains them only if their precision exceeds a user-defined threshold of min_precision. The precision of a rule measures its correctness. In this particular context, precision denotes the fraction: count(data points satisfying itemset-rule and model prediction = positive) / count(data points satisfying itemset-rule). By default, TE2Rules uses a min_precision threshold of 0.95. Step 2 identifies itemsets that occur frequently with positives. This step helps in finding itemsets that frequently occur with positives but infrequently with negatives. For example, in Figure 5, an itemset representing the rule \u201ccolor \u2260 red and odor=sweet\u201d occurs only with positives and never with negatives. Hence, TE2Rules would find the rule \u201cIf color \u2260 red and odor=sweet, then model prediction = positive\u201d as a possible rule to explain some of the positives.\nTable 1. Rules mined by TE2Rules from the compas model, with precision and recall: 1. priors_count 2.5 & age 36.5 (precision 0.957, recall 0.560). 2. priors_count 12.5 & age 21.5 & sex_Female 0.5 & days_arrest 17.5 (precision 0.952, recall 0.156). 3. priors_count 5.5 & days_arrest -1.5 (precision 0.947, recall 0.421). 4. priors_count 1.5 & priors_count 15.5 & age 28.5 (precision 0.981, recall 0.382). 5. charge_Felony 0.5 & priors_count 1.5 & age 22.5 & sex_Male 0.5 (precision 0.893, recall 0.139). 6. days_arrest 0.5 (precision 0.929, recall 0.026). 7. priors_count 4.5 & age 51.5 (precision 0.957, recall 0.518). 8. priors_count 12.5 & days_arrest -8.5 (precision 0.974, recall 0.145). 9. priors_count 0.5 & priors_count 12.5 & age 23.5 & days_arrest -4.5 (precision 0.917, recall 0.178). 10. priors_count 2.5 & age 22.5 & race_African_American 0.5 & sex_Female 0.5 (precision 0.984, recall 0.117).\nStep 5: Post-Processing: In this step, TE2Rules selects a small number of rules among all the rules found in step 4 to explain the entire tree ensemble model. This process resembles solving a set cover problem, where each rule, acting as a set, covers some of the positives. A greedy algorithm is employed, starting with an empty rule list and successively selecting rules that cover the highest number of positives not already covered by the list. The chosen rule is added to the list, and the process is repeated until all positive instances are covered.\nIt\u2019s important to note that multiple rules may explain the same data, and the greedy algorithm randomly selects rules in case of ties. Another approach involves having domain experts review all rules generated by TE2Rules, assigning weights based on their alignment with human decision-making. This introduces a weighted set cover problem and addressing it is beyond the scope of this work. In this work, we use the same weight for all rules, and the greedy algorithm is used to address the set cover problem.\nTE2Rules uses three user-defined parameters: min_support, num_stages, and min_precision. TE2Rules uses a default value of 0.95 for min_precision. TE2Rules uses a very conservative default value for min_support. This forces TE2Rules to explore all possible rules with support greater than this threshold to explain the positive model predictions. Users can speed up TE2Rules by setting a higher value for min_support, skipping rules with little support in the training data. However, this may result in a slightly reduced fidelity of the rule list mined by TE2Rules. Unless otherwise specified, TE2Rules utilizes these default parameter values in all our experiments. In the results section, we describe the trade-off between fidelity and runtime with varying num_stages.\nExample: Here is an example of a rule list generated by TE2Rules from a tree ensemble (TE) model. This TE model is an XGBoost model, consisting of 50 trees with a depth of 3, and was trained on the compas dataset. The compas dataset contains outcomes from a commercial algorithm assessing the likelihood of a convicted criminal to reoffend. A positive label indicates a high likelihood of reoffending. The underlying XGBoost model achieved an AUC of 0.765 on training data and 0.724 on test data. TE2Rules was used to explain the predictions of the XGBoost model using 10% of the training data. TE2Rules uses only the model predictions on this slice of training data and does not have access to ground truth labels. TE2Rules was run till stage 3.\nTE2Rules identified a list of 10 rules that collectively achieved a fidelity of 0.973 on the training data slice. These rules successfully explained all positive predictions (minority class) with a fidelity of 1.00 on positive model predictions, while achieving a fidelity of 0.956 on the negative model predictions. The rules generalized well to the test data, achieving a fidelity of 0.949 on overall test data, 0.994 on positive predictions, and 0.915 on negative predictions in test data.\nThe rules found by TE2Rules are shown in Table 1, along with the precision and recall of each rule on training/test data. Precision represents the fraction of data points satisfying the rule that are also identified as positive by the model, while recall is the fraction of positive model predictions covered by the rule. Notably, each rule identified by TE2Rules maintains a precision above the min_precision threshold. It is essential to highlight that while these rules are accurate in explaining the model predictions, they may not align with how humans would have arrived at the same decision. These rules only highlight how the model arrived at the decision. These rules were condensed from the 316 rules found by TE2Rules at the end of Step 4. The rules selected in Step 5 represent just one of the many possible representations of the model explanations. For alternate representations, human input from domain experts would be necessary to select a different set of rules from the 316 rules based on which ones align closely with human decision-making."
+ },
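To make the staged search described above concrete, here is a small self-contained sketch of the Apriori procedure. Since Table 4 is not reproduced in this diff, the seven transactions below are hypothetical, chosen only so that the frequent itemsets match those listed in the text ({jam}, {bread}, {milk}, {butter}, the four frequent pairs, and {jam, bread, milk}); this is not the TE2Rules implementation.

```python
from itertools import combinations

# Hypothetical stand-in for Table 4: 7 itemsets, constructed to reproduce
# the counts described in the text (e.g. {bread, jam} appears in 4 of 7).
transactions = [
    {"milk", "bread", "jam"},
    {"milk", "bread", "jam"},
    {"bread", "jam"},
    {"bread", "jam"},
    {"jam", "butter"},
    {"jam", "butter", "eggs"},
    {"milk", "butter"},
]

def support(itemset):
    """Number of transactions containing the candidate itemset."""
    return sum(itemset <= t for t in transactions)

def apriori(min_support=1, num_stages=3):
    items = {x for t in transactions for x in t}
    # Stage 1: frequent single items (frequency strictly above min_support).
    frequent = [{frozenset([x]) for x in items
                 if support(frozenset([x])) > min_support}]
    for k in range(2, num_stages + 1):
        prev = frequent[-1]
        # Join step: merge pairs of (k-1)-itemsets sharing k-2 items,
        # i.e. pairs whose union has exactly k items.
        candidates = {a | b for a in prev for b in prev if len(a | b) == k}
        # Prune step (anti-monotone property): every (k-1)-subset of a
        # candidate must itself have been frequent in the previous stage.
        candidates = {c for c in candidates
                      if all(frozenset(s) in prev
                             for s in combinations(c, k - 1))}
        frequent.append({c for c in candidates if support(c) > min_support})
    return [s for stage in frequent for s in stage]

for itemset in apriori():
    print(sorted(itemset), "support =", support(itemset))
```

Run as-is, this prints exactly the nine frequent itemsets enumerated in the stage-by-stage example above, including the single surviving 3-item set {bread, jam, milk}; note how the prune step discards {jam, bread, butter} because its subset {bread, butter} is infrequent.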
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Results",
+ "text": "Datasets: We demonstrate the effectiveness of TE2Rules using 3 datasets from domains (like finance and the legal sector) where transparency in the decision making process is crucial: compas, bank, adult (Larson et al. 2016; Dua and Graff 2017). The compas dataset consists of the results from a commercial algorithm used to assess a convicted criminal\u2019s likelihood of reoffending. The bank dataset consists of results from a marketing campaign by a banking institution on whether a client will subscribe to their term deposit. The adult dataset consists of census data on whether a person has an income over $50K. All these datasets contain demographic attributes of participants like age, gender, race, etc.\nBaselines: We compared TE2Rules with two popular baselines: inTrees and deFragTrees.\ninTrees goes through each node in the tree ensemble and extracts rules from each node using the decision path to reach the node from its respective root node. For each rule extracted from a node, it assigns the majority label from the support of the rule. Further, it selects a small set of high precision rules and presents it in a falling rule list format.\ndeFragTrees identifies all possible rules that can be formed out of different node combinations from the tree ensemble. It then simplifies these rules by inducing a probability distribution over the rules and finding the simplest representation of this distribution. In this process, it finds a short falling rule list to represent the tree ensemble.\nThese algorithms operate at different ends of the spectrum. inTrees mines rules from individual nodes and completely discounts the effects of node combinations from multiple trees. deFragTrees mines rules by simplifying rules from all possible node combinations from multiple trees. TE2Rules provides a middle ground of exploring rules in stages of node combinations, controlled by num_stages.\nWe report results of TE2Rules run with stages 1, 2 and 3. deFragTrees requires a parameter to specify the maximum number of rules to mine. We run deFragTrees with 1x, 5x and 10x times the number of rules mined by TE2Rules (with num_stages = 3). Both the baselines (and TE2Rules) take the trained model and a sample of training data (10%) as input to mine rules to explain the model. All explainers are run using the same sampled training dataset, trained model and evaluated using the same test dataset.\nImplementation: TE2Rules and deFragTrees are implemented in python while inTrees is implemented in R. We trained our xgboost models in python using scikit-learn and exported them in a format that can be ingested in R. Our implementation of TE2Rules with instructions to reproduce the results can be found here: https://github.com/linkedin/TE2Rules. All experiments were conducted on a 64-bit Ubuntu OS 20.04, with an Intel Xeon 2.4 GHz CPU and 32 GB RAM.\nModels: We trained gradient boosted tree ensemble (TE) models with 100, 200, 500 trees of depth 3, 5 for binary classification in python scikit-learn. In all our results, we use red, green, blue colors to denote TE models with 100, 200, 500 trees, respectively. We explain these models using inTrees, deFragTrees and TE2Rules. We report the number of extracted rules and the time taken to extract the rules. We evaluate the performance of the rules using fidelity: the accuracy of the rules with respect to the model predictions. We report the fidelity of the rules on the test data (overall fidelity) and on the portion of the test data on which the model predicts the positive class (positive fidelity). In all these datasets, the positive class happens to be the minority class."
+ },
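The two fidelity metrics defined above are simple to compute. The sketch below is illustrative (not from the TE2Rules codebase): it scores a rule-list surrogate against the tree ensemble's predictions, both overall and on the positive predictions only.

```python
import numpy as np

def fidelity(y_model: np.ndarray, y_rules: np.ndarray):
    """Agreement between a tree ensemble and its rule-list surrogate.

    y_model: labels predicted by the tree ensemble on test data.
    y_rules: labels predicted by the rule list (1 if any mined rule fires).
    Returns (overall fidelity, fidelity on positive model predictions).
    """
    overall = (y_model == y_rules).mean()
    pos = y_model == 1
    positive = (y_rules[pos] == 1).mean() if pos.any() else float("nan")
    return overall, positive

# Toy usage with hypothetical predictions:
y_model = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_rules = np.array([0, 1, 1, 0, 0, 0, 1, 1])
print(fidelity(y_model, y_rules))  # -> (0.75, 0.75)
```

A rule list can score well on overall fidelity while missing most positives when positives are rare, which is precisely the gap between the two metrics that the following subsections examine.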
+ {
+ "section_id": "4.1",
+ "parent_section_id": "4",
+ "section_name": "Performance: Fidelity",
+ "text": "Figure 6 displays the fidelity of explainers on xgboost models with varying tree configurations (100, 200, 500 trees with depth 5). Each model is represented by a unique color, and each explainer (deFragTrees, TE2Rules, inTrees) by its own marker shape, with TE2Rules drawn as triangles; comparisons between explainers should be made within the same color. TE2Rules results are reported for stages 1, 2, and 3, each being an independent run involving stages 1-4 followed by post-processing (Step 5). In each plot, three \u201ctriangles\u201d of the same color denote TE2Rules runs until stages 1, 2, and 3 for each xgboost model. Among triangles of the same color, the one with the lowest fidelity on positives corresponds to stage-1. While fidelity on positives generally improves with stages, stage-3 often provides minimal improvement, resulting in overlap with stage-2 triangles, indicating similar numbers of rules and fidelity on positives. Consequently, most plots within Figure 6 typically show only 2 triangles.\nIn the top 3 plots of Figure 6, all explainers achieve very high fidelity on the overall test data. In the bottom 3 plots, baselines exhibit poor fidelity on the portion of the test set with positive model predictions, with deFragTrees outperforming inTrees. TE2Rules achieves higher fidelity than both inTrees and deFragTrees on positive model predictions.\nTE2Rules achieves high fidelity on positive model predictions by combining rules from multiple trees, whereas inTrees only fetches rules from individual trees. DeFragTrees performs better by mining rules from the global model prediction boundaries. Although stage-1 of TE2Rules is closer to inTrees in principle, as it mines rules from individual nodes, TE2Rules outperforms inTrees. This is because the rule list mined by inTrees for the minority class (positives) is insufficient, mostly explaining the majority class (negatives) and resulting in poor fidelity on positives. DeFragTrees faces a similar challenge. Despite mining 10 times as many rules as TE2Rules, deFragTrees struggles to explain the minority class effectively compared to TE2Rules stage-3. Therefore, TE2Rules proves to be more effective in explaining the minority class (positives) compared to inTrees and deFragTrees by mining rules from multiple node combinations within the tree ensemble."
+ },
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Number of Rules",
39
+ "text": "From Figure 6 ###reference_###, we observe that TE2Rules mines more rules than inTrees but fewer rules than deFragTrees to explain xgboost models. Despite deFragTrees extracting a smaller number of rules compared to TE2Rules, it struggles to effectively explain the minority class (positives), exhibiting lower fidelity of positives compared to TE2Rules.\nAmong the various stages of TE2Rules, stage-1 (the triangle with lower fidelity on positives) often generates more rules than stages 2 and 3 (higher fidelity on positives). As illustrated in Figure 6 ###reference_###, among the blue triangles in the bank dataset (bottom row, second plot), the one with higher positive fidelity (stage-2) has a lower number of rules than the one with lower positive fidelity (stage-1).\nThis is because the number of rules in the final output of TE2Rules consists of rules selected at the end of post-processing (Step-5). The number of rules at the end of Step 4 is always higher for TE2Rules run with more stages, as successive stages add more rules on top of each other. However, the post-processing step selects a small subset of rules from this pool. Therefore, the number of rules after post-processing can be lower for TE2Rules run until stage 2 (or 3) compared to the run until stage 1. This is particularly true because later stages may uncover more potent rules capable of explaining a greater number of positives that rules at the end of stage 1 simply cannot. Consequently, fewer such powerful rules are needed to account for all the positives. Thus, running TE2Rules for more stages can often result in a smaller number of rules with higher fidelity on positives."
40
+ },
41
+ {
42
+ "section_id": "4.3",
43
+ "parent_section_id": "4",
44
+ "section_name": "Scalability: Runtime",
45
+ "text": "Figure 7 ###reference_### illustrates the runtime and fidelity (on positives) of explainers for three XGBoost models: 100, 200, and 500 trees with a depth of 5. It also shows the runtime of different runs of TE2Rules run till stages 1, 2 and 3. In each color, triangle with lower fidelity on positives corresponds to the lower stages. In general, two clusters of points corresponding to TE2Rules emerge: one on the left (lower runtime, lower fidelity on positives) and the other on the top right corner (higher runtime, higher fidelity on positives). The first cluster represents TE2Rules stage-1, while the second encompasses stages 2 and 3. Overlapping triangles for stages 2 and 3, as explained in the previous subsection, can occur.\nIn stage-1, TE2Rules achieves higher fidelity on positives compared to inTrees and with a shorter runtime. With stages 2 and 3, TE2Rules achieves even higher fidelity on positives, surpassing that of deFragTrees with runtimes that are comparable or slightly higher than that of deFragTrees. Thus, TE2Rules demonstrates impressive performance in terms of fidelity on positives while maintaining runtime efficiency, making it a robust choice for explaining tree ensemble models."
46
+ },
47
+ {
48
+ "section_id": "4.4",
49
+ "parent_section_id": "4",
50
+ "section_name": "Fidelity-Runtime tradeoff",
51
+ "text": "Figure 8 ###reference_### shows the run time and positive fidelity of TE2Rules with stages for 6 different xgboost models with 100, 200, 500 trees and depth 3, 5. We note that there is marginal improvement in fidelity on positives beyond stage-2. This shows that, most positives in the data can be explained within 2 stages of TE2Rules. So for all our use cases, running TE2Rules for 3 stages was sufficient.\nSimilarly, the runtime of TE2Rules run till stage 3 is not very different from that of TE2Rules run till stage 2. A brute force search across node combinations would have meant exploring exponentially more nodes in every stage. But due to the smart way of generating stage-3 candidates from stage-2 using the Apriori Algorithm, very few (almost no) candidates with non-zero support are generated in stage-3. This reduces the runtime of stage-3 significantly. Since, most of the rules are mined in early stages, TE2Rules can be stopped early (2 to 3 stages) without loosing much fidelity on positives."
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Conclusion",
57
+ "text": "We presented a novel approach, TE2Rules to explain a binary tree ensemble (TE) classifier using rules mined specially for a class of interest. We showed that our explainer is faithful (with high fidelity) to the model on both the overall test data and specifically on the minority class. It achieves such high performance in runtimes that are comparable to the state of the art baselines. Further, we show that stopping the algorithm in early stages can tradeoff runtime without loosing much fidelity on positives."
58
+ }
59
+ ],
60
+ "appendix": [],
61
+ "tables": {
62
+ "1": {
63
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx3.T1\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"Sx3.T1.28\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx3.T1.28.29.1\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.29.1.1\" rowspan=\"2\" style=\"width:8.7pt;\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.29.1.2\" rowspan=\"2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.28.29.1.2.1\">Rule</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"Sx3.T1.28.29.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.28.29.1.3.1\">Precision</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" colspan=\"2\" id=\"Sx3.T1.28.29.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.28.29.1.4.1\">Recall</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.28.30.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.30.2.1\" style=\"width:30.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.28.30.2.1.1\">Train</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.30.2.2\" style=\"width:30.4pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"Sx3.T1.28.30.2.2.1\">Test</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.30.2.3\" style=\"width:30.4pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"Sx3.T1.28.30.2.3.1\">Train</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.30.2.4\" style=\"width:30.4pt;\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"Sx3.T1.28.30.2.4.1\">Test</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.2.2.3\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.2.2.3.1\">1.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.2.2.2\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.2.2.2.2.2\">priors_count 2.5 &amp; age 36.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.2.2.4\" style=\"width:30.4pt;\">0.973</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.2.2.5\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.2.2.5.1\">0.957</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.2.2.6\" style=\"width:30.4pt;\">0.580</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.2.2.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.2.2.7.1\">0.560</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.6.6\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.6.6.5\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.6.6.5.1\">2.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.6.6.4\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.6.6.4.4.4\">priors_count 12.5 &amp; age 21.5 &amp; sex_Female 0.5 &amp; days_arrest 17.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.6.6.6\" 
style=\"width:30.4pt;\">0.967</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.6.6.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.6.6.7.1\">0.952</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.6.6.8\" style=\"width:30.4pt;\">0.155</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.6.6.9\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.6.6.9.1\">0.156</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.8.8\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.8.8.3\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.8.8.3.1\">3.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.8.8.2\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.8.8.2.2.2\">priors_count 5.5 &amp; days_arrest -1.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.8.8.4\" style=\"width:30.4pt;\">0.962</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.8.8.5\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.8.8.5.1\">0.947</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.8.8.6\" style=\"width:30.4pt;\">0.404</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.8.8.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.8.8.7.1\">0.421</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.11.11\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.11.11.4\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.11.11.4.1\">4.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.11.11.3\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.11.11.3.3.3\">priors_count 1.5 &amp; priors_count 15.5 &amp; age 28.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.11.11.5\" style=\"width:30.4pt;\">0.975</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.11.11.6\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.11.11.6.1\">0.981</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.11.11.7\" style=\"width:30.4pt;\">0.409</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.11.11.8\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.11.11.8.1\">0.382</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.15.15\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.15.15.5\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.15.15.5.1\">5.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.15.15.4\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.15.15.4.4.4\">charge_Felony 0.5 &amp; priors_count 1.5 &amp; age 22.5 &amp; sex_Male 0.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.15.15.6\" style=\"width:30.4pt;\">0.952</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.15.15.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.15.15.7.1\">0.893</p>\n</td>\n<td class=\"ltx_td 
ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.15.15.8\" style=\"width:30.4pt;\">0.109</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.15.15.9\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.15.15.9.1\">0.139</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.16.16\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.16.16.2\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.16.16.2.1\">6.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.16.16.1\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.16.16.1.1.1\">days_arrest 0.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.16.16.3\" style=\"width:30.4pt;\">1.000</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.16.16.4\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.16.16.4.1\">0.929</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.16.16.5\" style=\"width:30.4pt;\">0.036</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.16.16.6\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.16.16.6.1\">0.026</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.18.18\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.18.18.3\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.18.18.3.1\">7.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.18.18.2\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.18.18.2.2.2\">priors_count 4.5 &amp; age 51.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.18.18.4\" style=\"width:30.4pt;\">0.969</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.18.18.5\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.18.18.5.1\">0.957</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.18.18.6\" style=\"width:30.4pt;\">0.503</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.18.18.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.18.18.7.1\">0.518</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.20.20\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.20.20.3\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.20.20.3.1\">8.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.20.20.2\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.20.20.2.2.2\">priors_count 12.5 &amp; days_arrest -8.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.20.20.4\" style=\"width:30.4pt;\">1.000</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.20.20.5\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.20.20.5.1\">0.974</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.20.20.6\" style=\"width:30.4pt;\">0.124</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.20.20.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.20.20.7.1\">0.145</p>\n</td>\n</tr>\n<tr 
class=\"ltx_tr\" id=\"Sx3.T1.24.24\">\n<td class=\"ltx_td ltx_align_justify ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.24.24.5\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.24.24.5.1\">9.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.24.24.4\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.24.24.4.4.4\">priors_count 0.5 &amp; priors_count 12.5 &amp; age 23.5 &amp; days_arrest -4.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.24.24.6\" style=\"width:30.4pt;\">0.968</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.24.24.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.24.24.7.1\">0.917</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.24.24.8\" style=\"width:30.4pt;\">0.161</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_r ltx_border_t\" id=\"Sx3.T1.24.24.9\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.24.24.9.1\">0.178</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx3.T1.28.28\">\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.28.5\" style=\"width:8.7pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.28.28.5.1\">10.</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.28.4\" style=\"width:260.2pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.28.28.4.4.4\">priors_count 2.5 &amp; age 22.5 &amp; race_African_American 0.5 &amp; sex_Female 0.5</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.28.6\" style=\"width:30.4pt;\">0.958</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.28.7\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.28.28.7.1\">0.984</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.28.8\" style=\"width:30.4pt;\">0.124</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_b ltx_border_r ltx_border_t\" id=\"Sx3.T1.28.28.9\" style=\"width:30.4pt;\">\n<p class=\"ltx_p ltx_align_top\" id=\"Sx3.T1.28.28.9.1\">0.117</p>\n</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Rules generated by TE2Rules on a XGBoost model with 50 trees, depth 3, trained on compas dataset</figcaption>\n</figure>",
64
+ "capture": "Table 1: Rules generated by TE2Rules on a XGBoost model with 50 trees, depth 3, trained on compas dataset"
65
+ }
66
+ },
67
+ "image_paths": {
68
+ "2": {
69
+ "figure_path": "2206.14359v5_figure_2.png",
70
+ "caption": "Figure 5: An example of a tree ensemble with n = 2 trees each with depth d = 2 and a slice of data used to run TE2Rules. The tree ensemble uses features like color, odor, variety of a fruit to predict if it is edible. The positive class corresponds to edible = 1.",
71
+ "url": "http://arxiv.org/html/2206.14359v5/extracted/5358259/plots/TreeEnsembleIllustration.png"
72
+ },
73
+ "3(a)": {
74
+ "figure_path": "2206.14359v5_figure_3(a).png",
75
+ "caption": "Figure 6: Comparison of fidelity on test dataset (overall and on positives) versus number of rules mined for different explainers: TE2Rules (\u25b3\u25b3\\triangle\u25b3), inTrees (\u2218\\circ\u2218), deFragTrees (+++). All explainers are run on TE models with 100 (red), 200 (green), 500 (blue) trees of depth 5.",
76
+ "url": "http://arxiv.org/html/2206.14359v5/extracted/5358259/plots/fid_tot.png"
77
+ },
78
+ "3(b)": {
79
+ "figure_path": "2206.14359v5_figure_3(b).png",
80
+ "caption": "Figure 6: Comparison of fidelity on test dataset (overall and on positives) versus number of rules mined for different explainers: TE2Rules (\u25b3\u25b3\\triangle\u25b3), inTrees (\u2218\\circ\u2218), deFragTrees (+++). All explainers are run on TE models with 100 (red), 200 (green), 500 (blue) trees of depth 5.",
81
+ "url": "http://arxiv.org/html/2206.14359v5/extracted/5358259/plots/fid_pos.png"
82
+ },
83
+ "4": {
84
+ "figure_path": "2206.14359v5_figure_4.png",
85
+ "caption": "Figure 7: Comparison of fidelity (positives) on test data versus runtime for different explainers: TE2Rules (\u25b3\u25b3\\triangle\u25b3), inTrees (\u2218\\circ\u2218), deFragTrees (+++). All explainers are run on TE models with 100 (red), 200 (green), 500 (blue) trees of depth 5.",
86
+ "url": "http://arxiv.org/html/2206.14359v5/extracted/5358259/plots/time.png"
87
+ },
88
+ "5(a)": {
89
+ "figure_path": "2206.14359v5_figure_5(a).png",
90
+ "caption": "Figure 8: Comparison of runtime and fidelity (positives) of TE2Rules with more number of stages. TE2Rules is run on TE models with 100 (red), 200 (green), 500 (blue) trees of depth 3 (solid line), 5 (dashed line).",
91
+ "url": "http://arxiv.org/html/2206.14359v5/extracted/5358259/plots/stages_time.png"
92
+ },
93
+ "5(b)": {
94
+ "figure_path": "2206.14359v5_figure_5(b).png",
95
+ "caption": "Figure 8: Comparison of runtime and fidelity (positives) of TE2Rules with more number of stages. TE2Rules is run on TE models with 100 (red), 200 (green), 500 (blue) trees of depth 3 (solid line), 5 (dashed line).",
96
+ "url": "http://arxiv.org/html/2206.14359v5/extracted/5358259/plots/stages_fid.png"
97
+ }
98
+ },
99
+ "validation": true,
100
+ "references": [
101
+ {
102
+ "1": {
103
+ "title": "Fast algorithms for mining association rules.",
104
+ "author": "Agrawal, R.; Srikant, R.; et al. 1994.",
105
+ "venue": "In Proc. 20th int. conf. very large data bases, VLDB, volume 1215, 487\u2013499. Citeseer.",
106
+ "url": null
107
+ }
108
+ },
109
+ {
110
+ "2": {
111
+ "title": "An Implementation of the FP-growth Algorithm.",
112
+ "author": "Borgelt, C. 2005.",
113
+ "venue": "In Proceedings of the 1st international workshop on open source data mining: frequent pattern mining implementations, 1\u20135.",
114
+ "url": null
115
+ }
116
+ },
117
+ {
118
+ "3": {
119
+ "title": "Interpreting tree ensembles with intrees.",
120
+ "author": "Deng, H. 2019.",
121
+ "venue": "International Journal of Data Science and Analytics, 7(4): 277\u2013287.",
122
+ "url": null
123
+ }
124
+ },
125
+ {
126
+ "4": {
127
+ "title": "UCI Machine Learning Repository.",
128
+ "author": "Dua, D.; and Graff, C. 2017.",
129
+ "venue": null,
130
+ "url": null
131
+ }
132
+ },
133
+ {
134
+ "5": {
135
+ "title": "Predictive learning via rule ensembles.",
136
+ "author": "Friedman, J. H.; and Popescu, B. E. 2008.",
137
+ "venue": "The Annals of Applied Statistics, 2(3): 916 \u2013 954.",
138
+ "url": null
139
+ }
140
+ },
141
+ {
142
+ "6": {
143
+ "title": "scikit-learn-contrib/skope-rules.",
144
+ "author": "Gautier, R.; Jafre, G.; and Ndiaye, B. 2020.",
145
+ "venue": "https://github.com/scikit-learn-contrib/skope-rules.",
146
+ "url": null
147
+ }
148
+ },
149
+ {
150
+ "7": {
151
+ "title": "Why do tree-based models still outperform deep learning on tabular data?",
152
+ "author": "Grinsztajn, L.; Oyallon, E.; and Varoquaux, G. 2022.",
153
+ "venue": "arXiv:2207.08815.",
154
+ "url": null
155
+ }
156
+ },
157
+ {
158
+ "8": {
159
+ "title": "Local Rule-Based Explanations of Black Box Decision Systems.",
160
+ "author": "Guidotti, R.; Monreale, A.; Ruggieri, S.; Pedreschi, D.; Turini, F.; and Giannotti, F. 2018.",
161
+ "venue": "CoRR, abs/1805.10820.",
162
+ "url": null
163
+ }
164
+ },
165
+ {
166
+ "9": {
167
+ "title": "Making tree ensembles interpretable: A bayesian model selection approach.",
168
+ "author": "Hara, S.; and Hayashi, K. 2018.",
169
+ "venue": "In International conference on artificial intelligence and statistics, 77\u201385. PMLR.",
170
+ "url": null
171
+ }
172
+ },
173
+ {
174
+ "10": {
175
+ "title": "Interpretable decision sets: A joint framework for description and prediction.",
176
+ "author": "Lakkaraju, H.; Bach, S. H.; and Leskovec, J. 2016.",
177
+ "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 1675\u20131684.",
178
+ "url": null
179
+ }
180
+ },
181
+ {
182
+ "11": {
183
+ "title": "How we analyzed the COMPAS recidivism algorithm. ProPublica.",
184
+ "author": "Larson, J.; Mattu, S.; Kirchner, L.; and Angwin, J. 2016.",
185
+ "venue": null,
186
+ "url": null
187
+ }
188
+ },
189
+ {
190
+ "12": {
191
+ "title": "Node harvest.",
192
+ "author": "Meinshausen, N. 2010.",
193
+ "venue": "The Annals of Applied Statistics, 4(4).",
194
+ "url": null
195
+ }
196
+ },
197
+ {
198
+ "13": {
199
+ "title": "Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?",
200
+ "author": "Qin, Z.; Yan, L.; Zhuang, H.; Tay, Y.; Pasumarthi, R. K.; Wang, X.; Bendersky, M.; and Najork, M. 2021.",
201
+ "venue": "In International Conference on Learning Representations.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "14": {
207
+ "title": "Anchors: High-Precision Model-Agnostic Explanations.",
208
+ "author": "Ribeiro, M. T.; Singh, S.; and Guestrin, C. 2018.",
209
+ "venue": "In AAAI.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "15": {
215
+ "title": "Born-Again Tree Ensembles.",
216
+ "author": "Vidal, T.; Pacheco, T.; and Schiffer, M. 2020.",
217
+ "venue": "arXiv:2003.11132.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "16": {
223
+ "title": "Falling rule lists.",
224
+ "author": "Wang, F.; and Rudin, C. 2015.",
225
+ "venue": "In Artificial intelligence and statistics, 1013\u20131022. PMLR.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "17": {
231
+ "title": "A bayesian framework for learning rule sets for interpretable classification.",
232
+ "author": "Wang, T.; Rudin, C.; Doshi-Velez, F.; Liu, Y.; Klampfl, E.; and MacNeille, P. 2017.",
233
+ "venue": "The Journal of Machine Learning Research, 18(1): 2357\u20132393.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "18": {
239
+ "title": "Scalable Bayesian rule lists.",
240
+ "author": "Yang, H.; Rudin, C.; and Seltzer, M. 2017.",
241
+ "venue": "In International conference on machine learning, 3921\u20133930. PMLR.",
242
+ "url": null
243
+ }
244
+ }
245
+ ],
246
+ "url": "http://arxiv.org/html/2206.14359v5"
247
+ }
20240123/2209.07805v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2209.09930v2.json ADDED
@@ -0,0 +1,349 @@
1
+ {
2
+ "title": "Deep Superpixel Generation and Clustering for Weakly Supervised Segmentation of Brain Tumors in MR Images",
3
+ "abstract": "Training machine learning models to segment tumors and other anomalies in medical images is an important step for developing diagnostic tools but generally requires manually annotated ground truth segmentations, which necessitates significant time and resources. This work proposes the use of a superpixel generation model and a superpixel clustering model to enable weakly supervised brain tumor segmentations. The proposed method utilizes binary image-level classification labels, which are readily accessible, to significantly improve the initial region of interest segmentations generated by standard weakly supervised methods without requiring ground truth annotations. We used 2D slices of magnetic resonance brain scans from the Multimodal Brain Tumor Segmentation Challenge 2020 dataset and labels indicating the presence of tumors to train the pipeline. On the test cohort, our method achieved a mean Dice coefficient of 0.691 and a mean 95% Hausdorff distance of 18.1, outperforming existing superpixel-based weakly supervised segmentation methods.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Segmentation is crucial in medical imaging for localizing regions of interest (ROI), such as tumors, which can then assist in the identification of anomalies. Machine learning (ML) can automate the segmentation task with excellent performance, as demonstrated by top-performing models in the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 challenge [Henry et al., 2021 ###reference_1###, Isensee et al., 2021 ###reference_2###, Jia et al., 2021 ###reference_3###]. However, training ML segmentation models demands large datasets of manually annotated medical images which are not only tedious and expensive to acquire, but also may be inaccessible for specific diseases such as rare cancers. Weakly supervised training of segmentation models, which does not require segmentation labels, has great potential to localize anomalies only using image-level classification labels that are much less expensive to acquire than manual pixel-level annotations.\nWork into weakly supervised segmentation where the only available ground truths are image-level classification labels often involves training a classification model that is used to infer tumor segmentations through class activation maps. This approach has been used in a variety of medical imaging problems including the segmentation of organs [Chen et al., 2022 ###reference_4###], pulmonary nodules [Feng et al., 2017 ###reference_5###], and brain lesions [Wu et al., 2019 ###reference_6###].\nAnother weakly supervised segmentation approach is to utilize superpixels. Superpixels are pixels grouped based on various characteristics, including pixel gray levels and proximity. By grouping pixels together, superpixels capture redundancy and reduce the complexity of computer vision tasks making them valuable for image segmentation [Chen et al., 2020 ###reference_7###, Kwak et al., 2017 ###reference_8###, Yi et al., 2022 ###reference_9###]. A notable approach to ML-based superpixel segmentation uses a Fully Convolutional Network (FCN) to generate oversegmented superpixels with less computational complexity [Yang et al., 2020 ###reference_10###].\nWe hypothesize that superpixels can be leveraged to acquire additional contextual information to improve weakly supervised segmentations. We propose to simultaneously train a superpixel generation model and a superpixel clustering model using localization seeds acquired from a classifier trained with the image-level labels. For each pixel, the superpixel generator assigns association scores to each superpixel group, and the clustering model predicts weights for each superpixel based on their overlap with the tumor. Pixels are soft clustered based on their association with highly weighted superpixels to form segmentations. 
The superpixel models combine information from the pixel intensities with information from the localization seeds, yielding segmentations that are consistent with both the classifier understanding from the localization seeds and the pixel intensities of the MR images.\nThe novelty of the work is summarized by the following points:\nSimultaneous deep superpixel generation and clustering enable effective weakly supervised segmentation of brain tumors on MRI datasets.\nLocalization seeds that undersegment the cancerous and non-cancerous regions are effective priors of information and can be generated from binary classifiers trained to identify cancerous images.\nThe use of deep superpixel generation and clustering improves segmentation performance and inference time over other superpixel-based methods."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Materials and methods",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Dataset and Preprocessing",
21
+ "text": "To form our dataset, the 369 T1-weighted, post-contrast T1-weighted, T2-weighted, and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) MRI volumes from the BraTS 2020 dataset [Bakas et al., 2017a ###reference_11###, b ###reference_12###, c ###reference_13###, 2019 ###reference_14###, Menze et al., 2015 ###reference_15###] were stacked together. The stacked MRI volumes were then split into axial slices to form stacked 2-dimensional (2D) images with 4 channels. Only the training set of the BraTS dataset was used because it is the only one with publicly available ground truths.\nThe images were preprocessed by first cropping each image and segmentation map using the smallest bounding box which contained the brain, clipping all non-zero intensity values to their 1 and 99 percentiles to remove outliers, normalizing the cropped images using min-max scaling, and then randomly cropping the images to fixed patches of size along the coronal and sagittal axes, as done by Henry et al. [Henry et al., 2021 ###reference_1###] and Wang et al. [Wang et al., 2019 ###reference_16###] in their work with BraTS datasets. The 369 available patient volumes were then split into 295 (80%), 37 (10%), and 37 (10%) volumes for the training, validation, and test cohorts, respectively. After splitting the volumes into 2D images, the first 30 and last 30 slices of each volume were removed, as done by Han et al. [Han et al., 2019 ###reference_17###] because these slices lack useful information. The training, validation, and test cohorts had 24635, 3095, and 3077 stacked 2D images, respectively. For the training, validation, and test cohorts, respectively; 68.9%, 66.3%, and 72.3% of images were cancerous. The images will be referred to as , where and . Ground truths for each slice were assigned 0 if the segmentations were empty, and otherwise."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Proposed Weakly Supervised Segmentation Method",
27
+ "text": "We first trained a classifier model to identify whether an image contains a tumor, then generated localization seeds from the model using Randomized Input Sampling for Explanation of Black-box Models (RISE) [Petsiuk et al., 2018 ###reference_18###]. The localization seeds use the classifier\u2019s understanding to split each pixel of the images into one of three categories. The first, referred to as positive seeds, indicates regions of the image with high likelihood of containing a tumor. The second, referred to as negative seeds, indicates regions with low likelihood of containing a tumor. The final category, referred to as unseeded regions, corresponds to the remaining areas of the images and indicates regions of low confidence from the classifier. This results in positive seeds that undersegment the tumor, and negative seeds that undersegment the non-cancerous regions. Assuming that the positive and negative seeds are accurate, these seeds simplify the task to classifying the unseeded regions as overlapping with or not overlapping with a tumor. The seeds are used as pseudo-ground truths to simultaneously train both a superpixel generator and a superpixel clustering model which, when used together, can produce the final refined segmentations from the probability heat map of the superpixel-based segmentations. A flowchart of the proposed methodology is presented in Figure 1 ###reference_###. We chose to use 2D images over 3D images because converting 3D MRI volumes to 2D MR images yields significantly more data samples and reduces memory costs. Furthermore, previous work demonstrated that brain tumors can be effectively segmented from 2D images [Noori et al., 2019 ###reference_19###].\n###figure_1###"
28
+ },
29
+ {
30
+ "section_id": "2.2.1",
31
+ "parent_section_id": "2.2",
32
+ "section_name": "2.2.1 Classifier Model",
33
+ "text": "The classifier model uses a VGG-16 architecture [Simonyan and Zisserman, 2014 ###reference_20###] with batch normalization, whose output is passed through a Sigmoid function to generate the probability that each contains a tumor, where is a set of brain MR images. Prior to being input to the classifier model, the images are upsampled by a factor of 2. The images are not upsampled for any other model in the proposed method. This classifier model is trained using as the ground truths, where is a binary label with a value of 1 if contains tumor and 0 otherwise. The remainder of the method only uses images predicted by the classifier to contain tumors, to avoid attempting to segment healthy images. This subset of will be referred to as . The methodology is independent of the VGG-16 architecture, and thus, other classifier architectures can be used instead.\nThe classifier was trained to optimize the binary cross-entropy between the output probabilities and the binary ground truths using Adam optimizer with , and a weight decay of [Kingma and Ba, 2014 ###reference_21###]. The classifier was trained for 100 epochs using a batch size of 32. The learning rate was initially set to and then decreased by a factor of 10 when the validation loss did not decrease by ."
34
+ },
35
+ {
36
+ "section_id": "2.2.2",
37
+ "parent_section_id": "2.2",
38
+ "section_name": "2.2.2 RISE Method",
39
+ "text": "RISE [Petsiuk et al., 2018 ###reference_18###] is used to generate heat maps for each of the images predicted to be cancerous. The heat maps indicate the approximate likelihood for tumors to be present at each pixel. These heat maps were converted to localization seeds by setting the pixels corresponding to the top 20% of values in as positive seeds, and setting the pixels corresponding to the bottom 20% of values as negative seeds. is a binary map indicating positive seeds and is a binary map indicating negative seeds. When using RISE, we set the number of masks for an image to 4000 and use the same masks across all images."
40
+ },
41
+ {
42
+ "section_id": "2.2.3",
43
+ "parent_section_id": "2.2",
44
+ "section_name": "2.2.3 Proposed Superpixel Generation and Clustering Models",
45
+ "text": "The superpixel generation model and the superpixel clustering model are used to output the final segmentations without using the ground truth segmentations. The superpixel generation model assigns soft association scores to each pixel, where is the maximum number of superpixels to generate, which we set to 64. The association maps are represented by , where is the probability that the pixel at is assigned to the superpixel . Soft associations may result in a pixel having similar associations to multiple superpixels. The superpixel clustering model then assigns superpixel scores to each superpixel indicating the likelihood that each superpixel represents a cancerous region. The superpixel scores are represented by where represents the probability that superpixel contains a tumor. The pixels can then be soft clustered into a tumor segmentation by performing a weighted sum along the superpixel association scores using the superpixel scores as weights. The result of the weighted sum is the likelihood that each pixel belongs to a tumor segmentation based on its association with strongly weighted superpixels.\nThe superpixel generator takes input and outputs a corresponding value by passing the direct output of the superpixel generation model through a SoftMax function to rescale the outputs from 0 to 1 along the superpixel associations. The clustering model uses a ResNet-18 architecture [He et al., 2016 ###reference_22###] and receives a concatenation of and as input. The outputs of the clustering model are passed through a SoftMax function to yield superpixel scores . Heatmaps that localize the tumors can be acquired from and by multiplying each of the association maps in by their corresponding scores , and then summing along the channels as shown in Equation 1 ###reference_###. The superpixel generator architecture is based on AINet proposed by Wang et al. [Wang et al., 2021 ###reference_23###], which is a Fully Convolutional Network (FCN)-based superpixel segmentation model that uses a variational autoencoder (VAE). Unlike AINet, which outputs local superpixel associations, we use global associations so that can be passed into the superpixel clustering model. This allows the generator model to be trained in tandem with the clustering model. The training uses two different loss functions. The first loss function, , was proposed by Yang et al. [Yang et al., 2020 ###reference_10###] and minimizes the variation in pixel intensities and pixel positions in each superpixel. This loss is defined in Eq. 2 ###reference_###, where represents a pixel\u2019s coordinates ranging from to , and is a coefficient used to tune the size of the superpixels, which we set as . We selected this value for by multiplying the value suggested by the original work, [Yang et al., 2020 ###reference_10###], by 100 to achieve the desired superpixel size. and are the vectors representing the mean superpixel location and the mean superpixel intensity for superpixel , respectively. The second loss function, , is a loss from the Seed, Expand, and Constrain paradigm for weakly supervised segmentation. This loss is designed to train models to output segmentations that include positive seeded regions and exclude negative seeded regions [Kolesnikov and Lampert, 2016 ###reference_24###]. This loss is defined in Eq. 1 ###reference_###-4 ###reference_### where C indicates whether the positive or negative seeds of an image is being evaluated. 
These losses, when combined together, encourage the models to account for both the localization seeds and the pixel intensities. This results in localizing the unseeded regions that correspond to the pixel intensities in the positive seeds. The combined loss is presented in Eq. 5 ###reference_###, with a weight applied to the seed loss. The output heat map can then be thresholded to generate the final segmentations.\nThe superpixel generation and clustering models were trained using an Adam optimizer with weight decay. The models were trained for 100 epochs using a batch size of 32. The learning rate was halved every 25 epochs. The weight for the seed loss was set to 50."
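A PyTorch sketch of the soft clustering of Equation 1 together with a seeding term in the spirit of the Seed, Expand, and Constrain loss (shapes and names are ours; the exact formulation follows Eqs. 1-5 of the paper):

```python
import torch

def soft_cluster(assoc, scores):
    """Eq. (1): tumor heat map as an association-weighted sum of superpixel
    scores. assoc: (B, K, H, W), softmaxed over the K superpixels;
    scores: (B, K), softmaxed superpixel tumor probabilities."""
    return (assoc * scores[:, :, None, None]).sum(dim=1)  # (B, H, W)

def seed_loss(heatmap, pos_seed, neg_seed, eps=1e-8):
    """Seeding term sketch: push the heat map toward 1 on positive seeds
    and toward 0 on negative seeds (both are binary (B, H, W) masks)."""
    l_pos = -(torch.log(heatmap + eps) * pos_seed).sum() / (pos_seed.sum() + eps)
    l_neg = -(torch.log(1 - heatmap + eps) * neg_seed).sum() / (neg_seed.sum() + eps)
    return l_pos + l_neg
```

Because the gradients of both terms flow through the association maps, the superpixel generator is pushed toward boundaries that agree with the seeds as well as the pixel intensities.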
46
+ },
47
+ {
48
+ "section_id": "3",
49
+ "parent_section_id": null,
50
+ "section_name": "Results",
51
+ "text": "We trained our models using images and binary image-level labels without using any segmentation ground truths. The classifier achieved training, validation, and test accuracies of , , and , respectively, using a decision threshold of . Table 1 ###reference_### presents the per-image mean Dice coefficients (Dice) and the 95% Hausdorff distance (HD95) between the output segmentations for our proposed method and the ground truth segmentations.\n###table_1### We also present the performance of baseline methods for comparison. The first baseline method is the proposed method using a seed loss weight of rather than a seed loss weight of . This is to determine the impact of the seed loss weight on the segmentation performance. The second baseline method is the performance of the AINet architecture used by the superpixel generator model with the superpixel components removed and altered to directly output segmentations. This method, referred to as ablation, serves as an ablation study that investigates the impact of the superpixel component of the proposed method. The third baseline method is our proposed method with the VGG-16 classifier replaced by a PatchConvNet classifier [Touvron et al., 2021 ###reference_25###]. PatchConvNet is a more recent classifier that is designed to generate accurate attention maps, which we used as the seeds in place of the RISE generated seeds for this baseline method.\nIn addition, we also compared our proposed method to two other methods designed for weakly supervised segmentation. The first is the Superpixel Pooling Network (SPN), proposed by Kwak et al., which uses pre-generated superpixels to perform weakly supervised segmentation [Kwak et al., 2017 ###reference_8###]. This method relies on pre-generated superpixels, which we generated using Felzenszwalb\u2019s Algorithm using a scale of 100 and a standard deviation of 0.8. We chose these hyperparameters as they set the number of output superpixels to approximately 100, thereby decreasing training time. The second is a Multiple Instance Learning (MIL) method proposed by Lerousseau et al. [Lerousseau et al., 2020 ###reference_26###]. MIL involves training a learning model using instances arranged in sets, patches of an image in this case, and then aggregating the predictions of the instances to output a prediction for the whole set. To train the MIL baseline, we used a VGG-16 model with batch normalization. At each epoch, we extracted 50 patches of shape from the images after upsampling them to . At each iteration, we set the 20% of patches with the highest predictions to be cancerous and 20% of patches with the lowest predictions to be non-cancerous, as these thresholds were demonstrated to be effective in the original work and are consistent with the thresholds we used when generating seeds using RISE.\nThe SPN and MIL methods differ from the other baseline methods in that they are not variants of the proposed method, and thus do not assign empty segmentations to images classified as non-cancerous. To allow for effective comparison, we present the performance of these two baseline methods with and without using a classifier to assign empty segmentations. The results in Table 1 ###reference_### for SPN and MIL using the classifier are noted by the term (classifier). 
For these results, we used the VGG-16 classifier trained for our proposed method.\nWe assess the generalizability of the proposed method by evaluating each trained model on the BraTS 2023 dataset [Baid et al., 2021 ###reference_27###, Menze et al., 2015 ###reference_15###, Bakas et al., 2017c ###reference_13###, a ###reference_11###, b ###reference_12###]. To do so, we removed data that also appeared in BraTS 2020, preprocessed the images as detailed in the methodology, and then extracted the cross-section with the largest tumor area from each patient. This resulted in 886 images from the BraTS 2023 dataset. The performance of each model on these images can be found under the BraTS 2023 columns in Table 1 ###reference_###.\nEach of the presented methods uses a decision threshold to convert the output probability maps to binary segmentations. The decision threshold for each method was determined by evaluating the Dice on the validation cohort at threshold intervals of 0.1 and choosing the threshold that yielded the maximum validation Dice. The proposed methods used thresholds of 0.6 and 0.9 for seed loss weights of 50 and 10, respectively, while the ablation and PatchConvNet methods used thresholds of 0.5 and 0.9, respectively. Both SPN models used a threshold of 0.9, while the MIL models used a threshold of 0.3 when using the classifier and a threshold of 0.9 when not using the classifier.\nFigure 2 ###reference_### presents three images from the test set and their corresponding segmentations generated at each step of the pipeline, as well as the ground truth segmentations.\n###figure_2### When evaluating inference time, the proposed method averaged 5.85 milliseconds per image, while the SPN method averaged 28.5 milliseconds and the MIL method averaged 1.77 milliseconds per patch."
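A sketch of the threshold selection and Dice computation described above (names are ours; masks are assumed to be binary numpy arrays):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks."""
    inter = (pred & gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def best_threshold(heatmaps, gts, step=0.1):
    """Pick the decision threshold maximizing mean validation Dice,
    scanned at `step` intervals as described above."""
    thresholds = np.arange(step, 1.0, step)
    means = [np.mean([dice(h > t, g) for h, g in zip(heatmaps, gts)])
             for t in thresholds]
    return thresholds[int(np.argmax(means))]
```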
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "Discussion",
57
+ "text": ""
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "Key Findings",
63
+ "text": "When comparing the performance of the proposed method to the SPN and MIL baseline methods, the proposed method and the ablation method outperformed SPN and MIL in both Dice and HD95. The improved performance indicates that the SPN and MIL methods, while being effective in tasks with large training datasets, can worsen in tasks with limited available data such as brain tumor segmentation. MIL is frequently used for weakly supervised segmentation of histopathology images because of the need to interpret the large gigapixel resolution images in patches. We believe the significantly reduced spatial dimensions and resolutions of the MR images negatively impacted the performance of the MIL baseline. The MR images lacked the resolution required to extract patches with sufficient information that only occupied a small portion of its source image. As such, the MIL baseline was unable to effectively learn to segment the tumors.\nPatchConvNet also suffered from the smaller dataset size. The PatchConvNet classifier was not able to generate effective undersegmented positive and negative seeds to guide the training of the superpixel generator and clustering models. This can be attributed to the smaller dataset size, which PatchConvNet was not designed for, and the use of attention-based maps for the seeds. With the smaller dataset size, PatchConvNet was unable to acquire an effective understanding of the tumors. As a result, the attention maps acquired from PatchConvNet did not consistently undersegment the cancerous and non-cancerous regions, which is a critical assumption when using the seeds. Using a VGG-16 classifier and generating the seeds using RISE resulted in localization seeds that tend to undersegment the cancerous and non-cancerous regions despite the limited available data. In contexts with more available training data, PatchConvNet could be used to generate effective seeds but PatchConvNet seems to struggle in tasks with small dataset sizes, which are very common in medical contexts.\nSuperpixels generated from algorithmic methods such as\nSimple Linear Iterative Clustering or Felzenszwalb segmentation have previously been used for weakly supervised segmentation, often in conjunction with CAMs, [Chen et al., 2020 ###reference_7###, Kwak et al., 2017 ###reference_8###, Yi et al., 2022 ###reference_9###]. The SPN is one such approach. Unlike the proposed method, SPN uses pre-generated superpixels and produces the CAMs by outputting weights for each superpixel and grouping the the highest weighted superpixels together.\nThe use of simultaneously generated superpixels is a key novelty of our work. When using traditional superpixel generation algorithms, the precision of the segmentations is dependent on the number of superpixels, as fewer superpixels can result in less refined boundaries. However, such superpixel generation algorithms lack a means of setting a consistent number of superpixels across all images. Accounting for the varying number of superpixels leads to significantly increased computational complexity. This is demonstrated by how the inference time of our proposed method was 79.47% faster than the inference time of the SPN.\nUsing a consistent number of superpixels across images for pre-generated superpixels can raise concerns regarding the quality of the segmentation boundaries. 
Training a deep learning model to generate superpixels simultaneously with a superpixel clustering model allows the gradients of the loss functions encouraging accurate segmentations to propagate through the superpixel generation model as well. This helps the superpixel generation model not just learn to generate superpixels, but to generate superpixels with refined boundaries around the tumors. Thus, simultaneous generation and clustering of superpixels using neural networks improves both the inference time and the segmentation performance when using superpixels for segmentation.\nFigure 2 ###reference_### demonstrates how the proposed method can reduce the number of outputted superpixels despite using a predefined number of superpixels. In our test cohort, the models reduced the number of superpixels from a predefined limit of 64 to approximately 22 per image by outputting 64 superpixels but leaving the majority of them with no associated pixels.\nIn Figure 2 ###reference_###, the superpixels do not perfectly contour the segmented regions because the segmentations are calculated using a weighted sum of the superpixel scores based on their association with each pixel. Thus, pixels whose most associated superpixel is not primarily part of the segmented region can still be segmented, so long as they have a sufficiently high association score with the primarily segmented superpixel. As such, the segmentations cannot be generated simply by selecting superpixels outputted by the method; they need to be soft clustered using the superpixel associations and weights. Despite the lower number of superpixels when using higher seed loss weights, the method is still able to segment smaller tumors. It can also be seen that superpixels outside of the tumor regions do not align with brain subregions or local patterns. This indicates that the superpixels are tuned specifically to segment brain tumors. While Figure 2 ###reference_### implies that approximately one superpixel is required for each image, we argue that the clustering component has the benefit of allowing this method to be applied to tasks with multiple localized anomalies in each image."
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "Limitations",
69
+ "text": "A limitation of this method is its reliance on superpixels which are computed based on pixel intensity. While the superpixels provide valuable information that improve segmentations of brain tumors, the superpixels also provide constraints on the set of problems this method can be applied to. In particular, this method would be ineffective for segmenting non-focal ROIs.\nIn addition, the proposed method relies on the localization seeds to be trained effectively. Despite not requiring the localization seeds during inference, poor localization seeds during training can propagate errors leading to poor segmentations during inference. The performance of the PatchConvNet baseline demonstrates the importance of seed accuracy. PatchConvNet was unable to output effective localization seeds and using the seeds from PatchConvNet with our proposed method decreased the Dice coefficient from 0.691 to 0.134 on the test cohort. As such, effective localization seeds from an accurate classifier that undersegment the cancerous and non-cancerous regions are crucial for effective performance using the proposed method.\nAnother limitation is that this method cannot be trained end-to-end. While the method is a weakly supervised approach as it does not require any segmentation ground truths to train, it can also be interpreted as a fully supervised classification task followed by an unsupervised superpixel generation and clustering task. Without having seeds generated from an accurate classifier to guide the downstream models, crucial information that informs the segmentation task is lost. Many clinical contexts have classifiers already available that can be applied to this method. However, the proposed method cannot be applied to contexts without readily available classifiers that require end-to-end training.\nA shortcoming of this study is its use of 2D images rather than 3D images due to the GPU memory costs required to generate 3D superpixels using an FCN-based superpixel generation model. The method is not limited to 2D images and thus it is of interest to explore applications of this method in 3D contexts. Previous studies have demonstrated that 3D segmentation leads to superior performance compared to 2D segmentation, which suggests that this method could be improved further when applied to 3D images [Avesta et al., 2023 ###reference_28###].\nAs is the case with other weakly supervised segmentation methods, the performance of our proposed method does not match the performance of fully supervised methods. 3D Fully supervised methods have achieved Dice coefficients ranging from 0.88 to 0.92 on the test cohort of the BraTS 2020 dataset [Isensee et al., 2021 ###reference_2###, Jia et al., 2020 ###reference_29###, Wang et al., 2020 ###reference_30###, Yuan, 2020 ###reference_31###]. However, until weakly supervised segmentations match the performance of fully supervised approaches, weakly supervised segmentation methods serve a different purpose than fully supervised segmentation methods. Weakly supervised segmentations are very effective at generating initial segmentations that can be revised by radiologists or for downstream semi-supervised training to reduce workload on medical datasets that lack manual annotations. In summary, despite the lower performance of our proposed method compared to fully supervised methods, our proposed method is effective for generating initial segmentations when manually annotated training data is not available."
70
+ },
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "Conclusion",
75
+ "text": "We introduced a weakly supervised superpixel-based approach to segmentation that incorporates contextual information through simultaneous superpixel generation and clustering. Integrating superpixels with localization seeds provides information on the boundaries of the tumors, allowing for the segmentation of tumors only using image-level labels. We demonstrated that generating superpixels using a deep learning model during training is not only faster but also yields improved segmentations compared to using superpixels generated from traditional approaches. This work can be used to improve the development of future weakly supervised segmentation methods through the integration of superpixels."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Dice coefficients and 95% Hausdorff distances between generated segmentations and true segmentations.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S3.T1.2\">\n<tr class=\"ltx_tr\" id=\"S3.T1.2.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S3.T1.2.3.1\" rowspan=\"2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.3.1.1\"><span class=\"ltx_text\" id=\"S3.T1.2.3.1.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.3.1.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.3.1.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.3.1.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.3.1.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Method</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.3.1.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S3.T1.2.3.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Dice</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"4\" id=\"S3.T1.2.3.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">HD95</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.1.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.1.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.1.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.1.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.1.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.1.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Training</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.1.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.2.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.2.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.2.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.2.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.2.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.2.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Validation</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.2.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.3.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.3.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.3.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.3.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.3.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.3.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Test</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.3.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.4.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.4.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.4.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" 
id=\"S3.T1.2.4.4.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.4.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.4.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">BraTS</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.4.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.4.1.2.1.2.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">2023</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.4.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.5.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.5.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.5.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.5.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.5.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.5.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Training</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.5.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.6.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.6.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.6.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.6.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.6.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.6.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Validation</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.6.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.7.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.7.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.7.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.7.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.7.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.7.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Test</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.7.1.3\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.4.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text\" id=\"S3.T1.2.4.8.1\"><span class=\"ltx_text\" id=\"S3.T1.2.4.8.1.1\"></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.8.1.2\">\n<span class=\"ltx_tabular ltx_align_middle\" id=\"S3.T1.2.4.8.1.2.1\">\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.8.1.2.1.1\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.8.1.2.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">BraTS</span></span>\n<span class=\"ltx_tr\" id=\"S3.T1.2.4.8.1.2.1.2\">\n<span class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.4.8.1.2.1.2.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">2023</span></span>\n</span></span> <span class=\"ltx_text\" id=\"S3.T1.2.4.8.1.3\"></span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.1.1.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Proposed ( = 50)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S3.T1.1.1.2.1\">0.733</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.3.1\">0.715</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.4.1\">0.691</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.5.1\">0.745</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.6.1\">16.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.7.1\">13.4</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.8.1\">18.1</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.1.1.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.9.1\">20.8</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.2.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Proposed ( = 10)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.609</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.608</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.594</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.574</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">20.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">17.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">24.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.2.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">34.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.5.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">Ablation</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.710</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.697</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.671</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.697</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">18.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">13.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">18.6</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.5.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">22.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.6.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">PatchConvNet</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.185</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.159</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.134</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.001</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">53.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">49.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">54.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.6.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">87.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.2.7.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">SPN</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.125</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.117</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.139</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.262</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">57.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">53.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">58.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.2.7.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">74.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.8.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">SPN (classifier)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.423</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.394</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.375</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.260</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">55.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">48.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">53.9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.8.9\" 
style=\"padding-left:2.3pt;padding-right:2.3pt;\">73.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S3.T1.2.9.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">MIL</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.190</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.209</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.199</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.108</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">25.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">24.2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">25.5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.2.9.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">49.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.2.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.2.10.1\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">MIL (classifier)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.2\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.426</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.3\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.403</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.4\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.391</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.5\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">0.126</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.6\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">47.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.7\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">40.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.8\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">47.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.2.10.9\" style=\"padding-left:2.3pt;padding-right:2.3pt;\">53.4</td>\n</tr>\n</table>\n</figure>",
82
+ "capture": "Table 1: Dice coefficients and 95% Hausdorff distances between generated segmentations and true segmentations."
83
+ }
84
+ },
85
+ "image_paths": {
86
+ "1": {
87
+ "figure_path": "2209.09930v2_figure_1.png",
88
+ "caption": "Figure 1: Flowchart of proposed weakly supervised segmentation method. For the localization seeds component; green indicates positive seeds, magenta indicates negative seeds, black indicates unseeded regions. Solid lines represent use as inputs and outputs. Dashed lines represent use in loss functions.",
89
+ "url": "http://arxiv.org/html/2209.09930v2/extracted/5362808/fig1.jpg"
90
+ },
91
+ "2": {
92
+ "figure_path": "2209.09930v2_figure_2.png",
93
+ "caption": "Figure 2: Visualization of T2-FLAIR channel of MR images, generated superpixels, output segmentations, and true segmentations for three examples.",
94
+ "url": "http://arxiv.org/html/2209.09930v2/extracted/5362808/fig2.jpg"
95
+ }
96
+ },
97
+ "validation": true,
98
+ "references": [
99
+ {
100
+ "1": {
101
+ "title": "Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-Net neural networks: A BraTS 2020 challenge solution.",
102
+ "author": "Th\u00e9ophraste Henry, Alexandre Carr\u00e9, Marvin Lerousseau, Th\u00e9o Estienne, Charlotte Robert, Nikos Paragios, and Eric Deutsch.",
103
+ "venue": "In Alessandro Crimi and Spyridon Bakas, editors, Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pages 327\u2013339, Cham, 2021. Springer International Publishing.",
104
+ "url": null
105
+ }
106
+ },
107
+ {
108
+ "2": {
109
+ "title": "nnU-Net for brain tumor segmentation.",
110
+ "author": "Fabian Isensee, Paul F. J\u00e4ger, Peter M. Full, Philipp Vollmuth, and Klaus H. Maier-Hein.",
111
+ "venue": "In Alessandro Crimi and Spyridon Bakas, editors, Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pages 118\u2013132, Cham, 2021. Springer International Publishing.",
112
+ "url": null
113
+ }
114
+ },
115
+ {
116
+ "3": {
117
+ "title": "HNF-Net for brain tumor segmentation using multimodal mr imaging: 2nd place solution to BraTS challenge 2020 segmentation task.",
118
+ "author": "Haozhe Jia, Weidong Cai, Heng Huang, and Yong Xia.",
119
+ "venue": "In Alessandro Crimi and Spyridon Bakas, editors, Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pages 58\u201368, Cham, 2021. Springer International Publishing.",
120
+ "url": null
121
+ }
122
+ },
123
+ {
124
+ "4": {
125
+ "title": "C-CAM: Causal cam for weakly supervised semantic segmentation on medical image.",
126
+ "author": "Zhang Chen, Zhiqiang Tian, Jihua Zhu, Ce Li, and Shaoyi Du.",
127
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11676\u201311685, June 2022.",
128
+ "url": null
129
+ }
130
+ },
131
+ {
132
+ "5": {
133
+ "title": "Discriminative localization in cnns for weakly-supervised segmentation of pulmonary nodules.",
134
+ "author": "Xinyang Feng, Jie Yang, Andrew F. Laine, and Elsa D. Angelini.",
135
+ "venue": "In Maxime Descoteaux, Lena Maier-Hein, Alfred Franz, Pierre Jannin, D. Louis Collins, and Simon Duchesne, editors, Medical Image Computing and Computer Assisted Intervention - MICCAI 2017, pages 568\u2013576, Cham, 2017. Springer International Publishing.",
136
+ "url": null
137
+ }
138
+ },
139
+ {
140
+ "6": {
141
+ "title": "Weakly supervised brain lesion segmentation via attentional representation learning.",
142
+ "author": "Kai Wu, Bowen Du, Man Luo, Hongkai Wen, Yiran Shen, and Jianfeng Feng.",
143
+ "venue": "In Dinggang Shen, Tianming Liu, Terry M. Peters, Lawrence H. Staib, Caroline Essert, Sean Zhou, Pew-Thian Yap, and Ali Khan, editors, Medical Image Computing and Computer Assisted Intervention \u2013 MICCAI 2019, pages 211\u2013219, Cham, 2019. Springer International Publishing.",
144
+ "url": null
145
+ }
146
+ },
147
+ {
148
+ "7": {
149
+ "title": "SPMF-Net: Weakly supervised building segmentation by combining superpixel pooling and multi-scale feature fusion.",
150
+ "author": "Jie Chen, Fen He, Yi Zhang, Geng Sun, and Min Deng.",
151
+ "venue": "Remote Sensing, 12(6), 2020.",
152
+ "url": null
153
+ }
154
+ },
155
+ {
156
+ "8": {
157
+ "title": "Weakly supervised semantic segmentation using superpixel pooling network.",
158
+ "author": "Suha Kwak, Seunghoon Hong, and Bohyung Han.",
159
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), Feb 2017.",
160
+ "url": null
161
+ }
162
+ },
163
+ {
164
+ "9": {
165
+ "title": "Weakly-supervised semantic segmentation with superpixel guided local and global consistency.",
166
+ "author": "Sheng Yi, Huimin Ma, Xiang Wang, Tianyu Hu, Xi Li, and Yu Wang.",
167
+ "venue": "Pattern Recognition, 124:108504, 2022.",
168
+ "url": null
169
+ }
170
+ },
171
+ {
172
+ "10": {
173
+ "title": "Superpixel segmentation with fully convolutional networks.",
174
+ "author": "Fengting Yang, Qian Sun, Hailin Jin, and Zihan Zhou.",
175
+ "venue": "2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13961\u201313970, 2020.",
176
+ "url": null
177
+ }
178
+ },
179
+ {
180
+ "11": {
181
+ "title": "Segmentation Labels for the Pre-operative Scans of the TCGA-GBM collection, 2017a.",
182
+ "author": "Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin Kirby, John Freymann, Keyvan Farahani, and Christos Davatzikos.",
183
+ "venue": "URL https://wiki.cancerimagingarchive.net/x/KoZyAQ.",
184
+ "url": null
185
+ }
186
+ },
187
+ {
188
+ "12": {
189
+ "title": "Segmentation Labels for the Pre-operative Scans of the TCGA-LGG collection, 2017b.",
190
+ "author": "Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin Kirby, John Freymann, Keyvan Farahani, and Christos Davatzikos.",
191
+ "venue": "URL https://wiki.cancerimagingarchive.net/x/LIZyAQ.",
192
+ "url": null
193
+ }
194
+ },
195
+ {
196
+ "13": {
197
+ "title": "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features.",
198
+ "author": "Spyridon Bakas, Hamed Akbari, Aristeidis Sotiras, Michel Bilello, Martin Rozycki, Justin S. Kirby, John B. Freymann, Keyvan Farahani, and Christos Davatzikos.",
199
+ "venue": "Scientific Data, 4(1):170117, December 2017c.",
200
+ "url": null
201
+ }
202
+ },
203
+ {
204
+ "14": {
205
+ "title": "Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the brats challenge.",
206
+ "author": "Spyridon Bakas, Mauricio Reyes, Andras Jakab, et al.",
207
+ "venue": "Apr 2019.",
208
+ "url": null
209
+ }
210
+ },
211
+ {
212
+ "15": {
213
+ "title": "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).",
214
+ "author": "Bjoern H. Menze, Andras Jakab, Stefan Bauer, et al.",
215
+ "venue": "IEEE Transactions on Medical Imaging, 34(10):1993\u20132024, October 2015.",
216
+ "url": null
217
+ }
218
+ },
219
+ {
220
+ "16": {
221
+ "title": "3D U-Net based brain tumor segmentation and survival days prediction.",
222
+ "author": "Feifan Wang, Runzhou Jiang, Liqin Zheng, Chun Meng, and Bharat Biswal.",
223
+ "venue": "In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 5th International Workshop, BrainLes 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Revised Selected Papers, Part I, page 131\u2013141, Berlin, Heidelberg, 2019. Springer-Verlag.",
224
+ "url": null
225
+ }
226
+ },
227
+ {
228
+ "17": {
229
+ "title": "Combining noise-to-image and image-to-image GANs: Brain MR image augmentation for tumor detection.",
230
+ "author": "Changhee Han, Leonardo Rundo, Ryosuke Araki, Yudai Nagano, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, and Hideaki Hayashi.",
231
+ "venue": "IEEE Access, 7:156966\u2013156977, 2019.",
232
+ "url": null
233
+ }
234
+ },
235
+ {
236
+ "18": {
237
+ "title": "RISE: Randomized input sampling for explanation of black-box models.",
238
+ "author": "Vitali Petsiuk, Abir Das, and Kate Saenko.",
239
+ "venue": "In Proceedings of the British Machine Vision Conference (BMVC), 2018.",
240
+ "url": null
241
+ }
242
+ },
243
+ {
244
+ "19": {
245
+ "title": "Attention-guided version of 2D UNet for automatic brain tumor segmentation.",
246
+ "author": "Mehrdad Noori, Ali Bahri, and Karim Mohammadi.",
247
+ "venue": "In 2019 9th International Conference on Computer and Knowledge Engineering (ICCKE), pages 269\u2013275, 2019.",
248
+ "url": null
249
+ }
250
+ },
251
+ {
252
+ "20": {
253
+ "title": "Very deep convolutional networks for large-scale image recognition.",
254
+ "author": "Karen Simonyan and Andrew Zisserman.",
255
+ "venue": "arXiv preprint arXiv:1409.1556, 2014.",
256
+ "url": null
257
+ }
258
+ },
259
+ {
260
+ "21": {
261
+ "title": "Adam: A method for stochastic optimization.",
262
+ "author": "Diederik Kingma and Jimmy Ba.",
263
+ "venue": "International Conference on Learning Representations, 12 2014.",
264
+ "url": null
265
+ }
266
+ },
267
+ {
268
+ "22": {
269
+ "title": "Deep residual learning for image recognition.",
270
+ "author": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.",
271
+ "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770\u2013778, 2016.",
272
+ "url": null
273
+ }
274
+ },
275
+ {
276
+ "23": {
277
+ "title": "AINet: Association implantation for superpixel segmentation.",
278
+ "author": "Yaxiong Wang, Yunchao Wei, Xueming Qian, Li Zhu, and Yi Yang.",
279
+ "venue": "2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7058\u20137067, 2021.",
280
+ "url": null
281
+ }
282
+ },
283
+ {
284
+ "24": {
285
+ "title": "Seed, expand and constrain: Three principles for weakly-supervised image segmentation.",
286
+ "author": "Alexander Kolesnikov and Christoph H. Lampert.",
287
+ "venue": "In European Conference on Computer Vision (ECCV). Springer, 2016.",
288
+ "url": null
289
+ }
290
+ },
291
+ {
292
+ "25": {
293
+ "title": "Augmenting convolutional networks with attention-based aggregation.",
294
+ "author": "Hugo Touvron, Matthieu Cord, Alaaeldin El-Nouby, Piotr Bojanowski, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, and Herv\u2019e J\u2019egou.",
295
+ "venue": "arXiv preprint arXiv:2112.13692, 2021.",
296
+ "url": null
297
+ }
298
+ },
299
+ {
300
+ "26": {
301
+ "title": "Weakly Supervised Multiple Instance Learning Histopathological Tumor Segmentation.",
302
+ "author": "Marvin Lerousseau, Maria Vakalopoulou, Marion Classe, Julien Adam, Enzo Battistella, Alexandre Carr\u00e9, Th\u00e9o Estienne, Th\u00e9ophraste Henry, Eric Deutsch, and Nikos Paragios.",
303
+ "venue": "In MICCAI 2020 - Medical Image Computing and Computer Assisted Intervention, pages 470\u2013479, Lima, Peru, October 2020.",
304
+ "url": null
305
+ }
306
+ },
307
+ {
308
+ "27": {
309
+ "title": "The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification, 2021.",
310
+ "author": "Ujjwal Baid, Satyam Ghodasara, Suyash Mohan, Michel Bilello, Evan Calabrese, Errol Colak, Keyvan Farahani, Jayashree Kalpathy-Cramer, Felipe C. Kitamura, Sarthak Pati, Luciano M. Prevedello, Jeffrey D. Rudie, Chiharu Sako, Russell T. Shinohara, Timothy Bergquist, Rong Chai, James Eddy, Julia Elliott, Walter Reade, Thomas Schaffter, Thomas Yu, Jiaxin Zheng, Ahmed W. Moawad, Luiz Otavio Coelho, Olivia McDonnell, Elka Miller, Fanny E. Moron, Mark C. Oswood, Robert Y. Shih, Loizos Siakallis, Yulia Bronstein, James R. Mason, Anthony F. Miller, Gagandeep Choudhary, Aanchal Agarwal, Cristina H. Besada, Jamal J. Derakhshan, Mariana C. Diogo, Daniel D. Do-Dai, Luciano Farage, John L. Go, Mohiuddin Hadi, Virginia B. Hill, Michael Iv, David Joyner, Christie Lincoln, Eyal Lotan, Asako Miyakoshi, Mariana Sanchez-Montano, Jaya Nath, Xuan V. Nguyen, Manal Nicolas-Jilwan, Johanna Ortiz Jimenez, Kerem Ozturk, Bojan D. Petrovic, Chintan Shah, Lubdha M. Shah, Manas Sharma, Onur Simsek, Achint K. Singh, Salil Soman, Volodymyr\nStatsevych, Brent D. Weinberg, Robert J. Young, Ichiro Ikuta, Amit K. Agarwal, Sword C. Cambron, Richard Silbergleit, Alexandru Dusoi, Alida A. Postma, Laurent Letourneau-Guillon, Gloria J. Guzman Perez-Carrillo, Atin Saha, Neetu Soni, Greg Zaharchuk, Vahe M. Zohrabian, Yingming Chen, Milos M. Cekic, Akm Rahman, Juan E. Small, Varun Sethi, Christos Davatzikos, John Mongan, Christopher Hess, Soonmee Cha, Javier Villanueva-Meyer, John B. Freymann, Justin S. Kirby, Benedikt Wiestler, Priscila Crivellaro, Rivka R. Colen, Aikaterini Kotrotsou, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Hassan Fathallah-Shaykh, Roland Wiest, Andras Jakab, Marc-Andre Weber, Abhishek Mahajan, Bjoern Menze, Adam E. Flanders, and Spyridon Bakas.",
311
+ "venue": null,
312
+ "url": null
313
+ }
314
+ },
315
+ {
316
+ "28": {
317
+ "title": "Comparing 3d, 2.5d, and 2d approaches to brain image auto-segmentation.",
318
+ "author": "Arman Avesta, Sajid Hossain, MingDe Lin, Mariam Aboian, Harlan M. Krumholz, and Sanjay Aneja.",
319
+ "venue": "Bioengineering, 10(2):181, February 2023.",
320
+ "url": null
321
+ }
322
+ },
323
+ {
324
+ "29": {
325
+ "title": "H2nf-net for brain tumor segmentation using multimodal mr imaging: 2nd place solution to brats challenge 2020 segmentation task, 2020.",
326
+ "author": "Haozhe Jia, Weidong Cai, Heng Huang, and Yong Xia.",
327
+ "venue": null,
328
+ "url": null
329
+ }
330
+ },
331
+ {
332
+ "30": {
333
+ "title": "Modality-pairing learning for brain tumor segmentation, 2020.",
334
+ "author": "Yixin Wang, Yao Zhang, Feng Hou, Yang Liu, Jiang Tian, Cheng Zhong, Yang Zhang, and Zhiqiang He.",
335
+ "venue": null,
336
+ "url": null
337
+ }
338
+ },
339
+ {
340
+ "31": {
341
+ "title": "Automatic brain tumor segmentation with scale attention network, 2020.",
342
+ "author": "Yading Yuan.",
343
+ "venue": null,
344
+ "url": null
345
+ }
346
+ }
347
+ ],
348
+ "url": "http://arxiv.org/html/2209.09930v2"
349
+ }
20240123/2210.01407v6.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2210.02651v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2211.01758v2.json ADDED
@@ -0,0 +1,514 @@
1
+ {
2
+ "title": "Optimal Algorithms for Stochastic Complementary Composite Minimization",
3
+ "abstract": "Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization in the stochastic setting. This problem corresponds to the minimization of the sum of a (weakly) smooth function endowed with a stochastic first-order oracle, and a structured uniformly convex (possibly nonsmooth and non-Lipschitz) regularization term. Despite intensive work on closely related settings, prior to our work no complexity bounds for this problem were known. We close this gap by providing novel excess risk bounds, both in expectation and with high probability. Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems. We conclude by providing numerical results comparing our methods to the state of the art.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Regularization is one of the most common and successful techniques in stochastic optimization. A regularized objective is given by\nHere, is a closed convex set, 111In our model, we can consider endowed with a first-order stochastic oracle, which is strictly more general than a population loss function. The latter representation is only used as a motivation. represents an expected population loss function, and is a regularization term that promotes a desired structure for the obtained solution, such as sparsity or having low norm.\nTo illustrate more concretely this problem, consider a generalized ridge regression model studied in [18 ###reference_18###]. This model arises in (random design) linear regression, when applying the maximum likelihood principle under Gaussian output noise and prior parameter distribution given by a density , where . This family of densities models the geometry of the target predictor. The resulting model is then\nWe note this model also arises in sparse risk minimization [25 ###reference_25###], where .\nTypically, the two functions in (1 ###reference_###) satisfy complementary properties, such as smoothness for and strong convexity for .\nFurther, in cases such as (2 ###reference_###), is only uniformly convex (when ) [4 ###reference_4###].\nIn this work, we are particularly interested in situations where the underlying norm of the space is non-Euclidean: notice that this norm quantifies the smoothness and strong convexity parameters. Here, it is known that the composite objective (1 ###reference_###) may not simultaneously enjoy smoothness and strong convexity, or that its condition number may increase polynomially with the dimension222This limitation is not specific to composite objectives, but to arbitrary functions. [13 ###reference_13###, 16 ###reference_16###]. This limitation calls for a more nuanced exploitation of the objective\u2019s structure.\nThe complementary composite minimization model has been recently proposed to address this limitation [16 ###reference_16###]. Here, deterministic algorithms that combine gradient computations of with regularized proximal steps on have been proposed. Interestingly, these algorithms attain accelerated linear convergence rates with an effective condition number parameter, that is the ratio between the smoothness constant of with the strong convexity constant of . Our goal in this work is to investigate algorithms for the model (1 ###reference_###) when is endowed with a stochastic first-order oracle."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Contributions",
15
+ "text": "Our work initiates the study of complementary composite minimization (1 ###reference_###) in the stochastic setting. We provide novel algorithms, matching lower complexity bounds, and conclude with numerical experiments to show the benefits of our approach, compared to the state of the art. We remark that our methods are very general, encompassing problems in the form (1 ###reference_###) where is convex and weakly smooth, and is uniformly convex.\nOur algorithms are inspired by the literature on stochastic acceleration for first-order methods [26 ###reference_26###, 19 ###reference_19###, 20 ###reference_20###]. We first provide a non-accelerated algorithm, that we call the non-accelerated composite stochastic mirror-descent (NACSMD), which at every iteration it computes a stochastic gradient of , and uses it to perform a proximal-type step involving the stochastic gradient of and the non-linearized , which is furthermore localized by the use of a Bregman divergence term. Combining this method with a standard restarting scheme, linear convergence (up to a level determined by the noise) is obtained, as shown in the second row of Table 1 ###reference_###. Despite this algorithm not being optimal, it is useful to illustrate the main algorithmic building blocks, and it is straightforward to analyze.\nAs mentioned above, the non-accelerated algorithm is known to be suboptimal, even in the noiseless case, where [16 ###reference_16###]. Therefore, we propose an accelerated counterpart,\nthat we call the accelerated composite stochastic mirror-descent (ACSMD). As usually in acceleration, the method involves a step similar to the non-accelerated method, which is further enhanced by two auxiliary sequences. One of them provides the sequence of points where the stochastic oracle for is queried, and the other provides the sequence of points whose objective value attains the accelerated rate. This type of acceleration does not suffice on its own to get linear convergence, and therefore a similar restarting scheme that the one used for the non-accelerated method provides linear convergence: both results can be found in Table 1 ###reference_###. Interestingly, this complexity bound improves upon previous upper bounds proved in the deterministic setting (i.e., where ) [16 ###reference_16###], showing a more subtle decomposition of the complexity into three terms: a linearly convergent term (called initialization cost in the table) \u2013 involving the square root of the effective condition number \u2013, a polynomial convergence term \u2013 which in the smooth and strongly convex case vanishes \u2013, and a stochastic term \u2013 involving a signal-to-noise type ratio. Further, our results in the stochastic setting are the first of their kind.\nFinally, we remark that our results do not only hold in expectation, but also with high-probability. We achieve this by using concentration inequalities for martingale difference sequences [44 ###reference_44###]. We establish these results\nunder moment generating function (mgf) assumptions for the stochastic oracle, where these bounds are adjusted to the uniform convexity of the regularizer. This framework provides a higher flexibility and it is better suited for the noise assumptions used for the in-expectation results. Furthermore, our restarting analysis is done by studying the random deviations of the whole algorithm, without splitting the concentration analysis among rounds. 
This is in stark contrast of other restarting algorithms that require explicit shrinking of the optimization domain (e.g., [20 ###reference_20###]), which is computationally challenging and degrades the probabilistic guarantee proportionally to the number of rounds. Our advances come from the simple observation that one can unravel the complete recursion of the restarted algorithm in a path-wise way, and then establishing concentration in the usual way (with modified weights, due to the restarts).\nOur accelerated algorithms are nearly optimal in a natural oracle model, where stochastic first-order oracle access to and full access to is assumed. This oracle is a stochastic analog of the oracle model introduced in [16 ###reference_16###]. We extend the results of [16 ###reference_16###], by incorporating the impact of the stochastic oracle into the complexity. Our lower bounds combine those of the deterministic setting [16 ###reference_16###] with an information-theoretic lower bound for stochastic noise, which is based on a Bernoulli oracle. This type of argument has been used in stochastic convex optimization in past work [32 ###reference_32###], and we adapt it to incorporate the uniform convexity of the regularization term.\nWe run our restarted (NAC-SMD and AC-SMD) algorithms on generalized ridge regression problems as described in eqn. (2 ###reference_###). We have tested our algorithms against the state of the art [19 ###reference_19###, 20 ###reference_20###] on synthetic examples with varying dimension and smoothness parameter. These results do not only confirm the validity of our theoretical advances, but show quantitative improvements upon the state of the art, and some further practical benefits, particularly a more moderate computational overhead when the smoothness parameter is overestimated. We consider this feature important, as estimating this parameter can be difficult in practical scenarios."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Related Work",
21
+ "text": "Stochastic convex optimization is an intensively studied topic, which has been widely used to solve large-scale machine learning problems (see e.g., [32 ###reference_32###, 30 ###reference_30###, 26 ###reference_26###, 19 ###reference_19###, 20 ###reference_20###, 40 ###reference_40###, 27 ###reference_27###]). Furthermore, the concept of regularization [41 ###reference_41###], coming from inverse problems and statistics [39 ###reference_39###, 28 ###reference_28###], is a well-established and successful model for solving ill-posed problems with theoretical guarantees. Beyond the classical theory, we emphasize that the use of regularizers that are only uniformly convex (as opposed to strongly convex) has become the focus of various works [25 ###reference_25###, 12 ###reference_12###, 8 ###reference_8###, 1 ###reference_1###, 2 ###reference_2###]. The necessity of this assumption is crucially related to the structure of the Banach spaces where these variational problems are naturally posed.\nPrevious works on stochastic composite minimization (e.g., [26 ###reference_26###]) require strong convexity and smoothness of to attain linear convergence. For the complementary setting, where the strong convexity assumption only holds for the regularizer , results are rare and typically provide upper bounds only in Euclidean settings (see, e.g. [22 ###reference_22###]). Furthermore, the approach in [22 ###reference_22###] is not compatible with the restarting scheme algorithm suggested in [20 ###reference_20###] (called multistage in that paper); note that all existing linearly convergent methods in the stochastic setting use such restarts. And even for the optimal performance, the convergence proof presented in the article [20 ###reference_20###] requires an assumption about the proximal function to be lower bounded and upper bounded by : by contrast, our approach does not need this assumption.\nAlthough not particularly focused on complementary settings, the work of Juditsky and Nesterov [24 ###reference_24###] (together with the classical monograph [32 ###reference_32###]) is one of the few that studies uniformly convex objectives in the stochastic setting. The setting of this paper is slightly different from ours: the stochastic objectives considered are nonsmooth and uniformly convex, and the space is endowed with a strongly convex distance generating function. Although we can adapt our techniques to extend those of [24 ###reference_24###], we have omitted these results for brevity. Other recent works focused on weak moment assumptions for the stochastic gradients (possibly with infinite variance) [43 ###reference_43###], but the approach was done only for non-smooth optimization and in a non composite setting. In particular, no form of acceleration can be obtained there.\nThe closest work to ours is that of deterministic complementary composite minimization [16 ###reference_16###], which establishes the convergence of accelerated dual averaging algorithms in this setting. This work is the main inspiration for both algorithmic design, step-size schedules, as well as the lower bounds. We note however that, even in the deterministic settings, our upper bounds are sharper, which we attribute to the great flexibility of our step-size policy and restarting schedule. Independently, in [11 ###reference_11###] composite acceleration under the lens of relative Lipschitzness and relative strong convexity was obtained by application of extragradient-type algorithms. 
Again, this derivation [11 ###reference_11###, Thm. 4] is only made for deterministic objectives.\nAt the technical level, we have extended the proof in [19 ###reference_19###] to exploit the uniform/strong convexity of the regularizer, and our analysis provides a more flexible choice of step-size parameters. For the accelerated method, we also mix the AGD+ step-size from [16 ###reference_16###], with the usual ones in [19 ###reference_19###] to create our own sequence of step-sizes, a particular point is that the choice becomes more intuitive and it is not unique anymore.\nIndependently and concurrently to our work, Dubios-Taine at al. [17 ###reference_17###] studied stochastic composite minimization in a smooth plus strongly convex setting (a particular case of our work). However, their algorithm only obtains constant accuracy under constant noise, as opposed to our vanishing and optimal accuracy bounds. Here as well, it appears that our advantage comes from the flexibility of the step-size schedule."
22
+ },
23
+ {
24
+ "section_id": "2",
25
+ "parent_section_id": null,
26
+ "section_name": "Preliminaries",
27
+ "text": "We introduce here several notions which are relevant for our work. In what follows, we let . For the algebraic and ordering properties of this space, see e.g., [5 ###reference_5###].\nLet and . A function subdifferentiable on its domain is -uniformly-convex w.r.t. a norm if\n\nIn the case of , the definition of uniform convexity coincides with the more well-known notion of strong convexity. In that case, it is known that the function , where , is -uniformly convex w.r.t. .\nAnother example, the negative entropy, defined as if (the standard unit simplex in ), and otherwise; is -uniformly convex w.r.t. . For these two examples we refer the reader to [5 ###reference_5###, Section 5.3.2].\nNow let us consider the case of . Then, it is possible to show that is -uniformly convex w.r.t. . We provide more details in Appendix A ###reference_###. All the previous examples can be extended to their spectral counterparts, namely the Schatten spaces , where given the spectrum of a matrix , , we define its Schatten norm as (see e.g. [4 ###reference_4###, 23 ###reference_23###]). These matrix counterparts arise naturally in linear inverse problems [38 ###reference_38###, 36 ###reference_36###].\nIn what follows, we denote by any subgradient of a function at point . This is only for notational convenience, and it can be done without loss of generality.\nLet and . A differentiable function is -weakly smooth w.r.t. a norm if\n\nLet be a convex function which is continuously-differentiable on the interior of its domain, we define the Bregman divergence of as\nNote that if is (uniformly) convex, then the Bregman divergence is (uniformly) convex on its first argument.\nThe following result is a consequence of the three-points identity [10 ###reference_10###]. We note that [26 ###reference_26###] follows a similar route, where a negative Bregman divergence term is upper bounded by zero. We maintain this term, as it is crucial for our improved rates.\nLet be a convex function and be convex and continuously differentiable. If we consider\nthen for all :\n\nFrom the first order optimality conditions, for all :\nwith the gradient taken with respect to the first entry. We also apply the three-points identity from [10 ###reference_10###]:\nThen:\n\nGiven parameters we define the following parameters to simplify notation:\nNow we introduce a key lemma, which first arose in a more restricted form in [15 ###reference_15###] in the context of methods with inexact gradients, and has later been used to bridge uniform convexity and uniform smoothness inequalities in first-order methods as in [35 ###reference_35###], [13 ###reference_13###] and [16 ###reference_16###]. Here, we use a homogeneous version of the lemma.\nFrom notation 3 ###reference_###, if is -weakly smooth, then for all\n\nFor , we know \nNow use the Young inequality as in [13 ###reference_13###], for we have We consider , , , , we scale everything with :\nwhere the middle term comes from\nplugging this bound back in the first step of the proof shows the result."
28
+ },
29
+ {
30
+ "section_id": "2.1",
31
+ "parent_section_id": "2",
32
+ "section_name": "The Stochastic Oracle Model",
33
+ "text": "We are interested in studying problem (1 ###reference_###) in a natural oracle model, that we will refer to as the stochastic composite oracle model. We make the following assumptions:\nis convex and -weakly smooth.\nis -uniformly convex, continuously differentiable, and dom.\nNotice that under these assumptions, problem (1 ###reference_###) has a unique solution, that we will denote by .\nNow we proceed to specify the oracle assumptions for both functions.\nFrom the notation introduced in (3 ###reference_###), we assume the existence of an oracle that for any given provides a random variable such that\n\nThe first equation states that that is an unbiased estimator of the gradient . On the other hand, the second equation controls the -th moment of the noise of this oracle. Notice that by the Jensen inequality this assumption is more restrictive for higher values of .\nOur algorithms will be based on the mirror-descent method. For this, we will use the regularizer as our distance-generating function (dgf) which is continuously differentiable on the interior of its domain and is -uniformly convex.\nWe introduce the standard assumption on the computability of the prox-mapping for the dgf [30 ###reference_30###].\nWe assume that for any linear function , the problem below can be solved efficiently,\n\nNotice also that Assumption 2 ###reference_mption2### implies the computability of subproblems involving the Bregman divergence, for any .\nFor convenience, we introduce the gradient noise random variable,\nTo derive high probability accuracy bounds, we will use Bernstein-type concentration inequalities [44 ###reference_44###], with adaptations regarding the exponent . The usual assumption in the literature relates to a sub-Gaussian tail bound on the norm of the stochastic oracle error (see, e.g. [30 ###reference_30###, 26 ###reference_26###, 19 ###reference_19###, 20 ###reference_20###]). However, weaker moment bounds such as Assumption (1 ###reference_mption1###) with are inconsistent with sub-Gaussian tails.\nWe assume that given a sample , for all\n\nWe notice that this assumption implies the bound (6 ###reference_###) in Assumption 1 ###reference_mption1###:\nThis assumption gives also an upper bound for inner products of the gradient noise, which is straightforward from the H\u00f6lder inequality, thus we omit its proof.\nSuppose that , for some . Then, under Assumption 3 ###reference_mption3###, if we let , then:\n\nIn Appendix C ###reference_### we derive the necessary concentration inequalities for these random variables, as well as their respective martingales. Although these results are not entirely new (see e.g., [9 ###reference_9###, 45 ###reference_45###]), we include these analyses for completeness, and since they are not common in the optimization community. Moreover, our derivations work directly on the moment generating functions, avoiding the smoothing (also called \u201cstandardization\u201d) approaches carried out in the aforementioned works."
34
+ },
35
+ {
36
+ "section_id": "2.2",
37
+ "parent_section_id": "2",
38
+ "section_name": "Restarting scheme",
39
+ "text": "Finally, regarding linear convergence rates,\nthere is a key technique of restarting an algorithm multiple times to reduce the initialization error exponentially fast. In our context, the idea was introduced\nin [20 ###reference_20###] with the restarting procedure occurring at every -th iteration. In that reference, the authors have also used the assumption of an existing strongly convex proximal function (distance generating function), which is not needed in our algorithm.\nHere we will suggest a simpler analysis of the restarting algorithm for faster convergence in expectation. Instead of restarting every iterations, we will simply restart the algorithm periodically.\nThe general idea of the restarting scheme is to fully use the recursive form of the Bregman divergence term which appear on both sides of the accuracy guarantee (we will show that our algorithms have this feature in the next section). Leveraging that implicit distance guarantee, the algorithm can exponentially boost its convergence rate by only increasing its other polynomially convergent terms by an absolute constant factor (namely, 3). The following lemma, whose proof is deferred to Appendix B ###reference_###, illustrates this.\nConsider the output of an algorithm. If there exist and such that we know for all :\nwhere and are random variables.\nThen for the output of the Restarting Algorithm 1 ###reference_### with , we have that\n\nWe precise here that means that we have greater upper-bounds for than for both in expectation and with high probability, up to constant factors. Here is the implicit distance that we are reducing and all terms related to are the increased costs."
40
+ },
41
+ {
42
+ "section_id": "3",
43
+ "parent_section_id": null,
44
+ "section_name": "Algorithms",
45
+ "text": "We now proceed to introduce and analyze the algorithms for the stochastic complementary composite minimization problem. For the convergence analysis, we need to introduce a partial linearization of the objective .:\nNotice that if , this corresponds to the first order Taylor approximation of the objective, however in the complementary composite setting, we only linearize the term that can be linearly approximated, namely . By convexity and weak smoothness of , we know for all :"
46
+ },
47
+ {
48
+ "section_id": "3.1",
49
+ "parent_section_id": "3",
50
+ "section_name": "Non-Accelerated Method",
51
+ "text": "Our first method is a non-accelerated composite stochastic mirror-descent (NACSMD) method. This method has some resemblance to the classical stochastic mirror-descent method [32 ###reference_32###, 30 ###reference_30###], with the difference that is not linearized in the subproblem, an idea that traces back to the proximal-gradient and composite minimization literature [6 ###reference_6###, 33 ###reference_33###].\nThe updates above require two sequences of step-sizes . On the other hand, we require the step-size schedule to satisfy the following conditions.\nLet be such that for all :\n\nWe notice that the constraints above have multiples solutions; for example, we can consider polynomial step-sizes with various degrees. This implies a high degree of flexibility for our methods. The related convergence rates will be derived from the following result.\nSuppose Algorithm 2 ###reference_### runs under the step-size schedule 1 ###reference_iguration1###. Then, if and if we let , for all :\nwhere are defined in eqn. (3 ###reference_###).\nNotice the result above provides both a guarantee on the optimality gap and on the distance to the optimal solution (when choosing ), as is expected for a uniformly convex program.\nBy the proximal lemma (Lemma 2.5 ###reference_heorem5###) applied to , , , , and , we have:\nCombining with the inequality of the proximal gradient equation (10 ###reference_###) and adding on both sides, we have for all arbitrary gap from Lemma 2.7 ###reference_heorem7###:\nwhere in the last inequality we used the convexity of to upper bound , as well as the -uniform convexity of to upper bound the last term.\nNow we need to give an upper bound . For this, we will use the Young inequality, we will fix the gap value :\nCombining with the above bounds, we have:\nSumming the previous equation from to , we get\nNow we will use the convexity of the problem. From the Jensen inequality, as is convex, we can aggregate the left hand side by considering the weighted sequence, \nThen, after rearranging terms for all :\nconcluding the proof.\nThe previous result applies almost surely and under any step-size sequence (as long as the Step-Size Schedule 1 ###reference_iguration1### constraint is satisfied). We now focus on a family of step-sizes that increase polynomially with ,\nThe first condition comes from the fact that we need and to be divergent, that require a minimum degree of polynomials. We also notice that one can choose higher polynomial degree , but that would change the convergence only up to a constant factor. Now we present the final result for NACSMD.\nUnder Assumption 1 ###reference_mption1### and choosing step-sizes as in (18 ###reference_###), if , we have for all :\nwhere and omits absolute constants that depend on . Moreover, if is bounded with diameter and if we note the previous upper bound for , under Assumption 3 ###reference_mption3###, we have for all :\nwith omits another absolute constant that depend on .\nFor the case where , we note that the inexact gradient trick is not needed to obtain the convergence rate. A similar in-expectation result can be obtained:\nAnd the concentration result stays the same. The details are left to the reader.\nThe proof for the in-expectation guarantee follows directly from Theorem 3.1 ###reference_heorem1###. 
For the high-probability guarantee, we defer its proof to Appendix D.1 ###reference_###.\nThe in-expectation guarantee (19 ###reference_###) in Theorem 3.1 ###reference_heorem1### shows a decomposition of the accuracy into three terms, which we denote respectively as initialization, geometric gap, and variance. Regarding the initialization, we expect that in uniformly convex settings this term can be decreased exponentially fast, which we will achieve by a restarting strategy; the geometric gap exhibits the polynomial convergence rates observed in (non-strongly) convex optimization; finally, the last term reflects the statistically optimal rates inherent to stochastic convex optimization.\nLastly, we want to apply the restarting algorithm (Algorithm 1 ###reference_###) presented before to reduce the complexity of the first term. The restarting lemma (Lemma 2.10 ###reference_heorem10###) reduces the complexity due to the initialization term with . To apply the lemma, the coefficient and degree related to the initialization term will be fixed at . Similarly for the geometric gap term , for the variance term , and for the centered noise term . We emphasize that means proportional up to a constant that depends only on . Therefore, the final complexity for the in-expectation bound is:\nAlso, the additional high-probability cost (i.e., the gap not exceeding more than with probability ) for small enough is:"
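As a rough illustration (not the paper's exact pseudocode), the following sketch mimics the NACSMD update in a Euclidean setup with a quadratic composite term, for which the un-linearized prox step has a closed form. The constants `c`, `deg`, and `lam` are hypothetical stand-ins for the elided quantities in Step-Size Schedule 1 and eqn. (18).

```python
import numpy as np

def nacsmd(grad_oracle, x0, T, lam=0.1, c=0.1, deg=1):
    """Minimal sketch in the spirit of Algorithm 2 with psi(x) = lam/2 ||x||^2.
    Step sizes grow polynomially, eta_k = c * k**deg, mirroring the polynomial
    schedules of eqn. (18); the paper's exact constants are elided, so c and
    deg are placeholders."""
    x = x0.copy()
    avg, wsum = np.zeros_like(x0), 0.0
    for k in range(1, T + 1):
        eta = c * k ** deg
        g = grad_oracle(x)                       # stochastic gradient of f only
        # argmin_z  eta * (<g, z> + lam/2 ||z||^2) + 0.5 ||z - x||^2
        x = (x - eta * g) / (1.0 + eta * lam)    # psi is NOT linearized
        w = float(k ** deg)                      # polynomial averaging weight
        avg, wsum = avg + w * x, wsum + w
    return avg / wsum                            # weighted ergodic output
```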
52
+ },
53
+ {
54
+ "section_id": "3.2",
55
+ "parent_section_id": "3",
56
+ "section_name": "Accelerated Method",
57
+ "text": "We propose now an accelerated counterpart of the NACSMD algorithm, that we will call the accelerated composite stochastic mirror-descent (ACSMD) method. For this, we follow the approach from [20 ###reference_20###] which attains acceleration by maintaining two sequences of averaged points, one for querying the stochastic first order oracle for , and another for attaining the faster convergence. In the deterministic setting, this algorithm is comparable to the AGD+ algorithm used in [16 ###reference_16###] to obtain acceleration in complementary composite settings. However, there is another distinction, as our method follows a mirror-descent style update, rather than the dual averaging approach pursued in [16 ###reference_16###]. Comparing both methods, our algorithm has a more flexible choice of step-sizes as we will see in the Step-Size Schedule 2 ###reference_iguration2###, not only there are multiple solutions, but we also show that they all attain optimal rates; in particular, for different polynomial step-size schedules, their accuracy only differs by a constant factor.\nLet be two sequences such that:\n\nFor two chosen sequences of step-sizes , we have a general convergence result.\nSuppose Algorithm 3 ###reference_### runs under the step-size schedule 2 ###reference_iguration2###.\nThen, if and if we denote , for all , :\n\nWe notice that acceleration factors appear in the last term with , which represents the geometric gap.\nApplying the proximal lemma (Lemma 2.5 ###reference_heorem5###) to , , , , and ,\nwe have:\nwhere we used eqn. (10 ###reference_###) in the last line. Then:\nNext, we use the smoothness of , and the convexity of both and . Let to be determined later: by Lemma 2.7 ###reference_heorem7###,\nwhere in the last inequality we used the convexity of , specifically\n.\nNow, using (24 ###reference_###), we have for all :\nLet now . From the Young inequality, we have:\nCombining everything, we have:\nAdding all of our terms from to ,\nwe obtain\nDue to the step-size schedule 2 ###reference_iguration2###, the last term is nonpositive, thus upper bounding it by zero proves the result.\nAs before, there is a family of step-sizes that increases polynomially with :\nWith these parameters, we can state an expected excess risk bound for our accelerated algorithm as follows.\nUnder Assumption 1 ###reference_mption1### and choosing step-sizes as in (26 ###reference_###), if , we have for all :\nwhere and omits absolute constants that depend on . Moreover, if is bounded with diameter and if we note the previous upper bound for , under Assumption 3 ###reference_mption3###, we have for all :\nwith omits another absolute constant that depend on .\nSimilarly to Theorem 3.3 ###reference_heorem3###, if , then the inexact gradient trick is unnecessary, and with minor adaptations to the proof, the rate below follows\nFor the high probability bound, an analog of eqn. (27 ###reference_###) holds, where is replaced by . The details are left to the reader.\nThe in-expectation guarantee follows directly from Theorem 3.5 ###reference_heorem5###, hence its proof is omitted.\nThe proof of the concentration bound is deferred to Appendix D.1 ###reference_###. Moreover, we combine the previous result with the restarting algorithm (Algorithm 1 ###reference_###) to reduce the initialization error with . 
As before, that includes new coefficients for the initialization term and geometric gap , but the stochastic terms stay the same; this reflects the inherent nature of these terms in the accuracy. The final complexity for the in-expectation guarantee becomes:\nThe additional high-probability cost (i.e., the gap not exceeding more than with probability ) for small enough is:\nWe notice that even if we apply our algorithm in the deterministic case , the convergence rate is sharper than the one in [16 ###reference_16###]. We will also see in the next subsection that our current stochastic bound is optimal when we only have an estimator of the gradient with finite -th moment."
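A comparable sketch of the accelerated variant, again with placeholder coefficients (`beta` and `eta` below are illustrative, not the elided Step-Size Schedule 2), shows the two-sequence structure in the style of [20]: the oracle is queried at an extrapolated point, while a second, averaged sequence carries the output.

```python
import numpy as np

def acsmd(grad_oracle, x0, T, lam=0.1, c=0.1):
    """Minimal sketch in the spirit of Algorithm 3: two averaged sequences,
    one (y) for querying the stochastic oracle and one (xbar) for the
    accelerated rate.  psi(x) = lam/2 ||x||^2 is kept exact in the prox;
    beta_k and eta_k are illustrative stand-ins for the elided schedule."""
    x, xbar = x0.copy(), x0.copy()
    for k in range(1, T + 1):
        beta = 2.0 / (k + 1)                 # classical acceleration weight
        eta = c * k                          # polynomial step size
        y = (1 - beta) * xbar + beta * x     # extrapolated query point
        g = grad_oracle(y)                   # stochastic gradient of f at y
        # Composite prox step: psi is not linearized.
        x = (x - eta * g) / (1.0 + eta * lam)
        xbar = (1 - beta) * xbar + beta * x  # averaged (output) sequence
    return xbar
```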
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Lower Complexity Bounds",
63
+ "text": "To show the near-optimality of our algorithms, we provide matching lower bounds in all parameter settings. These lower bounds are obtained by combining (deterministic) oracle complexity bounds for complementary composite minimization from past work, together with lower bounds applicable to stochastic oracles that satisfy Assumption 1 ###reference_mption1###. Due to space limitations, we do not provide a detailed description of the oracle model, recommending as references [31 ###reference_31###, 29 ###reference_29###]. In general terms, this model captures the interaction of an algorithm with an instance only through queries (corresponding to feasible solutions) whose oracle answers depend only locally on the function (e.g., its gradient). After this interaction, the algorithm must commit to a candidate solution (this choice can even be randomized, but it must be based exclusively on the information collected so far), and the efficiency of the method is determined by the number of queries it performs. Further, the algorithm must provide an output (which either in expectation or with high probability) has suboptimality gap at most"
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Deterministic Lower Complexity Bound",
69
+ "text": "The oracle complexity of deterministic complementary composite minimization was first studied in [16 ###reference_16###]. We refer the reader to this work for a more precise description of the oracle model, which \u2013 in a nutshell \u2013 assumes exact first-order oracle access to and full access to .\nConsider the space , where . Then, the oracle complexity of complementary composite minimization problems where is -weakly smooth, is -uniformly convex and has -diameter at least , is lower bounded by:\nwhere is bounded below by an absolute constant, and\nwhere is a universal constant.\nThis result is applicable to all settings of and , for arbitrary choices of , however the lower bound only applies to sufficiently large values of . This limitation is inherent, as when the uniform convexity parameter becomes sufficiently small better complexity rates are obtained by non-uniformly convex stochastic optimization. Note moreover that the sufficiently large condition \u2013 given by \u2013 scales polynomially with the target accuracy, hence the restriction is mild.\nIn the case , the lower bound of the theorem, , is nearly tight in the case , due to the upper bound ; an analogous rate was obtained in [16 ###reference_16###]. We defer the case to the next subsection. On the other hand, when we obtain a polynomial lower bound . Our upper bound in this setting when is , hence the lower bound is nearly-optimal up to a poly-logarithmic additive term. Regarding this gap, we remark that our result is a refinement of results in [16 ###reference_16###], where the logarithmic term appears multiplicatively in the second term of our upper bound."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Stochastic Lower Complexity Bound",
75
+ "text": "Our stochastic lower bound is inspired by a very classical argument [31 ###reference_31###], which we extend to the uniformly convex setting, as well as extending to arbitrary moment parameter .\nConsider the class of problems (1 ###reference_###) where , , and satisfying\nUnder Assumptions 1 ###reference_mption1### and 2 ###reference_mption2###, any algorithm for this problem class is such that, for any , with probability it fails on achieving accuracy after\nmany queries to a certain stochastic first-order oracle.\nFirst, we give an overview of the approach. We will consider a 1-dimensional instance of the form , and , where\n can be adversarially selected, and is a binary random variable that takes the value 0 with probability and the value with probability ; we will choose very small. Notice that tilts the optimal solution to the right or left of the origin, and we will show that in fact learning is necessary and sufficient to accurately minimize the composite objective. The key idea is that the oracle in this case corresponds to samples from , which only rarely provide ; in this case, we are unable to learn the parameter , and essentially cannot make any progress. Hence, controlling the probability that samples from are all zero suffice to assert that the algorithm is unlikely to succeed in terms of objective function value.\nLet and be a random variable that takes value 0 w.p. and value w.p. . Clearly, . Given and to be determined, consider the following functions:\nThus, the objective satisfies the complementary composite structure: namely, the objective is composed by a -smooth function (where in fact ) plus a -uniformly convex function.\nIn what follows, we will determine values of and such that the -level sets of and are disjoint, and that Assumption 1 ###reference_mption1### is satisfied\nLevel set disjointness: By the optimality conditions for , it is easy to see the optimal solution is , and hence\nHence, the condition below suffices to have the property that the -level sets of and are disjoint\nCentered moment bounds. Let us compute the -th moment of our stochastic oracle:\nWe want to impose that this moment is upper bounded by , which is equivalent to:\nNotice that by (29 ###reference_###), we have that , which in turn implies , so it suffices that\nFinally, notice the left hand side is a monotonically increasing function of , and that when it converges to 0, whereas when it diverges to . Hence, there exists a unique choice of such that equality is satisfied. From now on, we make this choice of .\nThe proof is concluded by noticing that the probability that samples provide is . Notice that under this event, the algorithm has collected no information about . Therefore, if we choose uniformly at random, the expected suboptimality of the algorithm will be at least . We finally lower bound the probability of the event above, for which we use the elementary inequality , for :\nNotice now that for our choice of , we have:\nFinally, making the choice of shows the probability is at least , proving the result.\nTo conclude this Section, we briefly discuss the consequences of the separate lower bounds proved in Theorems 4.1 ###reference_heorem1### and 4.2 ###reference_heorem2###; in particular: Do they imply a lower bound given by the sum of the two? 
The answer is yes (possibly with a degradation by an absolute constant factor), and the argument is the following: Consider an adversary which first tosses a fair coin, and based on its outcome it selects either the family of instances from the proof of Theorem 4.1 ###reference_heorem1###, or alternatively it selects the family of instances from the proof of Theorem 4.2 ###reference_heorem2###. Then, for any algorithm, its expected running time against this random instance must be proportional to the sum of the two lower bounds. Furthermore, if the algorithm is deterministic, then we can derandomize this choice, making the lower bound hold with high probability."
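The mechanism behind the stochastic lower bound can be checked numerically. The sketch below (with hypothetical parameter names; the paper's exact values are elided here) estimates the probability that all T oracle samples are uninformative, which matches the closed form (1 - delta)^T used in the proof: with delta of order 1/T, the uninformative event keeps constant probability, so no algorithm can reliably learn the hidden sign.

```python
import numpy as np

def prob_all_zero(delta, T, trials=100_000, seed=0):
    """Empirical probability that all T oracle samples equal 0 when each
    sample is informative only with probability delta (a Monte Carlo check
    of the (1 - delta)**T event in the lower-bound proof)."""
    rng = np.random.default_rng(seed)
    informative = rng.binomial(T, delta, size=trials)  # informative count per run
    return np.mean(informative == 0)

for T in (10, 100, 1000):
    # With delta = 1/T, both columns hover around exp(-1) ~ 0.368.
    print(T, prob_all_zero(1.0 / T, T), (1.0 - 1.0 / T) ** T)
```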
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Numerical results",
81
+ "text": "We apply our methods to the generalized ridge regression problem, as presented in eqn. (2 ###reference_###),\nWe generate synthetic data from a uniform distribution and Gaussian noise:\nWe know that the loss function is smooth and strongly convex with respect to the norm, but the condition number depends on the dimension:\nDimension dependence arises from the following chain of inequalities (each of which can be tight in the worst case):\nHence, the condition number of can be upper bounded by .\nOn the other hand, the regularizer is -uniformly convex with respect to the norm (see, e.g., [4 ###reference_4###]). To check the performance of the algorithms, we choose the following setting. Below we denote by the strong convexity parameter of :\n###figure_1### To better evaluate the performance of different algorithms, we first pick and , we run multiple simulations with different parameters of and , then we measure the needed number of iterations to achieve the relative precision. For the step size, we choose as a constant for NACSMD and different polynomials for ACSMD.\nNow we can briefly analyze the results we have obtained. We emphasize three aspects that illustrate the benefits of our accelerated method."
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "Acknowledgments",
87
+ "text": "Research partially supported by an INRIA Associate Teams grant.\nCG\u2019s research was partially supported by FONDECYT 1210362 grant and National Center for Artificial Intelligence CENIA FB210017, Basal ANID."
88
+ }
89
+ ],
90
+ "appendix": [
91
+ {
92
+ "section_id": "Appendix 1",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix A Example of uniform convexity",
95
+ "text": "We remind the context of the uniform convexity of . We would like to show that, for all , .\nBy the separability on and , we only need to show the result in dimension one, which means that, for all\nwhich is proved in [46 ###reference_46###, Proposition 3.2]."
96
+ },
97
+ {
98
+ "section_id": "Appendix 2",
99
+ "parent_section_id": null,
100
+ "section_name": "Appendix B Analysis of the restarting algorithm",
101
+ "text": "of Lemma 2.10 ###reference_heorem10###.\nThe proof is composed into two parts, we will first analyze the output in the rounds of fixed length after each iteration. Then once the initialisation error(related to ) is considerably reduced, we will analyse the output after the remaining iterations to show that the complexity costs for other terms (related to ) have at most doubled.\nFor the first iterations rounds, we notice that the assumption in the lemma 2.10 ###reference_heorem10### provide a recursive form for the proximal function. If we call the restarting point that we use for the -th round and the output:\nwith random variables appeared in -th round. We realize that the distance to the optimal solution has almost been halved compared to our initial distance and we are paying a constant cost related to . In other words, we have a recursion of the form , where is a constant.\nNow we only need to remind that the restarting point is the ending of the previous epoch: . For each round, we are paying the same constant price, but since the scale is halved each time, the sum of them is converging. For example if :\nThus, we have by induction for :\nThe remaining part is to run iterations with the new starting point with the final output:"
102
+ },
103
+ {
104
+ "section_id": "Appendix 3",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix C Concentration inequalities",
107
+ "text": "Let be a random variable that satisfies (9 ###reference_###). Then\n\nFirst, consider the case .\nBy Markov\u2019s inequality:\nThen we can also calculate the moments, for :\nwhere the gamma function. Hence as we know for and :\nNext, for the case ,\nwe use the Young inequality:\n\nAs mentioned earlier, most of the approaches in the literature one work with a smooth surrogate of the exponential mgf [9 ###reference_9###, 45 ###reference_45###]. On the other hand, our approach works directly with the mgf.\nWe now state the concentration bounds derived for martingale difference sequences under the mgf bound given in Assumption 3 ###reference_mption3###.\nLet be a martingale difference sequence with respect to (i.e., for all ) such that conditionally on satisfies Assumptions 1 ###reference_mption1### and 3 ###reference_mption3###. For all , if we consider and , then:\n\nBefore giving the main idea of the proof, we first notice that by tower property of conditional expectations:\nHence, we start the proof by using the standard Cr\u00e1mer-Chernoff bound, in conjunction with Lemma C.1 ###reference_heorem1###:\nIf , we can minimize the upper bound above, which is attained at . Therefore:\nElse when , we just consider :\nSimilarly for all :\nIf , the infimum above is attained at , which lies in the interval and since :\nelse if we just consider :\n\nFor an algorithm working in the composite oracle model, let\u2019s consider . Suppose conditionally on (where the stochastic gradient in iteration is ) satisfies assumptions 1 ###reference_mption1### and 3 ###reference_mption3###. Consider a polynomial step-size sequence with . Then for all ,\nwith some constants that depend on only.\n\nThe idea of the proof is to apply the previous result with and we define by the identity\nThen we obtain after multiplying by :\nWe finish the proof by considering and by noticing that:"
108
+ },
109
+ {
110
+ "section_id": "Appendix 4",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix D Details of proofs in Section 3",
113
+ "text": "Since the stochastic terms for NACSMD and ACSMD are almost the same, we will consider the following notation:\nThe only difference in the acceleration method is that we have instead of , but notice their stochastic noise is of the same kind.\nTo simplify the notation, in this subsection we will note: From Markov inequality, we know that for :\nNow we use convexity of exponential and linearity of the expectation:\nWe obtain that:\nSince under step-sizes schedule 1 ###reference_iguration1### or 2 ###reference_iguration2###, we have , combining with Theorem C.5 ###reference_heorem5###, we know if :\nand if :"
114
+ }
115
+ ],
116
+ "tables": {
117
+ "1": {
118
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S1.T1.10\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S1.T1.10.11.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S1.T1.10.11.1.1\" style=\"padding-bottom:2.15277pt;\">Complexity</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.10.11.1.2\" style=\"padding-bottom:2.15277pt;\">Initialization cost</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.10.11.1.3\" style=\"padding-bottom:2.15277pt;\">Deterministic cost</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S1.T1.10.11.1.4\" style=\"padding-bottom:2.15277pt;\">Stochastic cost</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S1.T1.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_tt\" id=\"S1.T1.3.3.4\">NACSMD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_l ltx_border_r\" id=\"S1.T1.6.6.4\">ACSMD</th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.10.10\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r\" id=\"S1.T1.10.10.5\">Lower bound</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S1.T1.8.8.2\">\n \n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S1.T1.9.9.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S1.T1.10.10.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S1.T1.22.6.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S1.T1.20.5\" style=\"font-size:90%;\">Summary of upper and lower complexity bounds in the paper (up to constant factors that may depend on ). The complexity is decomposed as the sum of three different terms, where the first two of them also arise in deterministic settings <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib16\" title=\"\">16</a>]</cite>. The results applies for with , except the lower bound which is applicable only when .</span></figcaption>\n</figure>",
119
+ "capture": "Table 1: Summary of upper and lower complexity bounds in the paper (up to constant factors that may depend on ). The complexity is decomposed as the sum of three different terms, where the first two of them also arise in deterministic settings [16]. The results applies for with , except the lower bound which is applicable only when ."
120
+ },
121
+ "2": {
122
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.4.5.1.1\" style=\"padding-bottom:2.15277pt;\">Iteration required</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.5.1.2\" style=\"padding-bottom:2.15277pt;\">Lan</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.5.1.3\" style=\"padding-bottom:2.15277pt;\">NACSMD</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.5.1.4\" style=\"padding-bottom:2.15277pt;\">ACSMD1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.5.1.5\" style=\"padding-bottom:2.15277pt;\">ACSMD2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.5.1.6\" style=\"padding-bottom:2.15277pt;\">ACSMD3</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.1\">\n=20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.2\">91</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.3\">81</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.4\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.5\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.1.6\">14</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.1\">\n=50</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2\">110</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.3\">66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.4\">26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.5\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.6\">12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.1\">\n=100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.2\">145</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.3\">82</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.4\">33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.5\">21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.6\">15</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.1\">\n=200</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.2\">138</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.3\">76</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S5.T2.4.4.4\">26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.5\">17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.6\">12</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.6.1.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.7.2\" style=\"font-size:90%;\">Simulation results with different dimensions.</span></figcaption>\n</figure>",
123
+ "capture": "Table 2: Simulation results with different dimensions."
124
+ },
125
+ "3": {
126
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.5.6.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.5.6.1.1\" style=\"padding-bottom:2.15277pt;\">Iteration required</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.5.6.1.2\" style=\"padding-bottom:2.15277pt;\">Lan</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.5.6.1.3\" style=\"padding-bottom:2.15277pt;\">NACSMD</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.5.6.1.4\" style=\"padding-bottom:2.15277pt;\">ACSMD1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.5.6.1.5\" style=\"padding-bottom:2.15277pt;\">ACSMD2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.5.6.1.6\" style=\"padding-bottom:2.15277pt;\">ACSMD3</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.2\">110</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.3\">66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.4\">26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.5\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.1.1.6\">12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.2.2\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.2\">149</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.3\">84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.4\">33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.5\">21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.6\">15</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.2\">114</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.3\">123</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.4\">26</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.5\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.6\">12</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.2\">228</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.3\">266</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.4\">28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S5.T3.4.4.5\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.6\">13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.2\">457</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.3\">&gt;999</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.4\">31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.5\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.6\">14</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.11.3.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.9.2\" style=\"font-size:90%;\">Simulation results with overestimated parameter ().</span></figcaption>\n</figure>",
127
+ "capture": "Table 3: Simulation results with overestimated parameter ()."
128
+ }
129
+ },
130
+ "image_paths": {
131
+ "1": {
132
+ "figure_path": "2211.01758v2_figure_1.png",
133
+ "caption": "Figure 1: Performance comparison between Algorithms 2, 3 and the one suggested in [20] with the restarting scheme 1. We evaluate the decreasing speed of the log relative error through iterations for an extended Ridge regression problem.",
134
+ "url": "http://arxiv.org/html/2211.01758v2/extracted/5363874/figures/Ridge_q=10.0_d=50_Coeff=1.0.png"
135
+ },
136
+ "2": {
137
+ "figure_path": "2211.01758v2_figure_2.png",
138
+ "caption": "Figure 2: Performance comparison for Algorithm 3 with different polynomial degrees as described before without the restarting scheme. We evaluate the decreasing speed of the log relative error through iterations for an extended Ridge regression problem.",
139
+ "url": "http://arxiv.org/html/2211.01758v2/extracted/5363874/figures/Non_restart_Ridge_q=10.0_d=50_Coeff=1.0.png"
140
+ }
141
+ },
142
+ "validation": true,
143
+ "references": [
144
+ {
145
+ "1": {
146
+ "title": "Iterative refinement for -norm regression.",
147
+ "author": "Deeksha Adil, Rasmus Kyng, Richard Peng, and Sushant Sachdeva.",
148
+ "venue": "In Proc. ACM-SIAM SODA\u201919, 2019.",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "2": {
154
+ "title": "Fast, provably convergent IRLS algorithm for -norm linear\nregression.",
155
+ "author": "Deeksha Adil, Richard Peng, and Sushant Sachdeva.",
156
+ "venue": "In Proc. NeurIPS\u201919, 2019.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "3": {
162
+ "title": "Private stochastic convex optimization: Optimal rates in \ngeometry.",
163
+ "author": "Hilal Asi, Vitaly Feldman, Tomer Koren, and Kunal Talwar.",
164
+ "venue": "CoRR, abs/2103.01516, 2021.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "4": {
170
+ "title": "Sharp uniform convexity and smoothness inequalities for trace norms.",
171
+ "author": "Keith Ball, Eric A Carlen, and Elliott H Lieb.",
172
+ "venue": "Inventiones mathematicae, 115(1):463\u2013482, 1994.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "5": {
178
+ "title": "First-Order Methods in Optimization.",
179
+ "author": "Amir Beck.",
180
+ "venue": "SIAM-Society for Industrial and Applied Mathematics, Philadelphia,\nPA, USA, 2017.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "6": {
186
+ "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse\nproblems.",
187
+ "author": "Amir Beck and Marc Teboulle.",
188
+ "venue": "SIAM journal on imaging sciences, 2(1):183\u2013202, 2009.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "7": {
194
+ "title": "Uniformly convex functions on Banach spaces.",
195
+ "author": "J. Borwein, A. J. Guirao, P. H\u00e1jek, and J. Vanderwerff.",
196
+ "venue": "Proceedings of the AMS, 137(3):1081\u20131091, 2009.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "8": {
202
+ "title": "An homotopy method for regression provably beyond\nself-concordance and in input-sparsity time.",
203
+ "author": "S\u00e9bastien Bubeck, Michael B Cohen, Yin Tat Lee, and Yuanzhi Li.",
204
+ "venue": "In Proc. ACM STOC\u201918, 2018.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "9": {
210
+ "title": "Metric Characterization of Random Variables and Random\nProcesses.",
211
+ "author": "V.V. Buldygin and Yu.V. Koza\u010denko.",
212
+ "venue": "Cross Cultural Communication. American Mathematical Society, 2000.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "10": {
218
+ "title": "Convergence analysis of a proximal-like minimization algorithm using\nbregman functions.",
219
+ "author": "Gong Chen and Marc Teboulle.",
220
+ "venue": "SIAM Journal on Optimization, 3(3):538\u2013543, 1993.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "11": {
226
+ "title": "Relative lipschitzness in extragradient methods and a direct recipe\nfor acceleration, 2020.",
227
+ "author": "Michael B. Cohen, Aaron Sidford, and Kevin Tian.",
228
+ "venue": null,
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "12": {
234
+ "title": "Regularized learning schemes in feature banach spaces.",
235
+ "author": "Patrick L Combettes, Saverio Salzo, and Silvia Villa.",
236
+ "venue": "Analysis and Applications, 16(01):1\u201354, 2018.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "13": {
242
+ "title": "Optimal affine-invariant smooth minimization algorithms.",
243
+ "author": "Alexandre d\u2019Aspremont, Crist\u00f3bal Guzm\u00e1n, and Martin Jaggi.",
244
+ "venue": "SIAM Journal on Optimization, 28(3):2384\u20132405, 2018.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "14": {
250
+ "title": "A stochastic smoothing algorithm for semidefinite programming, 2014.",
251
+ "author": "Alexandre d\u2019Aspremont and Noureddine El Karoui.",
252
+ "venue": null,
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "15": {
258
+ "title": "First-order methods of smooth convex optimization with inexact\noracle.",
259
+ "author": "Olivier Devolder, Fran\u00e7ois Glineur, and Yurii Nesterov.",
260
+ "venue": "Mathematical Programming, 146, 08 2013.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "16": {
266
+ "title": "Complementary composite minimization, small gradients in general\nnorms, and applications to regression problems.",
267
+ "author": "Jelena Diakonikolas and Crist\u00f3bal Guzm\u00e1n.",
268
+ "venue": "2021.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "17": {
274
+ "title": "Fast stochastic composite minimization and an accelerated frank-wolfe\nalgorithm under parallelization.",
275
+ "author": "Benjamin Dubois-Taine, Francis Bach, Quentin Berthet, and Adrien Taylor.",
276
+ "venue": "arXiv:2205.12751, 2022.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "18": {
282
+ "title": "A statistical view of some chemometrics regression tools.",
283
+ "author": "Ildiko E Frank and Jerome H Friedman.",
284
+ "venue": "Technometrics, 35(2):109\u2013135, 1993.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "19": {
290
+ "title": "Optimal stochastic approximation algorithms for strongly convex\nstochastic composite optimization i: A generic algorithmic framework.",
291
+ "author": "Saeed Ghadimi and Guanghui Lan.",
292
+ "venue": "SIAM Journal on Optimization, 22(4):1469\u20131492, 2012.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "20": {
298
+ "title": "Optimal stochastic approximation algorithms for strongly convex\nstochastic composite optimization, ii: Shrinking procedures and optimal\nalgorithms.",
299
+ "author": "Saeed Ghadimi and Guanghui Lan.",
300
+ "venue": "SIAM Journal on Optimization, 23(4):2061\u20132089, 2013.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "21": {
306
+ "title": "Train faster, generalize better: Stability of stochastic gradient\ndescent.",
307
+ "author": "Moritz Hardt, Benjamin Recht, and Yoram Singer.",
308
+ "venue": "CoRR, abs/1509.01240, 2015.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "22": {
314
+ "title": "Accelerated gradient methods for stochastic optimization and online\nlearning.",
315
+ "author": "Chonghai Hu, Weike Pan, and James Kwok.",
316
+ "venue": "In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and\nA. Culotta, editors, Advances in Neural Information Processing Systems,\nvolume 22. Curran Associates, Inc., 2009.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "23": {
322
+ "title": "Large deviations of vector-valued martingales in 2-smooth normed\nspaces.",
323
+ "author": "Anatoli Juditsky and Arkadii S. Nemirovski.",
324
+ "venue": "arXiv, 2008:0809.0813.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "24": {
330
+ "title": "Deterministic and stochastic primal-dual subgradient algorithms for\nuniformly convex minimization.",
331
+ "author": "Anatoli Juditsky and Yurii Nesterov.",
332
+ "venue": "Stochastic Systems, 4(1):44 \u2013 80, 2014.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "25": {
338
+ "title": "Sparsity in penalized empirical risk minimization.",
339
+ "author": "Vladimir Koltchinskii.",
340
+ "venue": "Annales de l\u2019Institut Henri Poincar\u00e9, Probabilit\u00e9s et\nStatistiques, 45(1):7 \u2013 57, 2009.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "26": {
346
+ "title": "An optimal method for stochastic composite optimization.",
347
+ "author": "Guanghui Lan.",
348
+ "venue": "Mathematical Programming, 133:1\u201333, 06 2012.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "27": {
354
+ "title": "First-order and Stochastic Optimization Methods for Machine\nLearning.",
355
+ "author": "Guanghui Lan.",
356
+ "venue": "Springer Series in the Data Sciences. Springer International\nPublishing, 2020.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "28": {
362
+ "title": "Foundations of Machine Learning.",
363
+ "author": "Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar.",
364
+ "venue": "Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA,\n2 edition, 2018.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "29": {
370
+ "title": "Efficient methods in convex programming, 1995.",
371
+ "author": "Arkadi Nemirovski.",
372
+ "venue": null,
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "30": {
378
+ "title": "Robust stochastic approximation approach to stochastic programming.",
379
+ "author": "Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and And Shapiro.",
380
+ "venue": "Society for Industrial and Applied Mathematics, 19:1574\u20131609,\n01 2009.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "31": {
386
+ "title": "Problem Complexity and Method Efficiency in Optimization.",
387
+ "author": "A.S. Nemirovski\u012d, D.B. Yudin, and E.R. Dawson.",
388
+ "venue": "A Wiley-Interscience publication. Wiley, 1983.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "32": {
394
+ "title": "A method for solving the convex programming problem with convergence\nrate .",
395
+ "author": "Yurii Nesterov.",
396
+ "venue": "Proceedings of the USSR Academy of Sciences, 269:543\u2013547,\n1983.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "33": {
402
+ "title": "Gradient methods for minimizing composite functions.",
403
+ "author": "Yurii Nesterov.",
404
+ "venue": "Mathematical Programming, 140(1):125\u2013161, 2013.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "34": {
410
+ "title": "Introductory Lectures on Convex Optimization: A Basic Course.",
411
+ "author": "Yurii Nesterov.",
412
+ "venue": "Springer Publishing Company, Incorporated, 1 edition, 2014.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "35": {
418
+ "title": "Universal gradient methods for convex optimization problems.",
419
+ "author": "Yurii Nesterov.",
420
+ "venue": "Mathematical Programming, 152, 05 2014.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "36": {
426
+ "title": "On first-order algorithms for /nuclear norm minimization.",
427
+ "author": "Yurii Nesterov and Arkadi Nemirovski.",
428
+ "venue": "Acta Numerica, 22:509\u2013575, 2013.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "37": {
434
+ "title": "A modern introduction to online learning.",
435
+ "author": "Francesco Orabona.",
436
+ "venue": "CoRR, abs/1912.13213, 2019.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "38": {
442
+ "title": "Guaranteed minimum-rank solutions of linear matrix equations via\nnuclear norm minimization.",
443
+ "author": "Benjamin Recht, Maryam Fazel, and Pablo A Parrilo.",
444
+ "venue": "SIAM review, 52(3):471\u2013501, 2010.",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "39": {
450
+ "title": "Understanding Machine Learning - From Theory to Algorithms.",
451
+ "author": "Shai Shalev-Shwartz and Shai Ben-David.",
452
+ "venue": "Cambridge University Press, 2014.",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "40": {
458
+ "title": "Optimization for Machine Learning.",
459
+ "author": "Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright.",
460
+ "venue": "The MIT Press, 2011.",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "41": {
466
+ "title": "On the stability of inverse problems.",
467
+ "author": "Andrey Nikolayevich Tikhonov.",
468
+ "venue": "In Dokl. Akad. Nauk SSSR, volume 39, pages 195\u2013198, 1943.",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "42": {
474
+ "title": "On uniformly convex functionals.",
475
+ "author": "AA Vladimirov, Yu E Nesterov, and Yu N Chekanov.",
476
+ "venue": "Vestnik Moskov. Univ. Ser. XV Vychisl. Mat. Kibernet, 3:12\u201323,\n1978.",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "43": {
482
+ "title": "Mirror descent strikes again: Optimal stochastic convex optimization\nunder infinite noise variance.",
483
+ "author": "Nuri Mert Vural, Lu Yu, Krishna Balasubramanian, Stanislav Volgushev, and\nMurat A Erdogdu.",
484
+ "venue": "In Po-Ling Loh and Maxim Raginsky, editors, Proceedings of\nThirty Fifth Conference on Learning Theory, volume 178 of Proceedings\nof Machine Learning Research, pages 65\u2013102. PMLR, 02\u201305 Jul 2022.",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "44": {
490
+ "title": "High-Dimensional Statistics: A Non-Asymptotic Viewpoint.",
491
+ "author": "M.J. Wainwright.",
492
+ "venue": "Cambridge Series in Statistical and Probabilistic Mathematics.\nCambridge University Press, 2019.",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "45": {
498
+ "title": "On norms in some class of exponential type orlicz spaces of random\nvariables.",
499
+ "author": "Krzysztof Zajkowski.",
500
+ "venue": "Positivity, 24(5):1231\u20131240, 2020.",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "46": {
506
+ "title": "On uniformly convex functions.",
507
+ "author": "C. Zalinescu.",
508
+ "venue": "J. Math. Anal. Appl., 95:344\u2013374, 1983.",
509
+ "url": null
510
+ }
511
+ }
512
+ ],
513
+ "url": "http://arxiv.org/html/2211.01758v2"
514
+ }
20240123/2211.04625v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2211.06598v3.json ADDED
@@ -0,0 +1,327 @@
1
+ {
2
+ "title": "Enhancing Resource Utilization of Non-terrestrial Networks Using Temporal Graph-based Deterministic Routing",
3
+ "abstract": "Deterministic routing has emerged as a promising technology for future non-terrestrial networks (NTNs), offering the potential to enhance service performance and optimize resource utilization. However, the dynamic nature of network topology and resources poses challenges in establishing deterministic routing. These challenges encompass the intricacy of jointly scheduling transmission links and cycles, as well as the difficulty of maintaining stable end-to-end (E2E) routing paths. To tackle these challenges, our work introduces an efficient temporal graph-based deterministic routing strategy. Initially, we utilize a time-expanded graph (TEG) to represent the heterogeneous resources of an NTN in a time-slotted manner. With TEG, we meticulously define each necessary constraint and formulate the deterministic routing problem. Subsequently, we transform this nonlinear problem equivalently into solvable integer linear programming (ILP), providing a robust yet time-consuming performance upper bound. To address the considered problem with reduced complexity, we extend TEG by introducing virtual nodes and edges. This extension facilitates a uniform representation of heterogeneous network resources and traffic transmission requirements. Consequently, we propose a polynomial-time complexity algorithm, enabling the dynamic selection of optimal transmission links and cycles on a hop-by-hop basis. Simulation results validate that the proposed algorithm yields significant performance gains in traffic acceptance, justifying its additional complexity compared to existing routing strategies.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Non-terrestrial networks (NTNs) have emerged as a promising solution for global high-speed Internet access, thanks to their extensive coverage and robust bandwidth capabilities [1 ###reference_1###]. The continuous progress in aerial and space technologies, coupled with reduced manufacturing and launch costs, has notably hastened the development of NTNs. This acceleration is exemplified by the rapid establishment of mega-constellations like Starlink, OneWeb, and Telesat, highlighting the growing significance of NTNs in the future connectivity landscape. The 3rd Generation Partnership Project (3GPP) is actively dedicated to evolving 5G systems to support NTNs. Since 2017, 3GPP has released a series of documents focusing on network architecture, system configuration, and radio access [2 ###reference_2###]. Concurrently, the Internet Engineering Task Force (IETF) network working group diligently analyzes the requirements of satellite constellations in the future Internet. Their analysis identifies efficient routing as a key enabler to enhance service performance and resource utilization within NTNs [3 ###reference_3###].\nNevertheless, designing efficient routing strategies for NTNs is challenging due to the dynamics of their network topologies and resources [4 ###reference_4###]. Over the years, researchers have explored and implemented diverse routing strategies tailored to specific NTNs. One widely adopted approach is the shortest path routing algorithm (SPR), designed to facilitate routing for pre-defined remote sensing transmission missions [5 ###reference_5###]. SPR models the network topology as a static graph over the mission duration, identifying the end-to-end (E2E) path with the minimum delay or hops. However, this approach lacks adaptability to changing network conditions and traffic demands. The snapshot graph-based routing algorithm (STR) extends SPR by employing a series of static snapshots to model the time-varying network topology and calculates E2E routing in each snapshot [6 ###reference_6###]. However, STR may not be able to determine feasible routing paths within a single snapshot when contacts or resources are scarce. Another routing strategy, the contact graph routing algorithm (CGR), incorporates the caching-and-forwarding capability of satellites in routing decisions, enabling multi-hop transmissions in disruption-tolerant scenarios [7 ###reference_7###]. However, CGR prioritizes establishing the earliest connected E2E routing, potentially compromising optimal delay performance. Furthermore, the aforementioned strategies base routing decisions on bandwidth requirements over a long time duration, without allowing for the precise specification of traffic transmission times. Consequently, micro-bursts occur frequently, leading to uncertain delays and congestion.\nDeterministic routing holds considerable promise within NTNs, offering the potential to enhance service performance and optimize resource utilization [8 ###reference_8###]. This technology facilitates precise scheduling of transmission links and cycles at each hop along the routing path, ensuring strict adherence to E2E delay and jitter requirements. Moreover, it enables dynamic allocation of network resources within each cycle on demand, thereby improving the overall resource utilization. 
Despite the commendable efforts of the Institute of Electrical and Electronics Engineers (IEEE) Time-Sensitive Networking (TSN) and IETF Deterministic Networking (DetNet) committees [9 ###reference_9###], the implementation of deterministic routing in NTNs remains challenging. This challenge stems from two primary factors: i) the high complexity of solving integer linear programming (ILP) problems for joint routing and scheduling, which falls short of real-time processing requirements, and ii) the dynamic nature of network topology and resources, which complicates the identification of stable E2E routing paths.\nIn response to these challenges, we introduce a temporal graph-based strategy to efficiently address the problem.\nInitially, we utilize a time-expanded graph (TEG) [10 ###reference_10###] to represent the heterogeneous resources of an NTN in a time-slotted manner, including contact topology, link capacity, node storage, link delay, and storage delay.\nWith TEG, we formulate the deterministic routing problem comprehensively, incorporating a set of crucial constraints. Subsequently, we transform this nonlinear problem equivalently into a solvable ILP format. This transformation involves the linearization of cross-cycle propagation and caching constraints arising from long link delays and potential storage delays, thereby providing a robust yet time-consuming performance upper bound.\nTo address the problem with reduced complexity, we construct an extended TEG (ETEG) to uniformly represent heterogeneous network resources and traffic transmission requirements. With ETEG, we propose a polynomial-time complexity algorithm for determining optimal transmission links and cycles on a hop-by-hop basis.\nFurthermore, we analyze the optimality and complexity of the proposed algorithm, followed by an implementation framework based on segment routing to facilitate its feasibility within large-scale NTNs. Simulation results demonstrate the superior performance of our proposal over SPR, STR, and CGR in terms of traffic acceptance. Additionally, it exhibits a significantly lower running time than the ILP-based strategy (referred to as ILPS)."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II System model and problem formulation",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A System model",
21
+ "text": "In Fig. 1(a), we analyze an NTN, comprising satellites denoted by the set . These satellites are interconnected through time-varying yet predictable transmission links. To enable precise delay control, the time window of each satellite is finely divided into consecutive cycles of equal duration , where . This division allows for treating the network topology as static within each cycle. Additionally, we consider a time-critical (TC) traffic demand, denoted as , characterized by a period of and a per-period size of . Assume that is injected into the NTN at time and needs to be delivered from a source satellite, , to a destination satellite, , within an upper bound of E2E delay, . Then, we select the planning horizon, , spanning from the start-cycle, , to the end-cycle, , where and fall within and , respectively111To enable deterministic transmission in all traffic periods, we establish deterministic routing for in the first period and evolve it through repetition with cycle offset or necessary revisions.. As illustrated in Fig. 1(c), we utilize a TEG, denoted as , to model heterogeneous network resources in a time-slotted manner. Specifically, includes:\n###figure_1### A set of nodes, denoted as , where each signifies a satellite within cycle .\nA set of edges, denoted as , encompasses both transmission edges and storage edges . Herein, denotes the transmission links between satellites and in . Additionally, depicts the capability of each satellite to cache data across adjacent cycles (e.g., from to ).\nA capacity set, denoted as , comprises two distinct subsets: a link capacity subset, , and a node storage subset, . Herein, signifies the maximum amount of data that can be transmitted on each transmission edge , measured in megabytes (Mb). Furthermore, represents the on-board storage resources of each satellite during any cycle , measured in Mb.\nA delay set, denoted as , encompasses two distinct subsets: a link delay subset, , and and a storage delay subset, . Specifically, represents the propagation delay on each transmission edge , measured in milliseconds (ms). Additionally, depicts the cross-cycle caching delay (e.g., from to ) at each satellite , measured in ms. Without loss of generality, we can set for any ."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Constraint establishment",
27
+ "text": "The deterministic routing problem focuses on effectively scheduling E2E transmission for the TC traffic demand . This entails a judicious selection of satellites and links, along with the identification of suitable cycles for the transmission and caching of at each satellite. To facilitate a comprehensive problem formulation, we define binary-valued variables for all edges in , denoted as , accounting for two distinct cases: for a transmission edge , indicates that is transmitted from satellite to satellite within the cycle ; otherwise, . Concerning a storage edge , if is cached by satellite from cycle to cycle , then ; otherwise, ."
28
+ },
29
+ {
30
+ "section_id": "2.2.1",
31
+ "parent_section_id": "2.2",
32
+ "section_name": "II-B1 E2E transmission constraint",
33
+ "text": "must originate at the source satellite, , and be delivered to the destination satellite, , within the planning horizon, , i.e.,"
34
+ },
35
+ {
36
+ "section_id": "2.2.2",
37
+ "parent_section_id": "2.2",
38
+ "section_name": "II-B2 Lossless forwarding constraint",
39
+ "text": "For any given satellite (excluding and ), it must forward the received within , expressed as:"
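The constraint formulas did not survive extraction from the source. For orientation, the standard flow-conservation form that such a lossless-forwarding constraint takes on a time-expanded graph, written with the binary edge variables defined above, is the following reconstruction (not necessarily the paper's exact indexing):

```latex
% Reconstructed sketch of lossless forwarding (flow conservation) at each
% TEG node v = (s, t) other than the source and destination copies:
\sum_{e \,\in\, \delta^{-}(v)} x_e \;=\; \sum_{e \,\in\, \delta^{+}(v)} x_e,
\qquad \forall\, v \in V \setminus \{ v_{\mathrm{src}}, v_{\mathrm{dst}} \},
```

where \delta^{-}(v) and \delta^{+}(v) collect the incoming and outgoing transmission/storage edges of v.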
40
+ },
41
+ {
42
+ "section_id": "2.2.3",
43
+ "parent_section_id": "2.2",
44
+ "section_name": "II-B3 Capacity constraint",
45
+ "text": "For any transmission edge selected to transmit , its link capacity should be no less than the traffic size , expressed as:"
46
+ },
47
+ {
48
+ "section_id": "2.2.4",
49
+ "parent_section_id": "2.2",
50
+ "section_name": "II-B4 Storage constraint",
51
+ "text": "For any storage edge to cache , its node storage should be no less than , formulated as:"
52
+ },
53
+ {
54
+ "section_id": "2.2.5",
55
+ "parent_section_id": "2.2",
56
+ "section_name": "II-B5 Cross-cycle propagation and caching constraints",
57
+ "text": "When is transmitted from satellite to satellite , the arrival time of at should occur later than the transmission cycle of but no later than the transmission cycle of , i.e.,\nHere,\nrepresents the transmission cycle of (footnote: if , it indicates that does not transmit ),\nindicates whether is transmitted from to ,\nrepresents the total propagation delay from to ,\nrepresents the total caching delay to , together with the caching delay within ,\nrepresents the propagation delay from and within the transmission cycle of , and represents a very large positive constant."
58
+ },
59
+ {
60
+ "section_id": "2.2.6",
61
+ "parent_section_id": "2.2",
62
+ "section_name": "II-B6 Transmission timing constraint",
63
+ "text": "The time when satellite sends to satellite should fall within the transmission cycle of , i.e.,"
64
+ },
65
+ {
66
+ "section_id": "2.2.7",
67
+ "parent_section_id": "2.2",
68
+ "section_name": "II-B7 Caching timing constraint",
69
+ "text": "When satellite needs to cache received from satellite , the feasible cycles for caching should be no earlier than the cycle within which arrives at but earlier than the transmission cycle of , expressed as:\nwhere represents a very small positive constant."
70
+ },
71
+ {
72
+ "section_id": "2.3",
73
+ "parent_section_id": "2",
74
+ "section_name": "II-C Problem formulation",
75
+ "text": "The objective of is to minimize the E2E delay, encompassing both the total propagation delay and caching delay during the delivery of from to . Assuming the objective value is less than , the solution to provides efficient transmission scheduling for with deterministic guarantees. However, due to the presence of the term in constraints (5a), (6), and (7), where and involve variable-dependent summation and undergoes multivariable multiplication with both, these constraints become nonlinear. Consequently, is unsolvable using existing ILP solvers. To address this challenge, we linearize these constraints by introducing auxiliary binary-valued variables along with a set of linear constraints.\nInitially, we transform the variable-dependent summation in and into a summation term for the product of independent variables. This is achieved by introducing an auxiliary binary-valued variable, , and the following linear constraints:\nand\nHere, . Using , and become\nand\nSubsequently, we deal with the multivariable multiplication in and . For simplicity, we establish a general transformation paradigm for terms of the form , where . Specifically, we introduce an auxiliary binary-valued variable, denoted as , for substitution, adhering to the following linear constraints:\nBased on this paradigm, we can respectively transform and into\nand\nwith the introduced variables and satisfying\nSubstituting (14) and (15) into (5a), (6), and (7), we obtain the following linear constraints:\nand\nUltimately, can be reformulated as an ILP problem:\nwhere the set of decision variables is defined as . Notably, ILP solvers [11] can effectively handle , providing a robust performance upper bound for . Nevertheless, the computations involved in these solvers prove excessively time-consuming, failing to meet real-time processing requirements. Therefore, we introduce a graph-based method to address , aiming to reduce the running time while ensuring optimality."
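The auxiliary constraints were likewise lost in extraction; the standard linearization paradigm for a product of binary variables, which matches the description above, reads as follows (our reconstruction):

```latex
% Replacing a product z = x_1 x_2 \cdots x_k of binary variables by a new
% binary variable z with linear constraints:
z \le x_i \quad (i = 1, \dots, k), \qquad
z \ge \sum_{i=1}^{k} x_i - (k - 1), \qquad z \in \{0, 1\}.
```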
76
+ },
77
+ {
78
+ "section_id": "3",
79
+ "parent_section_id": null,
80
+ "section_name": "III Temporal graph-based deterministic routing",
81
+ "text": ""
82
+ },
83
+ {
84
+ "section_id": "3.1",
85
+ "parent_section_id": "3",
86
+ "section_name": "III-A Extended time-expanded graph model",
87
+ "text": "To effectively address , we enhance the original graph to form an ETEG, denoted as , by introducing virtual nodes and edges to establish a uniform representation of heterogeneous network resources and traffic transmission requirements. The construction of is outlined as follows, as illustrated in Fig. 2:\n[Figure 2] 1) To capture time-slotted network resources, we initialize the node set as , the edge set as , the capacity set as , and the delay set as .\n2) To signify the earliest cycle within which departs from the source satellite , we introduce a virtual source into and a virtual transmission edge into .\n3) To represent potential cycles within which is transmitted to the destination satellite , we introduce a virtual destination into and a set of virtual aggregation edges into .\nBy designating as the unique destination, we avoid separately searching for routes to the destination satellite within each cycle (i.e., , where ), thus significantly reducing the computational complexity of deterministic routing.\n4) To indicate the capacity requirement of , we set the capacity metrics of all introduced virtual transmission edges and virtual aggregation edges to . Since these edges lack physical counterparts, we assign them a delay metric of , thus not affecting the overall E2E delay."
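Continuing the earlier TEG sketch, the virtual-node augmentation of this subsection could look as follows. Again a sketch: the names `v_src`/`v_dst` and the zero-delay convention for virtual edges mirror the description above, and the capacity tag is set to the demand size so that virtual edges always pass a feasibility check.

```python
def extend_to_eteg(G, src, dst, t_start, t_end, demand_size):
    """Add a virtual source and destination so that a single path query covers
    every candidate delivery cycle of the demand (sketch of Sec. III-A)."""
    vs, vd = "v_src", "v_dst"
    # Virtual transmission edge: the earliest cycle the demand can leave src.
    G.add_edge(vs, (src, t_start), kind="tx", cap=demand_size, delay=0.0)
    # Virtual aggregation edges: every cycle in which dst may receive it.
    for t in range(t_start, t_end + 1):
        G.add_edge((dst, t), vd, kind="tx", cap=demand_size, delay=0.0)
    return vs, vd
```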
88
+ },
89
+ {
90
+ "section_id": "3.2",
91
+ "parent_section_id": "3",
92
+ "section_name": "III-B ETEG-based deterministic routing algorithm",
93
+ "text": "Based on the ETEG, we identify that the deterministic routing problem is equivalent to a path-finding problem, rather than directly solving the ILP in . Using this insight, we propose an ETEG-based deterministic routing algorithm with low complexity (detailed in Algorithm 1). The proposed algorithm jointly utilizes both link capacity and node storage, facilitating cross-cycle propagation and caching of . Consequently, it dynamically selects optimal links and cycles on a hop-by-hop basis to establish a time-featured path (Definition 1) that minimizes the E2E delay while meeting resource requirements.\nA time-featured path, denoted as , can be represented by a node sequence in the ETEG, adhering to the condition: if , then and ; otherwise and .\nIn Algorithm 1, we execute the path-finding process based on . To facilitate this process, we define two essential parameters at each node : the pre-node , indicating the previous hop of in the time-featured path , and the node delay , representing the propagation delay and caching delay along from to . Additionally, we introduce a priority queue, denoted as , for maintaining nodes awaiting the determination of their node delay. During initialization (in step 2), we set , for any node , , for any node , and . In each iteration (from steps 3 to 15), we extract with the minimum node delay from . Subsequently, we update each node adjacent to , provided that the resources (i.e., link capacity or node storage) are sufficient and the node delay of can be reduced via a relay by . Due to cross-cycle propagation and caching constraints, the updated node might be with . Consequently, we have and designate . The above iteration continues until is extracted from . Finally, if holds, we can obtain by backtracking from to (from steps 17 to 21); otherwise, no feasible exists.\nFig. 3 illustrates an application. Initiated from , the proposed algorithm updates the node delay of its sole neighbor, , to and sets . Next, we pop from and traverse all its neighbors, yielding the following: , due to insufficient node storage at ; and , owing to cross-cycle propagation from to and , respectively. In iteration 3, is extracted from , and is updated to through cross-cycle caching from to . The proposed algorithm then selects and updates its sole neighbor with . Subsequent steps involve updating to through cross-cycle propagation from to , which also serves as the ultimate node delay at , i.e., . Through backtracking, a feasible time-featured path is obtained.\n[Figure 3]"
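Algorithm 1 itself is not reproduced in this parse. The sketch below captures the label-setting (Dijkstra-style) search it describes, with a per-edge resource check during relaxation. It deliberately folds the cross-cycle bookkeeping (an update landing in a later cycle) into the edge delays, so it illustrates the control flow rather than faithfully reimplementing the paper's procedure; all names are ours.

```python
import heapq
from itertools import count

def eteg_route(G, vs, vd, demand_size):
    """Label-setting search on the ETEG: returns (delay, path) or (inf, None).
    An edge is relaxed only if its capacity/storage can carry the demand."""
    delay = {v: float("inf") for v in G.nodes}
    pre = {v: None for v in G.nodes}
    delay[vs] = 0.0
    tie = count()                           # tiebreaker so nodes are never compared
    heap, done = [(0.0, next(tie), vs)], set()
    while heap:
        d, _, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        if u == vd:                         # destination delay is now final
            break
        for v in G.successors(u):
            e = G.edges[u, v]
            if e["cap"] < demand_size:      # insufficient link/storage resource
                continue
            if d + e["delay"] < delay[v]:   # standard relaxation step
                delay[v] = d + e["delay"]
                pre[v] = u
                heapq.heappush(heap, (delay[v], next(tie), v))
    if delay[vd] == float("inf"):
        return float("inf"), None
    path, v = [], vd                        # backtrack to recover the path
    while v is not None:
        path.append(v)
        v = pre[v]
    return delay[vd], path[::-1]
```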
94
+ },
95
+ {
96
+ "section_id": "3.3",
97
+ "parent_section_id": "3",
98
+ "section_name": "III-C Complexity and optimality analyses",
99
+ "text": "The ETEG-based deterministic routing algorithm is capable of calculating a time-featured path with minimum E2E delay.\nFor Algorithm 1, we demonstrate that each node extracted from has been determined with the minimum node delay. This assertion remains valid for with . Moving on to the -th extracted node, denoted as , with a node delay , we evaluate whether can be further reduced through relaying by any node in , such as . If so, either holds for cross-cycle propagation, or holds for cross-cycle caching. However, due to step 4, holds, along with and , thereby contradicting the aforementioned inequalities. Conversely, is minimized when is extracted from , also holding for . The proof is completed.\n\u220e\nThe time complexity of the ETEG-based deterministic routing algorithm is , where and denote the number of nodes and edges in the input , respectively.\nFor Algorithm 1, assume that the and are stored in adjacency lists and binary heaps, respectively. The initialization in step 2 takes time. During each iteration from steps 3 to 15, it requires time to extract with the minimum node delay from and time to update . Furthermore, updating all nodes adjacent to takes at most , where represents the out-degree of in . At worst, we must traverse all nodes in once before extracting from . Therefore, the time complexity reaches . Additionally, the backtracking process takes at most . Thus, the total time complexity is , as for a connected graph . The proof is completed.\n\u220e"
100
+ },
101
+ {
102
+ "section_id": "3.4",
103
+ "parent_section_id": "3",
104
+ "section_name": "III-D Algorithm implementation",
105
+ "text": "Following [12], we propose an implementation framework based on segment routing [13] for our algorithm. Within the NTN scenario in Fig. 1(a), we present the key aspects of the framework as follows:"
106
+ },
107
+ {
108
+ "section_id": "3.4.1",
109
+ "parent_section_id": "3.4",
110
+ "section_name": "III-D1 Parameter maintenance",
111
+ "text": "The network operations control center (NOCC) continuously acquires link status information from satellites through low propagation delay satellite-to-ground links. It extracts essential network parameters for deterministic routing decisions, including link capacity, link delay, and node storage."
112
+ },
113
+ {
114
+ "section_id": "3.4.2",
115
+ "parent_section_id": "3.4",
116
+ "section_name": "III-D2 Routing decision",
117
+ "text": "Leveraging the maintained network parameters and traffic information from the terrestrial sending user, the NOCC constructs the ETEG and determines optimal deterministic routing for the TC traffic demand. Additionally, the NOCC not only reserves the required resources for the demand by updating network parameters but also configures the deterministic forwarding table sent to the specified source satellite."
118
+ },
119
+ {
120
+ "section_id": "3.4.3",
121
+ "parent_section_id": "3.4",
122
+ "section_name": "III-D3 Routing deployment",
123
+ "text": "Following the deterministic forwarding table, the source satellite modifies the TC traffic demand packets injected by the sending user. This modification involves encapsulating per-hop transmission link and cycle information into the packets\u2019 headers, directing them across the network until they reach the destination satellite. Then, the packets are decapsulated and downlinked to the terrestrial receiving user."
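As a purely illustrative reading of this step (not the paper's packet format), a deterministic forwarding entry derived from a computed time-featured path could carry per-hop (next satellite, transmission cycle) segments; consecutive path nodes on the same satellite collapse into caching and emit no segment. The sketch assumes the virtual endpoints have already been stripped from the path.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    next_hop: int   # satellite id of the next hop
    tx_cycle: int   # cycle in which the packet must be sent on that hop

def path_to_segments(path):
    """Turn a time-featured path of (satellite, cycle) nodes into a segment
    list: a satellite change is a transmission hop, a cycle change alone
    is on-board caching (sketch)."""
    segs = []
    for (s1, t1), (s2, t2) in zip(path, path[1:]):
        if s1 != s2:
            segs.append(Segment(next_hop=s2, tx_cycle=t1))
    return segs
```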
124
+ },
125
+ {
126
+ "section_id": "3.4.4",
127
+ "parent_section_id": "3.4",
128
+ "section_name": "III-D4 Routing evolution",
129
+ "text": "The NOCC consistently evaluates the feasibility of the preceding deterministic routing for upcoming traffic periods. If feasible, the source satellite simply introduces a period-sized cycle offset when deploying the deterministic forwarding table; otherwise, the NOCC re-executes the routing decision in 2) and the routing deployment in 3).\nNotably, a cross-domain decision architecture [14] can alternatively be deployed when a single NOCC is insufficient to handle all TC traffic demands across the NTN, or when long propagation delays emerge as a primary concern."
130
+ },
131
+ {
132
+ "section_id": "4",
133
+ "parent_section_id": null,
134
+ "section_name": "IV Simulations",
135
+ "text": ""
136
+ },
137
+ {
138
+ "section_id": "4.1",
139
+ "parent_section_id": "4",
140
+ "section_name": "IV-A Simulation setup",
141
+ "text": "We conduct simulations on a partial Starlink constellation comprising 168 satellites selected from S1 [15]. These satellites are distributed across 12 orbits, each accommodating 14 satellites, positioned at a height of 550 km with an inclination of . Employing the Satellite Tool Kit (STK) simulator, we generate a time-varying NTN scenario with the parameters in Table I. We consider TC traffic demands that continuously enter the NTN within the initial 120 seconds (s), following a Poisson process. The source-destination satellite pairs of these demands are randomly specified across the constellation. Each demand, representing applications like high-quality video telephony [16], operates with a period of 33.33 ms (equivalent to a video frame rate of 30 frames per second) and has an active duration varying from 60 to 180 s. Furthermore, the per-cycle size of each demand follows a uniform distribution between 0.05 Mb and 0.6 Mb, with an E2E delay upper bound set at 75 ms.\nSimulations are executed on a Hewlett-Packard Z620 tower workstation (Intel Core i9-13900H CPU, 32 GB RAM, Windows 11 x64) in a C++ environment.\n[Table I] Throughout the entire 300-second horizon, we evaluate the performance of our proposed algorithm (referred to as DetR) and four benchmark strategies: SPR, STR, CGR, and ILPS, using the following metrics:\nTraffic acceptance : the total size of demands with E2E deterministic transmission guarantees, providing insight into network resource utilization.\nAverage running time : the average time to process a single demand.\nAverage E2E path delay : the average delay of routing paths associated with demands possessing E2E deterministic transmission guarantees."
142
+ },
143
+ {
144
+ "section_id": "4.2",
145
+ "parent_section_id": "4",
146
+ "section_name": "IV-B Simulation results",
147
+ "text": "[Figure 4] We first evaluate the traffic acceptance, , by varying the arrival rate from 1 to 100 demands per second (demands/s), as shown in Fig. 4. As expected, values for all algorithms increase with the arrival rate, but the rate of increase gradually slows. This happens because the growing number of demands occupies most of the available network resources. Notably, DetR is comparable to the optimal ILPS and surpasses CGR, STR, and SPR. In particular, when the arrival rate reaches 100 demands/s, is improved by more than 50%. This significant enhancement is attributed to the ability of DetR and ILPS to jointly utilize link capacity and node storage in different cycles, enabling conflict-free routing paths for a larger number of demands. In contrast, SPR records the lowest performance due to its neglect of time-varying network characteristics, focusing solely on routing paths within a static graph. Consequently, these paths may encounter interruptions and congestion. Notably, STR employs a series of time-evolving snapshots to model the NTN, while CGR introduces connectivity between adjacent snapshots, expanding the solution space compared to SPR. However, since their routing paths are determined based on average bandwidth requirements, micro-bursts occur frequently among demands, limiting traffic acceptance.\n[Figure 5] Fig. 5 illustrates the average running time, , under varying arrival rates of demands. Notably, values for the four graph-based algorithms are significantly lower than that of ILPS, primarily because they do not traverse the entire solution space for routing decisions. DetR is the slowest among the four, with the gap not exceeding 80 microseconds (\u00b5s). The increased complexity arises because DetR determines optimal transmission links and cycles for demands within a time-expanded solution space. Together with Fig. 4, it becomes evident that the commendable enhancement achieved by DetR in traffic acceptance justifies its increased complexity compared to SPR, STR, and CGR. Furthermore, DetR\u2019s average running time exhibits a gradual increase with rising arrival rates since DetR facilitates deterministic routing evolution by introducing cycle offsets or performing re-execution. These actions become more frequent as demands increase, thus increasing the running time.\n[Figure 6] We also evaluate the average E2E path delay, , for both DetR and ILPS. As shown in Fig. 6, the average E2E path delay of DetR aligns with that of ILPS, gradually increasing as the arrival rate of demands rises. This trend is expected, as network resources become limited under heavy demand loads. To accommodate more demands while maintaining conflict-free deterministic routing paths, essential cross-cycle propagation and caching are included in these paths. Although the delay increases, it remains below the upper bound of 75 ms. Notably, the average E2E path delay of CGR, STR, and SPR is not presented, since their traffic acceptance is far lower than that of DetR, enabling them to find routing paths with lower delay under lightly loaded network conditions. Consequently, direct numerical comparisons among all algorithms lack meaningful insight."
148
+ },
149
+ {
150
+ "section_id": "5",
151
+ "parent_section_id": null,
152
+ "section_name": "Conclusion",
153
+ "text": "This study focuses on addressing the deterministic routing problem within NTNs. Leveraging the TEG, we meticulously formulated and equivalently transformed this intricate problem into a solvable ILP problem, providing a robust yet time-consuming performance upper bound. To enhance time efficiency, we introduced an ETEG-based deterministic routing algorithm with polynomial time complexity. This algorithm enables the joint utilization of link capacity and node resources, facilitating cross-cycle propagation and caching of the TC traffic demands. Consequently, it can determine optimal transmission links and cycles on a hop-by-hop basis. Simulation results demonstrated that our proposal outperforms SPR, STR, and CGR in terms of traffic acceptance, thereby justifying its additional complexity. Furthermore, it exhibits significantly reduced running time compared to ILPS."
154
+ }
155
+ ],
156
+ "appendix": [],
157
+ "tables": {
158
+ "1": {
159
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:70%;\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Network Parameters</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S4.T1.4\">\n<tr class=\"ltx_tr\" id=\"S4.T1.4.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.4.5.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.5.1.1\" style=\"font-size:70%;\">\u00a0\u00a0Cycle duration (ms)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.5.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.5.2.1\" style=\"font-size:70%;\">Link capacity (Mb)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.5.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.5.3.1\" style=\"font-size:70%;\">Node storage (Mb)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.4.5.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\"><span class=\"ltx_text\" id=\"S4.T1.4.5.4.1\" style=\"font-size:70%;\">Link delay (ms)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.1.1.1.1\" style=\"font-size:70%;\">\u00a0\u00a0</span><span class=\"ltx_text\" id=\"S4.T1.1.1.1.2\" style=\"font-size:70%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.2.2.2\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.2.2.2.1\" style=\"font-size:70%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.3.3.3\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.3.3.3.1\" style=\"font-size:70%;\"></span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S4.T1.4.4.4\" style=\"padding-top:0.7pt;padding-bottom:0.7pt;\">\n<span class=\"ltx_text\" id=\"S4.T1.4.4.4.1\" style=\"font-size:70%;\"></span>\n</td>\n</tr>\n</table>\n</figure>",
160
+ "capture": "TABLE I: Network Parameters"
161
+ }
162
+ },
163
+ "image_paths": {
164
+ "1": {
165
+ "figure_path": "2211.06598v3_figure_1.png",
166
+ "caption": "Figure 1: Modeling a typical NTN using a TEG.",
167
+ "url": "http://arxiv.org/html/2211.06598v3/x1.png"
168
+ },
169
+ "2": {
170
+ "figure_path": "2211.06598v3_figure_2.png",
171
+ "caption": "Figure 2: An ETEG model.",
172
+ "url": "http://arxiv.org/html/2211.06598v3/x2.png"
173
+ },
174
+ "3": {
175
+ "figure_path": "2211.06598v3_figure_3.png",
176
+ "caption": "Figure 3: An application of the ETEG-based deterministic routing algorithm.",
177
+ "url": "http://arxiv.org/html/2211.06598v3/x3.png"
178
+ },
179
+ "4": {
180
+ "figure_path": "2211.06598v3_figure_4.png",
181
+ "caption": "Figure 4: Evaluation of traffic acceptance.",
182
+ "url": "http://arxiv.org/html/2211.06598v3/x4.png"
183
+ },
184
+ "5": {
185
+ "figure_path": "2211.06598v3_figure_5.png",
186
+ "caption": "Figure 5: Evaluation of average running time.",
187
+ "url": "http://arxiv.org/html/2211.06598v3/x5.png"
188
+ },
189
+ "6": {
190
+ "figure_path": "2211.06598v3_figure_6.png",
191
+ "caption": "Figure 6: Evaluation of average E2E path delay.",
192
+ "url": "http://arxiv.org/html/2211.06598v3/x6.png"
193
+ }
194
+ },
195
+ "validation": true,
196
+ "references": [
197
+ {
198
+ "1": {
199
+ "title": "\u201cNon-terrestrial Networks in the 6G Era: Challenges and Opportunities,\u201d",
200
+ "author": "M. Giordani et al.,",
201
+ "venue": "IEEE Netw., vol. 35, no. 2, pp. 244\u2013251, Apr. 2021.",
202
+ "url": null
203
+ }
204
+ },
205
+ {
206
+ "2": {
207
+ "title": "\u201c5G from Space: An Overview of 3GPP Non-terrestrial Networks,\u201d",
208
+ "author": "X. Lin et al.,",
209
+ "venue": "IEEE Commun. Stds. Mag., vol. 5, no. 4, pp. 147\u2013153, 2021.",
210
+ "url": null
211
+ }
212
+ },
213
+ {
214
+ "3": {
215
+ "title": "\u201cProblems and Requirements of Satellite Constellation for Internet,\u201d",
216
+ "author": "L. Han et al.,",
217
+ "venue": "Internet Engineering Task Force, Internet-Draft draft-lhan-problems-requirements-satellite-net-03, Jul. 2022.",
218
+ "url": null
219
+ }
220
+ },
221
+ {
222
+ "4": {
223
+ "title": "\u201cEnhancing Earth Observation Throughput Using Inter-satellite Communication,\u201d",
224
+ "author": "P. Wang et al.,",
225
+ "venue": "IEEE Trans. Wireless Commun., vol. 21, no. 10, pp. 7990\u20138006, 2022.",
226
+ "url": null
227
+ }
228
+ },
229
+ {
230
+ "5": {
231
+ "title": "\u201cOPSPF: Orbit Prediction Shortest Path First Routing for Resilient LEO Satellite Networks,\u201d",
232
+ "author": "T. Pan et al.,",
233
+ "venue": "In Proc. IEEE Int. Conf. Commun., 2019, pp. 1\u20136.",
234
+ "url": null
235
+ }
236
+ },
237
+ {
238
+ "6": {
239
+ "title": "\u201cA Dynamic Routing Concept for ATM-based Satellite Personal Communication Networks,\u201d",
240
+ "author": "M. Werner,",
241
+ "venue": "IEEE J. Sel. Areas Commun., vol. 15, no. 8, pp. 1636\u20131648, Oct. 1997.",
242
+ "url": null
243
+ }
244
+ },
245
+ {
246
+ "7": {
247
+ "title": "\u201cContact Graph Routing in NTN Space Networks: Overview, Enhancements and Performance,\u201d",
248
+ "author": "G. Araniti et al.,",
249
+ "venue": "IEEE Commun. Mag., vol. 53, no. 3, pp. 38\u201346, Mar. 2015.",
250
+ "url": null
251
+ }
252
+ },
253
+ {
254
+ "8": {
255
+ "title": "\u201cTowards Large-scale Deterministic IP Networks,\u201d",
256
+ "author": "B. Liu et al.,",
257
+ "venue": "In IFIP Networking Conf., Jun. 2021, pp. 1\u20139.",
258
+ "url": null
259
+ }
260
+ },
261
+ {
262
+ "9": {
263
+ "title": "\u201cRobustness and Reliability Provided by Deterministic Packet Networks (TSN and DetNet),\u201d",
264
+ "author": "B. Varga et al.,",
265
+ "venue": "IEEE Trans. Netw. Service Manag., early access, doi: 10.1109/TNSM.2023.3284590.",
266
+ "url": null
267
+ }
268
+ },
269
+ {
270
+ "10": {
271
+ "title": "\u201cTime-expanded Graph-based Energy-efficient Delay-bounded Multicast over Satellite Networks,\u201d",
272
+ "author": "K. Shi et al.,",
273
+ "venue": "IEEE Trans. Veh. Technol., vol. 69, no. 9, pp. 10380\u201310384, Apr. 2020.",
274
+ "url": null
275
+ }
276
+ },
277
+ {
278
+ "11": {
279
+ "title": "\u201cGurobi Optimizer Reference Manual,\u201d",
280
+ "author": "I. Gurobi Optimization,",
281
+ "venue": "2016. [Online]. Available: http://www.gurobi.com.",
282
+ "url": null
283
+ }
284
+ },
285
+ {
286
+ "12": {
287
+ "title": "\u201cSoftware-defined Multicast Using Segment Routing in LEO Satellite Networks,\u201d",
288
+ "author": "M. Hu et al.,",
289
+ "venue": "IEEE Trans. Mob. Comput., vol. 23, no. 1, pp. 835\u2013849, Jan. 2024.",
290
+ "url": null
291
+ }
292
+ },
293
+ {
294
+ "13": {
295
+ "title": "\u201cSegment Routing in Software-defined Networks: A Survey,\u201d",
296
+ "author": "Z. N. Abdullah et al.,",
297
+ "venue": "IEEE Commun. Surv. Tutor., vol. 21, no. 1, pp. 464\u2013486, Firstquarter 2019.",
298
+ "url": null
299
+ }
300
+ },
301
+ {
302
+ "14": {
303
+ "title": "\u201cA Cross-domain SDN Architecture for Multi-layered Space-terrestrial Integrated Networks,\u201d",
304
+ "author": "Y. Shi et al.,",
305
+ "venue": "IEEE Netw., vol. 33, no. 1, pp. 29\u201335, 2019.",
306
+ "url": null
307
+ }
308
+ },
309
+ {
310
+ "15": {
311
+ "title": "\u201cLaser Intersatellite Links in a Starlink Constellation: A Classification and Analysis,\u201d",
312
+ "author": "A. U. Chaudhry et al.,",
313
+ "venue": "IEEE Veh. Technol. Mag., vol. 16, no. 2, pp. 48\u201356, Jun. 2021.",
314
+ "url": null
315
+ }
316
+ },
317
+ {
318
+ "16": {
319
+ "title": "\u201cOnRL: Improving Mobile Video Telephony via Online Reinforcement Learning,\u201d",
320
+ "author": "H. Zhang et al.,",
321
+ "venue": "In Proc. 26th Annu. Int. Conf. Mobile Comput. Netw., Sep. 2020, pp. 1\u201314.",
322
+ "url": null
323
+ }
324
+ }
325
+ ],
326
+ "url": "http://arxiv.org/html/2211.06598v3"
327
+ }
20240123/2211.08262v4.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "title": "A mixed-categorical correlation kernel for Gaussian process",
3
+ "abstract": "Recently, there has been a growing interest in mixed-categorical meta-models based on Gaussian process (GP) surrogates. In this setting, several existing approaches use different strategies, either by using continuous kernels (e.g., continuous relaxation and Gower distance based GP) or by using a direct estimation of the correlation matrix.\nIn this paper, we present a kernel-based approach that extends continuous exponential kernels to handle mixed-categorical variables. The proposed kernel leads to a new GP surrogate that generalizes both the continuous relaxation and the Gower distance based GP models. We demonstrate, on both analytical and engineering problems, that our proposed GP model gives a higher likelihood and a smaller residual error than the other kernel-based state-of-the-art models. Our method is available in the open-source software SMT.",
4
+ "sections": [],
5
+ "appendix": [],
6
+ "tables": {},
7
+ "image_paths": {},
8
+ "validation": true,
9
+ "references": [],
10
+ "url": "http://arxiv.org/html/2211.08262v4"
11
+ }
20240123/2212.13069v3.json ADDED
@@ -0,0 +1,819 @@
1
+ {
2
+ "title": "Homophily modulates double descent generalization in graph convolution networks",
3
+ "abstract": "Graph neural networks (GNNs) excel in modeling relational data such as biological, social, and transportation networks, but the underpinnings of their success are not well understood. Traditional complexity measures from statistical learning theory fail to account for observed phenomena like the double descent or the impact of relational semantics on generalization error. Motivated by experimental observations of \u201ctransductive\u201d double descent in key networks and datasets, we use analytical tools from statistical physics and random matrix theory to precisely characterize generalization in simple graph convolution networks on the contextual stochastic block model. Our results illuminate the nuances of learning on homophilic versus heterophilic data and predict double descent whose existence in GNNs has been questioned by recent work. We show how risk is shaped by the interplay between the graph noise, feature noise, and the number of training labels. Our findings apply beyond stylized models, capturing qualitative trends in real-world GNNs and datasets. As a case in point, we use our analytic insights to improve performance of state-of-the-art graph convolution networks on heterophilic datasets.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Motivation: empirical results",
9
+ "text": "Given an -vertex graph with an adjacency matrix and features , a node classification GNN is a function insensitive to vertex ordering: for any node permutation , . We are interested in the behavior of train and test risk,\nwith and a loss metric such as the mean-squared error (MSE) or the cross-entropy. The optimal network parameters are obtained by minimizing the regularized loss\nwhere is a regularizer."
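As a concrete reading of the risks in (1)-(2) for the transductive setting, here is a minimal numpy sketch with an MSE loss; the boolean mask splitting labeled from held-out nodes is our own device, not notation from the paper.

```python
import numpy as np

def transductive_risks(y_hat, y, train_mask):
    """MSE train/test risks over the labeled and held-out nodes (sketch).
    y_hat, y: length-n arrays of predictions/labels; train_mask: boolean mask."""
    test_mask = ~train_mask
    r_train = float(np.mean((y_hat[train_mask] - y[train_mask]) ** 2))
    r_test = float(np.mean((y_hat[test_mask] - y[test_mask]) ** 2))
    return r_train, r_test
```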
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "A precise analysis of node classification on CSBM with a simple graph convolution network",
15
+ "text": "Motivated by the above discussions, we turn to a theoretical study of the performance of GCNs on random community graphs, where we can understand the influence of all the involved parameters. We have seen in Section 1 that the generalization behavior in this setting qualitatively matches generalization on real data.\nGraph convolution networks are composed of graph convolution filters and nonlinear activations. Removing the activations results in a so-called simple GCN (42) or a spectral GNN (43, 44). For a graph with adjacency matrix and features that live on the nodes ,\nwhere are trainable parameters and is the filter support size in terms of hops on the graph. We treat the neighborhood weights at different hops as hyperparameters. We let so that the model (3) reduces to ordinary linear regression when .\nIn standard feed-forward networks, removing the activations results in a linear end-to-end mapping. Surprisingly, GCNs without activations (such as SGC (42)) or with activations only in the output (such as FSGNN (40) and GPRGNN (12)) achieve state-of-the-art performance in many settings. (Footnote: GCNs without activations are sometimes called \u201clinear\u201d in analogy with feed-forward networks, but that terminology is misleading. In graph learning, both and are bona fide parts of the input, and a function which depends on their multiplication is a nonlinear function. What is more, in many applications is constructed deterministically from a dataset , for example as a neighborhood graph, resulting in even stronger nonlinearity.)\nWe will derive test risk expressions for the above graph convolution network in two shallow cases: and . We will also state a universality conjecture for general polynomial filters. Starting from this conjecture, we can in principle extend the results to all polynomial filters using routine but tedious computation. We provide an example for the training error of a two-hop network in SI Appendix C. As we will show, this analytic behavior closely resembles the motivational empirical findings from Section 1."
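A minimal sketch of the polynomial-filter model in (3); the coefficient list plays the role of the per-hop neighborhood weights treated as hyperparameters above (with coeffs[0] the self-loop term), though the paper's exact symbols were lost in extraction.

```python
import numpy as np

def simple_gcn_forward(A, X, w, coeffs):
    """y_hat = (sum_k coeffs[k] * A^k) X w: a graph convolution with filter
    support len(coeffs) - 1 hops and no hidden nonlinearity (sketch)."""
    n = A.shape[0]
    S = np.zeros((n, n))
    Ak = np.eye(n)
    for c in coeffs:          # coeffs[0] multiplies A^0 = I, the self-loop term
        S += c * Ak
        Ak = Ak @ A
    return S @ X @ w
```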
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Phenomenology of generalization in GCNs",
21
+ "text": "We focus on the behavior of the test risk under various levels of graph homophily, emphasizing two main aspects: i) different levels of homophily lead to different types of double descent; ii) self-loops, standard in GCNs, create an imbalance between heterophilic and homophilic datasets; negative self-loops improve the handling of heterophilic datasets."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Discussion",
27
+ "text": "Before delving into the details of the analytical methods in Section 5 and the conceptual connections between GNNs and spin glasses, we discuss the various interpretations of our results in the context of related work."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Generalization in GCNs via statistical physics",
33
+ "text": "The optimization problem (4) has a unique minimizer as long as . Since it is a linear least-squares problem in , we can write down a closed-form solution,\nwhere\nAnalyzing generalization is, in principle, as simple as substituting the closed-form expression (11) into (5) and (6) and calculating the requisite averages. The procedure is, however, complicated by the interaction between the graph and the features and the fact that is a random binary adjacency matrix. Further, for a symmetric , is correlated with even in a shallow GCN (and certainly in a deep one)."
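Numerically, the unique ridge minimizer and both risks follow in a few lines; a sketch in which F denotes the filtered features (e.g., S X for a one-hop filter) and lam the ridge strength, both names ours:

```python
import numpy as np

def ridge_gcn_fit(F, y, train_mask, lam):
    """Closed-form ridge solution on the labeled rows,
    w* = (F_tr^T F_tr + lam*I)^{-1} F_tr^T y_tr, applied to all nodes (sketch)."""
    F_tr, y_tr = F[train_mask], y[train_mask]
    d = F.shape[1]
    w = np.linalg.solve(F_tr.T @ F_tr + lam * np.eye(d), F_tr.T @ y_tr)
    return w, F @ w   # weights and transductive predictions for every node
```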
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Conclusion",
39
+ "text": "We analyzed generalization in graph neural networks by making an analogy with a system of interacting particles: particles correspond to the data points and the interactions are specified by the adjacency relation and the learnable weights. The latter can be interpreted as defining the \u201cinteraction physics\u201d of the problem. The best weights correspond to the most plausible interaction physics, coupled in turn with the network formation mechanism.\nThe setting that we analyzed is maybe the simplest combination of a graph convolution network and data distribution which exhibits interesting, realistic behavior. In order to theoretically capture a broader spectrum of complexity in graph learning we need new ideas in random matrix theory and its neural network counterparts (76). While very deep GCNs are known to suffer from oversmoothing, there exists an interesting intermediate-depth regime beyond a single layer (77). Our techniques should apply simply by replacing by any polynomial before solving the saddle point equation, but we will need a generalization of existing random matrix theory results for HCIZ integrals.\nFinally, it is likely that these generalized results could be made fully rigorous if the \u201cuniversality\u201d in Conjecture 1 could be established formally.\nWe thank the anonymous reviewers for suggestions on how to improve the presentation. Cheng Shi and Liming Pan would like to thank Zhenyu Liao (HUST) and Ming Li (ZJNU) for valuable discussions about RMT. Cheng Shi and Ivan Dokmani\u0107 were supported by the European Research Council (ERC) Starting Grant 852821\u2014SWING. Liming Pan would like to acknowledge support from the National Natural Science Foundation of China (NSFC) under Grants No. 62006122 and 42230406."
40
+ }
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Sketch of the derivation",
47
+ "text": "We now outline the replica-based derivations. We first consider a one-layer GCN to show the main idea. The training and test risks in this case are given by (5) in the main text. We further extend the analysis when self-loops are included in Appendix B.\nWe begin by defining the augmented partition function,\nwhere is the inverse temperature. The Hamiltonian in the above equation reads\nwhich is the loss (4) scaled by . The \u201cobservables\u201d and (the scaled training and test risks) are the quantities we are interested in:\nWhen the inverse temperature is small, the Gibbs measure is diffuse; when , the Gibbs measure converges to an atomic measure concentrated on the unique solution of (4). That is to say, for , we can write\nThe idea is to compute the values of the observables in the large system limit at a finite temperature and then take the limit . To this end, we define the free energy density corresponding to the augmented partition function,\nThe expected risks can be computed as\nAlthough concentrates for large , a direct computation of the quenched average (20) is intractable. We now use the replica trick, which moves the expectation inside the logarithm, or, in physics jargon, replaces the quenched average by an annealed average:\nThe main idea of the replica method is to first compute for integer by interpreting as the product of partition functions for independent configurations . Then we obtain by taking the limit in (22), even though the formula for is valid only for integer . The expectation of the replicated partition function reads\nwhere we denote . We first keep fixed and take the expectation over . Directly computing the expectation over the binary graph matrix is non-trivial. To make progress, we average over the Gaussian surrogate instead (cf. (15)). In Appendix A.2 we show that this Gaussian substitution does not change the free energy density, ultimately yielding the same risks and accuracies, as detailed in Conjecture 1.\nLetting , the elements of are now jointly Gaussian for any fixed and can be computed by multivariate Gaussian integration. It is not hard to see that depends only on a vector and a matrix defined as\nIn statistical physics these quantities are called the order parameters. We then define\nUsing the Fourier representation of the Dirac delta function , we have\nso that (23) becomes\nwhere\nIn (27) and (28), we apply the change of variables , .\nNote that while (25) still depends on , we do not average over it. As and appear as a product in (22), it is not straightforward to average over them simultaneously. We average over first to arrive at (27), and then find that (25)-(28) are self-averaging, meaning that they concentrate around their expectation as . Ultimately, this allows us to isolate the randomness in and . It also makes it possible to adapt our framework to other graph filters by simply replacing by in (25). 
Since (4) has a unique solution, we adopt the replica symmetry assumption, where the order parameters are structured as\nIn the limit , , we are only interested in the leading-order contributions to , so we write (with a small abuse of notation),\nSimilarly, we have\nwhere\nWe give the details of the derivation of in Appendix A.2 and of in Appendix A.3.\nWhen , we have , and\nWe can now compute (27) by integrating over only the parameters , , , , , and . For , the integral can be computed via the saddle point method:\nThe stationary point satisfies\nWhen , the stationary point exists only if scale as\nWe thus reparameterize them as\nIgnoring the small terms which vanish when , and denoting , we get\nwhere\nand\n.\nWe denote by , , , , , the stationary point in (36); substituting it into (21) yields the risks in (17).\nWe analyze the connection between the stationary point and .\nAs is the unique solution of (4), and from the definition of the order parameters (24), stationarity in (36) implies that\nLet be the selection of rows from corresponding to the -th row for all , and be the selection of rows corresponding to the -th row for all . The neural network output for the test nodes reads , where . Since we work with a non-symmetric Gaussian random matrix as our graph matrix, is independent of and (note that depends on ). Therefore, for any fixed and but random , the network outputs for the test nodes are jointly Gaussian,\nCombining this with the results from (37), we obtain the test accuracy as\nRecall that in (25) we define\nIn this section, we begin by computing in (30), where denotes the distribution of non-symmetric Gaussian spiked matrices (15). We then show that the symmetry does not influence the value of (39), i.e., when and . Finally, we show that the Gaussian substitution for the binary adjacency matrix does not influence the corresponding free energy density, which ultimately leads to the same risks and accuracies under different adjacency matrices (Conjecture 1).\nLet\u2019s first concatenate as\nThen we can rewrite (39) in vector form\nwhere is the Kronecker product. By the central limit theorem, when , the vectors for , , and all converge in distribution to Gaussian random vectors. Letting and be the mean and the covariance of , we get\nwhere . The vanishing lower-order term comes from the tails in the central limit theorem and is thus absent when . In this case we have\nwith and defined in (24). Leveraging the replica symmetric assumption (29), we compute the determinant term in (41) as\nThe in the third line comes from the approximation . The last two terms in (43) do not increase with and can thus be neglected in the limit when computing : they give rise to in (30). The remaining terms in (41) can be computed as\nCollecting everything, we get in (30). 
Note that , and we are going to show that .\nFor and , we find the means and covariances of as\nwhere with entries are order parameters analogous to in (24).\nSubstituting (44) into (41), we see that the perturbation in leads to a perturbation in , while the perturbation of in and leads to a perturbation in .\nIn and , there is a bias . By the replica symmetric assumption (29), we have\nIt is easy to show that a critical point in the saddle point equation with and exists only when for . Therefore, the term will not influence the value of the free energy density when . It further implies that the elements of are symmetrically distributed around zero. This is analogous to the vanishing average magnetization for the Sherrington\u2013Kirkpatrick model (72).\nTo summarize this section, as long as , averaging over , , and gives equivalent results in (34) when for a one-layer GCN with . (Footnote: in general it does not hold that nor that ; the equivalence stems from the fact that are symmetrically distributed, but for any fixed with , we have and .)\nWe recall (28) and denote ; then\nWe can compute the determinant term and the exponential term in (45) separately when . Denoting by the -th largest eigenvalue of , we have\nFor the same reasons as in (43), the two logarithmic terms in the last line of (46) can be ignored since they do not grow with . We denote . The first term on the RHS of (46) is then\nwhich is obtained by integrating over the Marchenko\u2013Pastur distribution.\nThe term inside the exponential in (45) can be computed as\nin which we first perturb by in the second line and then compute by the Woodbury matrix identity."
48
+ },
49
+ {
50
+ "section_id": "Appendix 2",
51
+ "parent_section_id": null,
52
+ "section_name": "Appendix B Self-loop computation",
53
+ "text": "When , we still follow the replica pipeline from Section A but with different (more complicated) and in (27).\nWe replace by in (25). Now the expectation over depends not only on and , but also on\nFor , since is a Gaussian with mean and covariance\nwe can compute (41) directly. Leveraging the replica symmetric assumption, i.e.,\nwe have\nwhere , , , and .\nAfter applying the Fourier representation method (26) to the order parameters (48), we obtain dual variables . We then get a new (28) as\nIntegrating over yields\nwhere and . By replica symmetry, we have\nAs in Appendix A.3, we can compute the determinant term and the exponential term separately when . For the sake of simplicity, we only consider and in what follows. Denoting and , we write the determinant term in (49) in the limit as\nThe exponential term in (49) can be computed similarly, noting that and are rotationally invariant since ,\nBoth (51) and (52) involve the same quantity,\nwhich can be computed by random matrix free convolution (78). The Green\u2019s function (also called the Cauchy function in the mathematical literature) is defined via the Stieltjes transform\nwhich in turn yields the spectrum transform\nThe corresponding Voiculescu S-transform reads\nWe get the multiplicative free convolution as\nAfter computing and , we obtain the expression for in (53). For example, when and , the eigenvalue distributions of asymptotically follow the Marchenko\u2013Pastur distribution as\nWe then get\nOnce and are computed, we have all the ingredients of the saddle point equation (34) with 12 variables,\nA critical point of this saddle point equation gives the explicit formulas for the risks in Section A."
54
+ },
55
+ {
56
+ "section_id": "Appendix 3",
57
+ "parent_section_id": null,
58
+ "section_name": "Appendix C A random matrix theory approach",
59
+ "text": "As mentioned in the main paper, if we start with the Gaussian adjacency matrices defined before Conjecture 1, we can obtain some of the results described above. For simplicity, we outline this approach for the full observation case , that is, for , and compute the empirical risk. The partial observation case follows the same strategy but involves more complicated calculations. We let ,\nand rescale variables as , . Following Conjecture 1, we replace the binary symmetric adjacency matrix by the Gaussian random matrix with a rank-one spike so that\nThe ridge loss reads\nand has the unique minimum\nwith . We need to compute the empirical risk,\nas well as the empirical loss\nwhere we set .\nWe first define four thin matrices,\nwhere . Then\nUsing the Woodbury matrix identity, and denoting , we have\nNow (57) can be computed as\nThe curly braces indicate random variables which all concentrate around their means (they are self-averaging in statistical physics terminology). Their expectations can be computed as follows:\n:\nThe first term on the RHS of (60) is a special case discussed in Section 3.2.1 of (76) and has also been discussed in (79). Recalling , we first compute the Green function (54) of as\nwhere is the solution of\nSince is rotationally invariant, we have\nwhere is the real solution of\nIt is easy to check that when , we have for .\nand : we use (58) and recall the definition of again to get\nas well as\n: we find the entries of are self-averaging, and we again use (58) to average ,\nwhere\nNow we have all the ingredients on the RHS of (60)/(57). Putting them together gives\nThe full expressions for the quantities in (62) are complicated. We thus analytically study the ridgeless limit in which the following hold:\nSubstituting into (63) yields\nFinally, reversing the rescaling of , , we get the same expressions for as in (18) for .\n[Figure 10] If we assume Conjecture 1 and begin with Gaussian adjacency matrices, this approach can easily be extended to multi-hop by defining and computing the corresponding . We can then obtain a closed-form expression via (63) after a longer computation. For example, when (two hops), , we get the training loss as\nThe accurate match between the numerical and theoretical results in Figure 10 also supports Conjecture 1."
60
+ },
61
+ {
62
+ "section_id": "Appendix 4",
63
+ "parent_section_id": null,
64
+ "section_name": "Appendix D A signal processing interpretation of self-loops",
65
+ "text": "We now give a simple interpretation of negative self-loops based on graph signal processing intuition (80, 81). In homophilic graphs the labels change slowly on the graph: they are a low-pass signal (81, 12) with most of their energy concentrated on the eigenvectors of the graph Laplacian which correspond to small eigenvalues or small \u201cfrequencies\u201d. Equivalently, they correspond to large eigenvalues of the adjacency matrix since . (If node degrees are all the same, the eigenvectors of the adjacency matrix and the Laplacian coincide.) On heterophilic graphs the labels usually change across an edge, which corresponds to a high-frequency signal concentrated on the small-eigenvalue eigenvectors of the adjacency matrix. A graph Fourier transform can be defined via the Laplacian but also via the adjacency matrix (81). The matrix product is a delay-like filter, diagonal in the graph Fourier domain with basis functions which are the eigenvectors of . We have , where is the -th smallest eigenvalue of .\nFigure 11 illustrates the spectra of homophilic and heterophilic labels and graphs. A homophilic graph (more precisely, a homophilic graph\u2013label pair) has a low-pass spectrum while a heterophilic graph has a high-pass spectrum. A self-loop shifts the spectrum of so that it becomes either a low-pass filter for positive or a high-pass filter for negative . As a result, the corresponding GCNs better suppress noise and enhance signal for the corresponding graph types. In particular, assuming that the label-related signal in lives between eigenvalues and (say, negative, so we are in a heterophilic situation), we can quantify the distortion induced by the filter as , which is close to 1 for large .\n[Figure 11]"
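The spectral-shift picture is easy to check numerically; here is a small self-contained sketch (the random graph and the shift values are arbitrary choices of ours):

```python
import numpy as np

def shifted_spectrum(A, c):
    """Eigenvalues of the filter A + c*I: c > 0 pushes the large-eigenvalue
    (low graph-frequency) end further out, giving low-pass behavior; c < 0
    instead emphasizes the small-eigenvalue (high-frequency) end, which
    suits heterophilic labels."""
    return np.linalg.eigvalsh(A) + c

# Tiny demo on a random symmetric 0/1 adjacency matrix.
rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T
print(shifted_spectrum(A, 1.0)[-3:])   # boosted low-frequency modes for c > 0
print(shifted_spectrum(A, -1.0)[:3])   # boosted high-frequency modes for c < 0
```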
66
+ },
67
+ {
68
+ "section_id": "Appendix 5",
69
+ "parent_section_id": null,
70
+ "section_name": "Appendix E Double descent in various GNNs",
71
+ "text": "In Figure 12 we experiment with node classification on the Citeseer dataset and some popular GNN architectures: the graph attention network (41), GraphSAGE (9), and the Chebyshev graph network (7). The architectures of these GNNs incorporate various strategies to mitigate overfitting. As a result there is no clear double descent in the test accuracy curves, but we still observe non-monotonicity in the test risk.\n[Figure 12]"
72
+ },
73
+ {
74
+ "section_id": "Appendix 6",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix F Experimental Details",
77
+ "text": "In this section we provide more details for the experiments in the main text.\nIn the real-world data experiments, all GCNs in Figure 1 are trained with the Adam optimizer with learning rate and weight decay . We run Adam for iterations and select the model with the minimal training loss. In each trial, the training and test nodes are selected uniformly at random. We sample training nodes separately for each label to avoid the pathology where a label has few or zero samples, which can happen at extremely low training ratios. We average different trials for each point; the error bars show their standard deviation. The standard deviation in the figures is mainly due to the train-test splits in the different trials; the fluctuations due to random initialization and stochastic training are comparatively small. We do not normalize the features or otherwise preprocess the data. All results in this paper are fully reproducible; the code is available at https://github.com/DaDaCheng/SMGCN.\nFor the CSBM experiments in Figs. 3, 5 and 6, we calculate by (11) and then compute (5) and (6). In Figs. 3 and 5, we use a symmetric binary adjacency matrix and set ; in Fig. 6 we use a non-symmetric binary adjacency matrix as defined in Conjecture 1. The theoretical results in Figs. 3, 4, 5 and 6 are obtained by computing the extreme values in (36)."
78
+ }
79
+ ],
80
+ "tables": {
81
+ "1": {
82
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.3\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.3.3\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S3.T1.3.3.4\" style=\"padding-top:1pt;padding-bottom:1pt;\"></td>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.1.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.1.1.1.1\">GCN</span> ()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.2.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.2.2.2.1\">GCN</span> ()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.3.5\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.3.5.1\">FSGNN</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.3.3.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.3.3.1\">FSGNN</span> ()</th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.4.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S3.T1.3.4.1.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.4.1.1.1\">Chameleon</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.1.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">75.81\u00b11.69</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.1.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">76.29\u00b11.22</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.1.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">78.27\u00b11.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.3.4.1.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">78.96\u00b11.05</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.3.5.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S3.T1.3.5.2.1\" style=\"padding-top:1pt;padding-bottom:1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S3.T1.3.5.2.1.1\">Squirrel</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.2.2\" style=\"padding-top:1pt;padding-bottom:1pt;\">67.19\u00b11.48</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.2.3\" style=\"padding-top:1pt;padding-bottom:1pt;\">68.62\u00b12.13</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.2.4\" style=\"padding-top:1pt;padding-bottom:1pt;\">74.10\u00b11.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.3.5.2.5\" style=\"padding-top:1pt;padding-bottom:1pt;\">74.34\u00b11.21</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S3.T1.5.1.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S3.T1.6.2\" style=\"font-size:90%;\">Comparison of test accuracy when negative self-loop is absent (first and third column) or present (second and fourth column). 
The datasets and splits are the same as Fig.\u00a0<a class=\"ltx_ref\" href=\"#S3.F7\" title=\"Figure 7 \u2023 Negative self-loops in state-of-the-art GCNs \u2023 3 Phenomenology of generalization in GCNs \u2023 Homophily modulates double descent generalization in graph convolution networks\"><span class=\"ltx_text ltx_ref_tag\">7</span></a>.</span></figcaption>\n</figure>",
83
+ "capture": "Table 1: Comparison of test accuracy when negative self-loop is absent (first and third column) or present (second and fourth column). The datasets and splits are the same as Fig.\u00a07."
84
+ },
85
+ "2": {
86
+ "table_html": "<figure class=\"ltx_table\" id=\"A6.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A6.T2.4\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A6.T2.4.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A6.T2.4.5.1.1\">Datasets</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.5.1.2\">Cora</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.5.1.3\">Citeseer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.5.1.4\">Squirrel</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.5.1.5\">Chameleon</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.5.1.6\">Texas</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.1.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A6.T2.1.1.1\">Features ()</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.2\">1433</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.3\">3703</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.4\">2089</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.5\">2325</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T2.1.1.6\">1703</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.2.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A6.T2.2.2.1\">Nodes ()</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.2.2.2\">2708</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.2.2.3\">3327</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.2.2.4\">5201</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.2.2.5\">2277</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.2.2.6\">183</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.4.6.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A6.T2.4.6.2.1\">Edges</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.6.2.2\">5278</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.6.2.3\">4552</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.6.2.4\">198353</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.6.2.5\">31371</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.6.2.6\">279</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.3.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A6.T2.3.3.1\">Inverse relative model complexity ()</th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.2\">1.89</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.3\">0.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.4\">2.49</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.5\">0.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.3.3.6\">0.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T2.4.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A6.T2.4.4.1\"></th>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.4.2\">0.825</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.4.3\">0.718</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.4.4\">0.217</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.4.5\">0.247</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T2.4.4.6\">0.057</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"A6.T2.8.2.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"A6.T2.6.1\" style=\"font-size:90%;\">Benchmark dataset 
properties and statistics. The last row is the level of homophily defined in <cite class=\"ltx_cite ltx_citemacro_cite\">(<a class=\"ltx_ref\" href=\"#bib.bib11\" title=\"\">11</a>)</cite>.</span></figcaption>\n</figure>",
87
+ "capture": "Table 2: Benchmark dataset properties and statistics. is the level of homophily defined in (11)."
88
+ }
89
+ },
90
+ "image_paths": {
91
+ "1": {
92
+ "figure_path": "2212.13069v3_figure_1.png",
93
+ "caption": "Figure 1: Double descent generalization for different GNNs, different losses, with and without explicit regularization, on datasets with varying levels of noise. We plot both the test error (red) and the test accuracy (black) against different training label ratios \u03c4\ud835\udf0f\\tauitalic_\u03c4 on the abscissa on a logarithmic scale. First column: one linear layer trained by MSE loss; second column: a two-layer GCN with ReLU activations and MSE loss; third column: a two-layer GCN with ReLU activation function, dropout and MSE loss; fourth column: a two-layer GCN with ReLU activations, dropout and cross-entropy loss; Each experimental data point is averaged over 10 random train\u2013test splits; the shadow area represents the standard deviation.\nThe right ordinate axis shows classification accuracy; we suppress the left-axis ticks due to different numerical ranges.\nWe observe that double descent is ubiquitous across datasets and architectures when varying the ratio of training labels: there often exists a regime where more labels impair generalization.",
94
+ "url": "http://arxiv.org/html/2212.13069v3/x1.png"
95
+ },
96
+ "2": {
97
+ "figure_path": "2212.13069v3_figure_2.png",
98
+ "caption": "Figure 2: Test error with different training label ratios for different GCNs on chameleon (heterophilic) datasets. A: FSGNN(40); B: two- layer GCN with ReLU activations and cross-entropy loss; C: one layer GCN with cross entropy loss; (D): one layer GCN with MSE loss.\nWe interpolate between the original dataset shown in blue (0%percent00\\%0 % noise), and an Erd\u0151s\u2013R\u00e9nyi random graph shown in red (100%percent100100\\%100 % noise) by adding noise in increments of 20%percent2020\\%20 %. Noise is introduced by first randomly removing a given proportion of edges and then adding the same number of new random edges. The node features are kept the same. Each data point is averaged ten times, and the abscissa is on a logarithmic scale. We see that graph noise accentuates double descent, which is consistent with our theoretical results (see Fig. 3B). Similarly, better GNNs attenuate the effect where additional labels hurt generalization.",
99
+ "url": "http://arxiv.org/html/2212.13069v3/x2.png"
100
+ },
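The noise-interpolation procedure in the caption of Figure 2 (remove a given fraction of the existing edges, add back the same number of random edges, keep node features fixed) can be sketched as follows; we assume a dense symmetric 0/1 NumPy adjacency matrix, and all names are ours:

```python
import numpy as np

def perturb_edges(adj, noise, seed=0):
    """Remove a fraction `noise` of the existing undirected edges, then add
    the same number of new edges uniformly at random; features are untouched."""
    rng = np.random.default_rng(seed)
    out = adj.copy()
    i, j = np.triu_indices_from(out, k=1)      # one index per undirected pair
    present = np.flatnonzero(out[i, j] > 0)
    absent = np.flatnonzero(out[i, j] == 0)
    k = int(round(noise * present.size))
    drop = rng.choice(present, size=k, replace=False)
    add = rng.choice(absent, size=k, replace=False)
    for idx, val in ((drop, 0.0), (add, 1.0)):
        out[i[idx], j[idx]] = val
        out[j[idx], i[idx]] = val              # keep the matrix symmetric
    return out
```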
101
+ "3": {
102
+ "figure_path": "2212.13069v3_figure_3.png",
103
+ "caption": "Figure 3: Theoretical results computed by the replica method (solid line) versus experimental results (solid circles) on CSBM, with \ud835\udc77\u2062(\ud835\udc68)=\ud835\udc68\ud835\udc77\ud835\udc68\ud835\udc68\\bm{{P}}(\\bm{{A}})=\\bm{{A}}bold_italic_P ( bold_italic_A ) = bold_italic_A, for varying training label ratios \u03c4\ud835\udf0f\\tauitalic_\u03c4. A: training and test risks with \u03bb=\u03bc=1\ud835\udf06\ud835\udf071\\lambda=\\mu=1italic_\u03bb = italic_\u03bc = 1, \u03b3=5\ud835\udefe5\\gamma=5italic_\u03b3 = 5 and r=0\ud835\udc5f0r=0italic_r = 0. (For \u03c4<0.2\ud835\udf0f0.2\\tau<0.2italic_\u03c4 < 0.2, we use the pseudoinverse in (11) in numerics and r=10\u22125\ud835\udc5fsuperscript105r=10^{-5}italic_r = 10 start_POSTSUPERSCRIPT - 5 end_POSTSUPERSCRIPT for the theoretical curves). We further study the impact of varying \u03bb\ud835\udf06\\lambdaitalic_\u03bb in B and r\ud835\udc5fritalic_r in C. We set r=0.02\ud835\udc5f0.02r=0.02italic_r = 0.02, \u03b3=2\ud835\udefe2\\gamma=2italic_\u03b3 = 2, \u03bc=1\ud835\udf071\\mu=1italic_\u03bc = 1 in B and \u03bb=3\ud835\udf063\\lambda=3italic_\u03bb = 3, \u03bc=1\ud835\udf071\\mu=1italic_\u03bc = 1, \u03b3=2\ud835\udefe2\\gamma=2italic_\u03b3 = 2 in C. In all experiments we set N=5000\ud835\udc415000N=5000italic_N = 5000 and d=30\ud835\udc5130d=30italic_d = 30. We work with the symmetric binary adjacency matrix ensemble \ud835\udc9cbssuperscript\ud835\udc9cbs\\mathcal{A}^{\\text{bs}}caligraphic_A start_POSTSUPERSCRIPT bs end_POSTSUPERSCRIPT. Each experimental data point is averaged over 10101010 independent trials; their standard deviation is shown by vertical bars. The theoretical curves agree perfectly with experiments but also qualitatively with the phenomena we observed on real data in Section 1.",
104
+ "url": "http://arxiv.org/html/2212.13069v3/x3.png"
105
+ },
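Figures 3–6 report experiments on the CSBM. A hedged sketch of sampling a two-class contextual SBM; the edge probability d/n + λ√d/n · y_i y_j and the rank-one feature spike of strength μ are one common parameterization, and the exact normalizations used in the paper may differ:

```python
import numpy as np

def sample_csbm(n, p, d, lam, mu, seed=0):
    """Two-class contextual SBM: labels y_i = +/-1, edges Bernoulli with
    probability d/n + lam*sqrt(d)/n * y_i*y_j (homophilic for lam > 0),
    features = sqrt(mu/n) * outer(y, u) + Gaussian noise."""
    rng = np.random.default_rng(seed)
    y = rng.choice([-1.0, 1.0], size=n)
    prob = np.clip(d / n + lam * np.sqrt(d) / n * np.outer(y, y), 0.0, 1.0)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    adj = (upper | upper.T).astype(float)      # symmetric binary ensemble
    u = rng.standard_normal(p)
    feats = np.sqrt(mu / n) * np.outer(y, u) + rng.standard_normal((n, p))
    return adj, feats, y
```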
106
+ "4": {
107
+ "figure_path": "2212.13069v3_figure_4.png",
108
+ "caption": "Figure 4: Test risk as a function of relative model complexity \u03b1=\u03b3\u22121\ud835\udefcsuperscript\ud835\udefe1\\alpha=\\gamma^{-1}italic_\u03b1 = italic_\u03b3 start_POSTSUPERSCRIPT - 1 end_POSTSUPERSCRIPT: different levels of homophily lead to distinct types of double descent in CSBM. Plots from left to right (with increasing \u03bb\ud835\udf06\\lambdaitalic_\u03bb) show curves for graphs of decreasing randomness. Varying model complexity in GNNs yields non-monotonic curves similar to those in the earlier studies of double descent studies in supervised (inductive) learning. Note that the overall shape of the curve is strongly modulated by the degree of homophily in the graph.",
109
+ "url": "http://arxiv.org/html/2212.13069v3/x4.png"
110
+ },
111
+ "5": {
112
+ "figure_path": "2212.13069v3_figure_5.png",
113
+ "caption": "Figure 5: Four typical generalization curves in CSBM model. The solid lines represent theoretical results of test risk (black) and accuracy (red) computed via (17). We also plot the mean and variance of test output \ud835\udc89i\u2062(\ud835\udc98*)subscript\ud835\udc89\ud835\udc56superscript\ud835\udc98\\bm{{h}}_{i}(\\bm{{w}}^{*})bold_italic_h start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ( bold_italic_w start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT ) where i\u2208Vtest\ud835\udc56subscript\ud835\udc49testi\\in V_{\\text{test}}italic_i \u2208 italic_V start_POSTSUBSCRIPT test end_POSTSUBSCRIPT. This illustrates how the tradeoff of Mean-Variance leads to different double descent curves. Note we only display results for nodes with label \ud835\udc9ai=1subscript\ud835\udc9a\ud835\udc561\\bm{{y}}_{i}=1bold_italic_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = 1; the result for the \ud835\udc9ai=\u22121subscript\ud835\udc9a\ud835\udc561\\bm{{y}}_{i}=-1bold_italic_y start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT = - 1 class simply has opposite mean and identical variance.\nA: monotonic ACCACC\\mathrm{ACC}roman_ACC (increasing) and Rtestsubscript\ud835\udc45testR_{\\text{test}}italic_R start_POSTSUBSCRIPT test end_POSTSUBSCRIPT (decreasing) when regularization r\ud835\udc5fritalic_r is large; B: A typical double descent with small regularization r\ud835\udc5fritalic_r;\nC slight double descent with relative model complexity \u03b1\ud835\udefc\\alphaitalic_\u03b1 close to 1111;\nD (near-monotonically) decreasing ACCACC\\mathrm{ACC}roman_ACC and increasing Rtestsubscript\ud835\udc45testR_{\\text{test}}italic_R start_POSTSUBSCRIPT test end_POSTSUBSCRIPT with large relative model complexity \u03b1=1/\u03b3\ud835\udefc1\ud835\udefe\\alpha=1/\\gammaitalic_\u03b1 = 1 / italic_\u03b3. The parameters are chosen as A: \u03bc=1,\u03bb=2,\u03b3=5,r=2formulae-sequence\ud835\udf071formulae-sequence\ud835\udf062formulae-sequence\ud835\udefe5\ud835\udc5f2\\mu=1,\\lambda=2,\\gamma=5,r=2italic_\u03bc = 1 , italic_\u03bb = 2 , italic_\u03b3 = 5 , italic_r = 2; B: \u03bc=1,\u03bb=2,\u03b3=5,r=0.1formulae-sequence\ud835\udf071formulae-sequence\ud835\udf062formulae-sequence\ud835\udefe5\ud835\udc5f0.1\\mu=1,\\lambda=2,\\gamma=5,r=0.1italic_\u03bc = 1 , italic_\u03bb = 2 , italic_\u03b3 = 5 , italic_r = 0.1;\nC: \u03bc=1,\u03bb=2,\u03b3=1.2,r=0.05formulae-sequence\ud835\udf071formulae-sequence\ud835\udf062formulae-sequence\ud835\udefe1.2\ud835\udc5f0.05\\mu=1,\\lambda=2,\\gamma=1.2,r=0.05italic_\u03bc = 1 , italic_\u03bb = 2 , italic_\u03b3 = 1.2 , italic_r = 0.05;\nD: \u03bc=5,\u03bb=1,\u03b3=0.1,r=0.005formulae-sequence\ud835\udf075formulae-sequence\ud835\udf061formulae-sequence\ud835\udefe0.1\ud835\udc5f0.005\\mu=5,\\lambda=1,\\gamma=0.1,r=0.005italic_\u03bc = 5 , italic_\u03bb = 1 , italic_\u03b3 = 0.1 , italic_r = 0.005.\nThe solid circles and vertical bars represent the mean and standard deviation of risk and accuracy from experiment results.\nEach experimental data point is averaged over 10101010 independent trials; the standard deviation is indicated by vertical bars. We use N=5000\ud835\udc415000N=5000italic_N = 5000 and d=30\ud835\udc5130d=30italic_d = 30 for A, B and C, and N=500\ud835\udc41500N=500italic_N = 500 and d=20\ud835\udc5120d=20italic_d = 20 for D. In all case we use the symmetric binary adjacency matrix ensemble \ud835\udc9cbssuperscript\ud835\udc9cbs\\mathcal{A}^{\\text{bs}}caligraphic_A start_POSTSUPERSCRIPT bs end_POSTSUPERSCRIPT.",
114
+ "url": "http://arxiv.org/html/2212.13069v3/x5.png"
115
+ },
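The mean–variance reading of Figure 5 admits a quick worked check. For a +1-labelled node whose scalar output is Gaussian with mean m and variance v, the squared-error test risk is (m − 1)² + v and the accuracy is Φ(m/√v). The snippet below (ours, purely illustrative) shows how a biased but low-variance output can beat an unbiased high-variance one:

```python
from math import erf, sqrt

def gaussian_test_metrics(m, v):
    """Risk and accuracy of an output h ~ N(m, v) on a +1-labelled node:
    risk = E[(h - 1)^2] = (m - 1)^2 + v,  acc = P(h > 0) = Phi(m / sqrt(v))."""
    risk = (m - 1.0) ** 2 + v
    acc = 0.5 * (1.0 + erf(m / sqrt(2.0 * v)))
    return risk, acc

print(gaussian_test_metrics(1.0, 2.0))  # unbiased, high variance: risk 2.0, acc ~0.76
print(gaussian_test_metrics(0.5, 0.1))  # biased, low variance:  risk 0.35, acc ~0.94
```

By the symmetry noted in the caption, the −1 class gives the same numbers.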
116
+ "6": {
117
+ "figure_path": "2212.13069v3_figure_6.png",
118
+ "caption": "Figure 6: Train and test risks on CSBM for different intensities of self loops. A: train and test risk for \u03c4=0.8\ud835\udf0f0.8\\tau=0.8italic_\u03c4 = 0.8 and \u03bb=\u22121\ud835\udf061\\lambda=-1italic_\u03bb = - 1 (heterophilic). B: test risks for \u03b3=0.8\ud835\udefe0.8\\gamma=0.8italic_\u03b3 = 0.8, \u03c4=0.8\ud835\udf0f0.8\\tau=0.8italic_\u03c4 = 0.8, \u03bc=0\ud835\udf070\\mu=0italic_\u03bc = 0 under different \u03bb\ud835\udf06\\lambdaitalic_\u03bb. C: training risk for different \u03bc\ud835\udf07\\muitalic_\u03bc when \u03c4=\u03bb=1\ud835\udf0f\ud835\udf061\\tau=\\lambda=1italic_\u03c4 = italic_\u03bb = 1. Each data point is averaged over 10101010 independent trials with N=5000\ud835\udc415000N=5000italic_N = 5000, r=0\ud835\udc5f0r=0italic_r = 0, and d=30\ud835\udc5130d=30italic_d = 30. We use the non-symmetric binary adjacency matrix ensemble \ud835\udc9cbnsuperscript\ud835\udc9cbn\\mathcal{A}^{\\text{bn}}caligraphic_A start_POSTSUPERSCRIPT bn end_POSTSUPERSCRIPT. The solid lines are the theoretical results predicted by the replica method. In B we see that the optimal generalization performance requires adapting the self-loop intensity c\ud835\udc50citalic_c to the degree of homophily.",
119
+ "url": "http://arxiv.org/html/2212.13069v3/x6.png"
120
+ },
121
+ "7": {
122
+ "figure_path": "2212.13069v3_figure_7.png",
123
+ "caption": "Figure 7: Test accuracy (black) and test error (red) in node classification with GCNs on real heterophilic graphs with different self-loop intensities. We implement a two-layer ReLU GCN with 128128128128 hidden neurons and an additional self-loop with strength c\ud835\udc50citalic_c. Each setting is averaged over different training\u2013test splits taken from (11) (60% training, 20% validation, 20% test). The relatively large standard deviation (vertical bars) is mainly due to the randomness of the splits. The randomness from model initialization and training is comparatively small. The optimal test accuracy for these two datasets is obtained for self-loop intensity \u22120.5<c*<\u221210.5superscript\ud835\udc501-0.5<c^{*}<-1- 0.5 < italic_c start_POSTSUPERSCRIPT * end_POSTSUPERSCRIPT < - 1.",
124
+ "url": "http://arxiv.org/html/2212.13069v3/x7.png"
125
+ },
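The tunable self-loop studied in Figures 6 and 7 amounts to replacing the propagation matrix A with A + cI. A one-line sketch (dense NumPy, names ours); the paper's two-layer GCNs apply this inside each layer:

```python
import numpy as np

def propagate_with_self_loop(adj, feats, c):
    """One propagation step with a tunable self-loop, P(A) = A + c*I.
    c = 1 is the standard GCN self-loop; on the heterophilic graphs of
    Figs. 6-7 the optimal intensity is negative (roughly -1 < c < -0.5)."""
    return (adj + c * np.eye(adj.shape[0])) @ feats
```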
126
+ "8": {
127
+ "figure_path": "2212.13069v3_figure_8.png",
128
+ "caption": "Figure 8: Training label ratio when a one-layer GCN matches the performance of unsupervised belief propagation at \u03bc=\u03bb=\u03b3=1\ud835\udf07\ud835\udf06\ud835\udefe1\\mu=\\lambda=\\gamma=1italic_\u03bc = italic_\u03bb = italic_\u03b3 = 1. The black solid line denotes the information-theoretic detection threshold in the unsupervised setting where no label information is available ( i.e., when we use only \ud835\udc68\ud835\udc68\\bm{{A}}bold_italic_A, \ud835\udc7f\ud835\udc7f\\bm{{X}}bold_italic_X). If given a small number of labels, a simple, generally sub-optimal estimator matches the performance of the optimal unsupervised estimator.",
129
+ "url": "http://arxiv.org/html/2212.13069v3/x8.png"
130
+ },
131
+ "9": {
132
+ "figure_path": "2212.13069v3_figure_9.png",
133
+ "caption": "Figure 9: Numerical validation of Conjecture 1. In A & D: we show training and test risks with different numbers of nodes for P\u2062(A)=A\ud835\udc43\ud835\udc34\ud835\udc34P(A)=Aitalic_P ( italic_A ) = italic_A. The parameters are set to \u03b3=\u03bb=\u03bc=2,r=0.01,\u03c4=0.8formulae-sequence\ud835\udefe\ud835\udf06\ud835\udf072formulae-sequence\ud835\udc5f0.01\ud835\udf0f0.8\\gamma=\\lambda=\\mu=2,r=0.01,\\tau=0.8italic_\u03b3 = italic_\u03bb = italic_\u03bc = 2 , italic_r = 0.01 , italic_\u03c4 = 0.8 and d=N/2\ud835\udc51\ud835\udc412d=\\sqrt{N}/2italic_d = square-root start_ARG italic_N end_ARG / 2. In B & E, we show the absolute difference of the risks between binary and Gaussian adjacency as a function of N\ud835\udc41Nitalic_N, using the same data in A & D. The solid lines correspond to a linear fit in the logarithmic scale, which shows that the error scales as |\u0394|\u223cN\u22120.5similar-to\u0394superscript\ud835\udc410.5|\\Delta|\\sim N^{-0.5}| roman_\u0394 | \u223c italic_N start_POSTSUPERSCRIPT - 0.5 end_POSTSUPERSCRIPT. In C & F we show the training and test risks when \ud835\udc77\u2062(\ud835\udc68)=\ud835\udc682\ud835\udc77\ud835\udc68superscript\ud835\udc682\\bm{{P}}(\\bm{{A}})=\\bm{{A}}^{2}bold_italic_P ( bold_italic_A ) = bold_italic_A start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT under different average node degrees d\ud835\udc51ditalic_d. Other parameters are set to \u03bb=\u03bc=1,\u03b3=2,N=2000,\u03c4=0.8formulae-sequence\ud835\udf06\ud835\udf071formulae-sequence\ud835\udefe2formulae-sequence\ud835\udc412000\ud835\udf0f0.8\\lambda=\\mu=1,\\gamma=2,N=2000,\\tau=0.8italic_\u03bb = italic_\u03bc = 1 , italic_\u03b3 = 2 , italic_N = 2000 , italic_\u03c4 = 0.8 and r=0.01\ud835\udc5f0.01r=0.01italic_r = 0.01. In these settings, the conjecture empirically holds up to scrutiny.",
134
+ "url": "http://arxiv.org/html/2212.13069v3/x9.png"
135
+ },
136
+ "10": {
137
+ "figure_path": "2212.13069v3_figure_10.png",
138
+ "caption": "Figure 10: Theoretical results (solid line) vs. experimental results (solid circles) for varying homophily of graphs (\u03bb\ud835\udf06\\lambdaitalic_\u03bb). We compare the one-hop case (P\u2062(\ud835\udc68)=\ud835\udc68\ud835\udc43\ud835\udc68\ud835\udc68P(\\bm{{A}})=\\bm{{A}}italic_P ( bold_italic_A ) = bold_italic_A) and two-hops case (P\u2062(\ud835\udc68)=\ud835\udc682\ud835\udc43\ud835\udc68superscript\ud835\udc682P(\\bm{{A}})=\\bm{{A}}^{2}italic_P ( bold_italic_A ) = bold_italic_A start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT) for non-symmetric CSBM with \u03bc=0,\u03c4=1,N=2000formulae-sequence\ud835\udf070formulae-sequence\ud835\udf0f1\ud835\udc412000\\mu=0,\\tau=1,N=2000italic_\u03bc = 0 , italic_\u03c4 = 1 , italic_N = 2000 and d=30\ud835\udc5130d=30italic_d = 30. We use non-symmetric binary adjacency matrix \ud835\udc9cbnsuperscript\ud835\udc9cbn\\mathcal{A}^{\\text{bn}}caligraphic_A start_POSTSUPERSCRIPT bn end_POSTSUPERSCRIPT. Each experimental data point is averaged over 10101010 independent trials and the standard deviation is indicated by vertical lines.",
139
+ "url": "http://arxiv.org/html/2212.13069v3/x10.png"
140
+ },
141
+ "11": {
142
+ "figure_path": "2212.13069v3_figure_11.png",
143
+ "caption": "Figure 11: A spectral perspective understanding of self-loops in GCNs. The first (blue) column shows a projection of the true signal into the graph spectral domain. The following three columns illustrate the process of graph signal filtering \ud835\udc68\u2062\ud835\udc99=\ud835\udc89\ud835\udc68\ud835\udc99\ud835\udc89\\bm{{A}}\\bm{{x}}=\\bm{{h}}bold_italic_A bold_italic_x = bold_italic_h in the graph spectrum domain. The second column shows the eigenvalues for \ud835\udc68\ud835\udc68\\bm{{A}}bold_italic_A and \ud835\udc68\u2212\ud835\udc70\ud835\udc68\ud835\udc70\\bm{{A}}-\\bm{{I}}bold_italic_A - bold_italic_I. The third column shows the signal \ud835\udc99\ud835\udc99\\bm{{x}}bold_italic_x in the spectral domain, and the forth column show the corresponding filtered signal in the spectral domain. The signal \ud835\udc99=\ud835\udc9a+\ud835\udf43\ud835\udc99\ud835\udc9a\ud835\udf43\\bm{{x}}=\\bm{{y}}+\\bm{\\xi}bold_italic_x = bold_italic_y + bold_italic_\u03be is noisy, and it in general becomes closer to the target signal \ud835\udc9a\ud835\udc9a\\bm{{y}}bold_italic_y after been filtered. In the homophilic case, the signal been filtered by \ud835\udc68\u2062\ud835\udc99\ud835\udc68\ud835\udc99\\bm{{A}}\\bm{{x}}bold_italic_A bold_italic_x is closer to the true signal compared to (\ud835\udc68+\ud835\udc70)\u2062\ud835\udc99\ud835\udc68\ud835\udc70\ud835\udc99(\\bm{{A}}+\\bm{{I}})\\bm{{x}}( bold_italic_A + bold_italic_I ) bold_italic_x; while in the heterophilic case, (\ud835\udc68\u2212\ud835\udc70)\u2062\ud835\udc99\ud835\udc68\ud835\udc70\ud835\udc99(\\bm{{A}}-\\bm{{I}})\\bm{{x}}( bold_italic_A - bold_italic_I ) bold_italic_x is better than \ud835\udc68\u2062\ud835\udc99\ud835\udc68\ud835\udc99\\bm{{A}}\\bm{{x}}bold_italic_A bold_italic_x. In all the figures, the spectral basis are arranged in the order of increasing frequency.",
144
+ "url": "http://arxiv.org/html/2212.13069v3/x11.png"
145
+ },
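The spectral argument in Figure 11 can be verified numerically: for symmetric A = V diag(w) Vᵀ, filtering with A + cI multiplies the spectral coefficients of the signal by w + c, i.e., the self-loop shifts every eigenvalue by c. A small sketch (ours):

```python
import numpy as np

def spectral_filter(adj, x, c=0.0):
    """Filter a node signal x with (A + c*I), written in the eigenbasis of A:
    (A + c*I) x = V diag(w + c) V^T x, so c shifts all graph frequencies."""
    w, v = np.linalg.eigh(adj)        # eigenpairs of the symmetric adjacency
    coeffs = v.T @ x                  # the signal in the spectral domain
    return v @ ((w + c) * coeffs)     # re-weighted and mapped back
```

A negative c suppresses the smooth (low-frequency) components relative to the oscillatory ones, matching the heterophilic panel of the figure; a positive c favors them, as in the homophilic panel.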
146
+ "12(a)": {
147
+ "figure_path": "2212.13069v3_figure_12(a).png",
148
+ "caption": "A\nFigure 12: Test error and classification accuracy at different training ratios for Chebyshev GNN (ChebNet), Graph Attention Network (GAT), the Graph Sample and Aggregate Network (SAGE), Topology Adaptive Graph Convolutional Networks (TAGCN), on the Citeeer dataset. All models have two layers with ReLU activations, and are trained by ADAM with the cross-entropy loss.",
149
+ "url": "http://arxiv.org/html/2212.13069v3/x12.png"
150
+ },
151
+ "12(b)": {
152
+ "figure_path": "2212.13069v3_figure_12(b).png",
153
+ "caption": "B\nFigure 12: Test error and classification accuracy at different training ratios for Chebyshev GNN (ChebNet), Graph Attention Network (GAT), the Graph Sample and Aggregate Network (SAGE), Topology Adaptive Graph Convolutional Networks (TAGCN), on the Citeeer dataset. All models have two layers with ReLU activations, and are trained by ADAM with the cross-entropy loss.",
154
+ "url": "http://arxiv.org/html/2212.13069v3/x13.png"
155
+ },
156
+ "12(c)": {
157
+ "figure_path": "2212.13069v3_figure_12(c).png",
158
+ "caption": "C\nFigure 12: Test error and classification accuracy at different training ratios for Chebyshev GNN (ChebNet), Graph Attention Network (GAT), the Graph Sample and Aggregate Network (SAGE), Topology Adaptive Graph Convolutional Networks (TAGCN), on the Citeeer dataset. All models have two layers with ReLU activations, and are trained by ADAM with the cross-entropy loss.",
159
+ "url": "http://arxiv.org/html/2212.13069v3/x14.png"
160
+ },
161
+ "12(d)": {
162
+ "figure_path": "2212.13069v3_figure_12(d).png",
163
+ "caption": "D\nFigure 12: Test error and classification accuracy at different training ratios for Chebyshev GNN (ChebNet), Graph Attention Network (GAT), the Graph Sample and Aggregate Network (SAGE), Topology Adaptive Graph Convolutional Networks (TAGCN), on the Citeeer dataset. All models have two layers with ReLU activations, and are trained by ADAM with the cross-entropy loss.",
164
+ "url": "http://arxiv.org/html/2212.13069v3/x15.png"
165
+ }
166
+ },
167
+ "validation": true,
168
+ "references": [
169
+ {
170
+ "1": {
171
+ "title": "\\JournalTitlearXiv preprint arXiv:2212.12794 (2022).",
172
+ "author": "R Lam, et al., GraphCast: Learning skillful medium-range global weather forecasting.",
173
+ "venue": null,
174
+ "url": null
175
+ }
176
+ },
177
+ {
178
+ "2": {
179
+ "title": "\\JournalTitleNature Communications 13, 4424 (2022).",
180
+ "author": "R Mandal, C Casert, P Sollich, Robust prediction of force chains in jammed solids using graph neural networks.",
181
+ "venue": null,
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "3": {
187
+ "title": "\\JournalTitleAdvances in neural information processing systems 32 (2019).",
188
+ "author": "J Ingraham, V Garg, R Barzilay, T Jaakkola, Generative models for graph-based protein design.",
189
+ "venue": null,
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "4": {
195
+ "title": "\\JournalTitleNature communications 12, 3168 (2021).",
196
+ "author": "V Gligorijevi\u0107, et al., Structure-based protein function prediction using graph convolutional networks.",
197
+ "venue": null,
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "5": {
203
+ "title": "\\JournalTitleNature 596, 583\u2013589 (2021).",
204
+ "author": "J Jumper, et al., Highly accurate protein structure prediction with AlphaFold.",
205
+ "venue": null,
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "6": {
211
+ "title": "(2014).",
212
+ "author": "JE Bruna, W Zaremba, A Szlam, Y LeCun, Spectral networks and deep locally connected networks on graphs in International Conference on Learning Representations.",
213
+ "venue": null,
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "7": {
219
+ "title": "\\JournalTitleAdvances in neural information processing systems 29 (2016).",
220
+ "author": "M Defferrard, X Bresson, P Vandergheynst, Convolutional neural networks on graphs with fast localized spectral filtering.",
221
+ "venue": null,
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "8": {
227
+ "title": "(2017).",
228
+ "author": "TN Kipf, M Welling, Semi-supervised classification with graph convolutional networks in International Conference on Learning Representations.",
229
+ "venue": null,
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "9": {
235
+ "title": "\\JournalTitleAdvances in neural information processing systems 30 (2017).",
236
+ "author": "W Hamilton, Z Ying, J Leskovec, Inductive representation learning on large graphs.",
237
+ "venue": null,
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "10": {
243
+ "title": "\\JournalTitleAdvances in neural information processing systems 33, 7793\u20137804 (2020).",
244
+ "author": "J Zhu, et al., Beyond homophily in graph neural networks: Current limitations and effective designs.",
245
+ "venue": null,
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "11": {
251
+ "title": "(2020).",
252
+ "author": "H Pei, B Wei, KCC Chang, Y Lei, B Yang, Geom-GCN: Geometric graph convolutional networks in International Conference on Learning Representations.",
253
+ "venue": null,
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "12": {
259
+ "title": "(2021).",
260
+ "author": "E Chien, J Peng, P Li, O Milenkovic, Adaptive universal generalized PageRank graph neural network in International Conference on Learning Representations.",
261
+ "venue": null,
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "13": {
267
+ "title": "(2020).",
268
+ "author": "K Oono, T Suzuki, Graph neural networks exponentially lose expressive power for node classification in International Conference on Learning Representations.",
269
+ "venue": null,
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "14": {
275
+ "title": "\\JournalTitleJournal of Statistical Mechanics: Theory and Experiment 2021, 124003 (2021).",
276
+ "author": "P Nakkiran, et al., Deep double descent: Where bigger models and more data hurt.",
277
+ "venue": null,
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "15": {
283
+ "title": "\\JournalTitleProceedings of the National Academy of Sciences 110, 2460\u20132465 (2013).",
284
+ "author": "YY Liu, JJ Slotine, AL Barab\u00e1si, Observability of complex systems.",
285
+ "venue": null,
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "16": {
291
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 34, 8898\u20138912 (2021).",
292
+ "author": "L Chen, Y Min, M Belkin, A Karbasi, Multiple descent: Design your own generalization curve.",
293
+ "venue": null,
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "17": {
299
+ "title": "\\JournalTitleSIAM Journal on Mathematics of Data Science 2, 1167\u20131180 (2020).",
300
+ "author": "M Belkin, D Hsu, J Xu, Two models of double descent for weak features.",
301
+ "venue": null,
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "18": {
307
+ "title": "\\JournalTitleAnnual review of sociology pp. 415\u2013444 (2001).",
308
+ "author": "M McPherson, L Smith-Lovin, JM Cook, Birds of a feather: Homophily in social networks.",
309
+ "venue": null,
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "19": {
315
+ "title": "\\JournalTitlearXiv preprint arXiv:2207.11311 (2022).",
316
+ "author": "R Wei, H Yin, J Jia, AR Benson, P Li, Understanding non-linearity in graph neural networks from the Bayesian-inference perspective.",
317
+ "venue": null,
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "20": {
323
+ "title": "\\JournalTitlearXiv preprint arXiv:2305.10391 (2023).",
324
+ "author": "A Baranwal, A Jagannath, K Fountoulakis, Optimality of message-passing architectures for sparse graphs.",
325
+ "venue": null,
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "21": {
331
+ "title": "(PMLR), pp. 3419\u20133430 (2020).",
332
+ "author": "V Garg, S Jegelka, T Jaakkola, Generalization and representational limits of graph neural networks in International Conference on Machine Learning.",
333
+ "venue": null,
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "22": {
339
+ "title": "(2021).",
340
+ "author": "R Liao, R Urtasun, R Zemel, A PAC-Bayesian approach to generalization bounds for graph neural networks in International Conference on Learning Representations.",
341
+ "venue": null,
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "23": {
347
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 34, 27043\u201327056 (2021).",
348
+ "author": "P Esser, L Chennuru Vankadara, D Ghoshdastidar, Learning theory can (sometimes) explain generalisation in graph neural networks.",
349
+ "venue": null,
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "24": {
355
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 31 (2018).",
356
+ "author": "Y Deshpande, S Sen, A Montanari, E Mossel, Contextual stochastic block models.",
357
+ "venue": null,
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "25": {
363
+ "title": "\\JournalTitleReviews of Modern Physics 65, 499 (1993).",
364
+ "author": "TL Watkin, A Rau, M Biehl, The statistical mechanics of learning a rule.",
365
+ "venue": null,
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "26": {
371
+ "title": "\\JournalTitlearXiv preprint arXiv:1710.09553 (2017).",
372
+ "author": "CH Martin, MW Mahoney, Rethinking generalization requires revisiting old ideas: statistical mechanics approaches and complex learning behavior.",
373
+ "venue": null,
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "27": {
379
+ "title": "(OpenReview.net), (2017).",
380
+ "author": "C Zhang, S Bengio, M Hardt, B Recht, O Vinyals, Understanding deep learning requires rethinking generalization in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.",
381
+ "venue": null,
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "28": {
387
+ "title": "(Springer) Vol. 2, (2009).",
388
+ "author": "T Hastie, R Tibshirani, JH Friedman, JH Friedman, The elements of statistical learning: data mining, inference, and prediction.",
389
+ "venue": null,
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "29": {
395
+ "title": "\\JournalTitleJournal of Physics A: Mathematical and General 23, L581 (1990).",
396
+ "author": "M Opper, W Kinzel, J Kleinz, R Nehl, On the ability of the optimal perceptron to generalise.",
397
+ "venue": null,
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "30": {
403
+ "title": "(Cambridge University Press), (2001).",
404
+ "author": "A Engel, C Van den Broeck, Statistical Mechanics of Learning.",
405
+ "venue": null,
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "31": {
411
+ "title": "\\JournalTitlePhysical review A 45, 6056 (1992).",
412
+ "author": "HS Seung, H Sompolinsky, N Tishby, Statistical mechanics of learning from examples.",
413
+ "venue": null,
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "32": {
419
+ "title": "\\JournalTitlePhysical review letters 72, 2113 (1994).",
420
+ "author": "M Opper, Learning and generalization in a two-layer neural network: The role of the Vapnik\u2013Chervonvenkis dimension.",
421
+ "venue": null,
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "33": {
427
+ "title": "\\JournalTitleProceedings of the National Academy of Sciences 116, 15849\u201315854 (2019).",
428
+ "author": "M Belkin, D Hsu, S Ma, S Mandal, Reconciling modern machine-learning practice and the classical bias\u2013variance trade-off.",
429
+ "venue": null,
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "34": {
435
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 33, 13939\u201313950 (2020).",
436
+ "author": "Z Liao, R Couillet, MW Mahoney, A random matrix analysis of random Fourier features: beyond the gaussian kernel, a precise phase transition, and the corresponding double descent.",
437
+ "venue": null,
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "35": {
443
+ "title": "\\JournalTitleNature communications 12, 2914 (2021).",
444
+ "author": "A Canatar, B Bordelon, C Pehlevan, Spectral bias and task-model alignment explain generalization in kernel regression and infinitely wide neural networks.",
445
+ "venue": null,
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "36": {
451
+ "title": "Vol. 32, (2018).",
452
+ "author": "Q Li, Z Han, XM Wu, Deeper insights into graph convolutional networks for semi-supervised learning in Proceedings of the AAAI conference on artificial intelligence.",
453
+ "venue": null,
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "37": {
459
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 34, 18722\u201318733 (2021).",
460
+ "author": "Y Yang, et al., Taxonomizing local versus global structure in neural network loss landscapes.",
461
+ "venue": null,
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "38": {
467
+ "title": "\\JournalTitleAI magazine 29, 93\u201393 (2008).",
468
+ "author": "P Sen, et al., Collective classification in network data.",
469
+ "venue": null,
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "39": {
475
+ "title": "\\JournalTitleJournal of Complex Networks 9, cnab014 (2021).",
476
+ "author": "B Rozemberczki, C Allen, R Sarkar, Multi-scale attributed node embedding.",
477
+ "venue": null,
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "40": {
483
+ "title": "\\JournalTitlearXiv preprint arXiv:2105.07634 (2021).",
484
+ "author": "SK Maurya, X Liu, T Murata, Improving graph neural networks with simple architecture design.",
485
+ "venue": null,
486
+ "url": null
487
+ }
488
+ },
489
+ {
490
+ "41": {
491
+ "title": "(2018).",
492
+ "author": "P Veli\u010dkovi\u0107, et al., Graph attention networks in International Conference on Learning Representations.",
493
+ "venue": null,
494
+ "url": null
495
+ }
496
+ },
497
+ {
498
+ "42": {
499
+ "title": "(PMLR), pp. 6861\u20136871 (2019).",
500
+ "author": "F Wu, et al., Simplifying graph convolutional networks in International conference on machine learning.",
501
+ "venue": null,
502
+ "url": null
503
+ }
504
+ },
505
+ {
506
+ "43": {
507
+ "title": "(PMLR), pp. 23341\u201323362 (2022).",
508
+ "author": "X Wang, M Zhang, How powerful are spectral graph neural networks in International Conference on Machine Learning.",
509
+ "venue": null,
510
+ "url": null
511
+ }
512
+ },
513
+ {
514
+ "44": {
515
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 34, 14239\u201314251 (2021).",
516
+ "author": "M He, Z Wei, H Xu, , et al., Bernnet: Learning arbitrary graph spectral filters via Bernstein approximation.",
517
+ "venue": null,
518
+ "url": null
519
+ }
520
+ },
521
+ {
522
+ "45": {
523
+ "title": "\\JournalTitleJournal of the American Statistical Association 82, 8\u201319 (1987).",
524
+ "author": "YJ Wang, GY Wong, Stochastic blockmodels for directed graphs.",
525
+ "venue": null,
526
+ "url": null
527
+ }
528
+ },
529
+ {
530
+ "46": {
531
+ "title": "\\JournalTitlePhysics reports 533, 95\u2013142 (2013).",
532
+ "author": "FD Malliaros, M Vazirgiannis, Clustering and community detection in directed networks: A survey.",
533
+ "venue": null,
534
+ "url": null
535
+ }
536
+ },
537
+ {
538
+ "47": {
539
+ "title": "(2021).",
540
+ "author": "W Lu, Learning guarantees for graph convolutional networks on the stochastic block model in International Conference on Learning Representations.",
541
+ "venue": null,
542
+ "url": null
543
+ }
544
+ },
545
+ {
546
+ "48": {
547
+ "title": "(PMLR), pp. 684\u2013693 (2021).",
548
+ "author": "A Baranwal, K Fountoulakis, A Jagannath, Graph convolution for semi-supervised classification: improved linear separability and out-of-distribution generalization in International Conference on Machine Learning.",
549
+ "venue": null,
550
+ "url": null
551
+ }
552
+ },
553
+ {
554
+ "49": {
555
+ "title": "\\JournalTitleCommunications on Pure and Applied Mathematics 75, 667\u2013766 (2022).",
556
+ "author": "S Mei, A Montanari, The generalization error of random features regression: Precise asymptotics and the double descent curve.",
557
+ "venue": null,
558
+ "url": null
559
+ }
560
+ },
561
+ {
562
+ "50": {
563
+ "title": "(PMLR), pp. 13242\u201313256 (2022).",
564
+ "author": "X Li, et al., Finding global homophily in graph neural networks when meeting heterophily in International Conference on Machine Learning.",
565
+ "venue": null,
566
+ "url": null
567
+ }
568
+ },
569
+ {
570
+ "51": {
571
+ "title": "\\JournalTitleAdvances in neural information processing systems 35, 1362\u20131375 (2022).",
572
+ "author": "S Luan, et al., Revisiting heterophily for graph neural networks.",
573
+ "venue": null,
574
+ "url": null
575
+ }
576
+ },
577
+ {
578
+ "52": {
579
+ "title": "\\JournalTitlearXiv preprint arXiv:1810.05997 (2018).",
580
+ "author": "J Gasteiger, A Bojchevski, S G\u00fcnnemann, Predict then propagate: Graph neural networks meet personalized pagerank.",
581
+ "venue": null,
582
+ "url": null
583
+ }
584
+ },
585
+ {
586
+ "53": {
587
+ "title": "\\JournalTitlearXiv preprint arXiv:2003.04078 (2020).",
588
+ "author": "R Sato, A survey on the expressive power of graph neural networks.",
589
+ "venue": null,
590
+ "url": null
591
+ }
592
+ },
593
+ {
594
+ "54": {
595
+ "title": "(2022).",
596
+ "author": "F Geerts, JL Reutter, Expressiveness and approximation properties of graph neural networks in International Conference on Learning Representations.",
597
+ "venue": null,
598
+ "url": null
599
+ }
600
+ },
601
+ {
602
+ "55": {
603
+ "title": "(2019).",
604
+ "author": "K Xu, W Hu, J Leskovec, S Jegelka, How powerful are graph neural networks? in International Conference on Learning Representations.",
605
+ "venue": null,
606
+ "url": null
607
+ }
608
+ },
609
+ {
610
+ "56": {
611
+ "title": "(PMLR), pp. 1263\u20131272 (2017).",
612
+ "author": "J Gilmer, SS Schoenholz, PF Riley, O Vinyals, GE Dahl, Neural message passing for quantum chemistry in International conference on machine learning.",
613
+ "venue": null,
614
+ "url": null
615
+ }
616
+ },
617
+ {
618
+ "57": {
619
+ "title": "\\JournalTitleTheory of Probability & Its Applications 16, 264\u2013280 (1971).",
620
+ "author": "V Vapnik, AY Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities.",
621
+ "venue": null,
622
+ "url": null
623
+ }
624
+ },
625
+ {
626
+ "58": {
627
+ "title": "(Springer science & business media), (1999).",
628
+ "author": "V Vapnik, The nature of statistical learning theory.",
629
+ "venue": null,
630
+ "url": null
631
+ }
632
+ },
633
+ {
634
+ "59": {
635
+ "title": "\\JournalTitleNeural Networks 108, 248\u2013259 (2018).",
636
+ "author": "F Scarselli, AC Tsoi, M Hagenbuchner, The Vapnik\u2013Chervonenkis dimension of graph and recursive neural networks.",
637
+ "venue": null,
638
+ "url": null
639
+ }
640
+ },
641
+ {
642
+ "60": {
643
+ "title": "(IEEE), pp. 1002\u20131009 (2013).",
644
+ "author": "S Oymak, C Thrampoulidis, B Hassibi, The squared-error of generalized lasso: A precise analysis in 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton).",
645
+ "venue": null,
646
+ "url": null
647
+ }
648
+ },
649
+ {
650
+ "61": {
651
+ "title": "\\JournalTitleIEEE Transactions on Information Theory 64, 5592\u20135628 (2018).",
652
+ "author": "C Thrampoulidis, E Abbasi, B Hassibi, Precise error analysis of regularized -estimators in high dimensions.",
653
+ "venue": null,
654
+ "url": null
655
+ }
656
+ },
657
+ {
658
+ "62": {
659
+ "title": "\\JournalTitleFoundations and Trends in Machine learning 3, 1\u2013122 (2011).",
660
+ "author": "S Boyd, et al., Distributed optimization and statistical learning via the alternating direction method of multipliers.",
661
+ "venue": null,
662
+ "url": null
663
+ }
664
+ },
665
+ {
666
+ "63": {
667
+ "title": "\\JournalTitleIEEE Transactions on Information Theory (2022).",
668
+ "author": "H Hu, YM Lu, Universality laws for high-dimensional learning with random features.",
669
+ "venue": null,
670
+ "url": null
671
+ }
672
+ },
673
+ {
674
+ "64": {
675
+ "title": "(PMLR), pp. 410\u2013438 (2018).",
676
+ "author": "A El Alaoui, MI Jordan, Detection limits in the high-dimensional spiked rectangular model in Conference On Learning Theory.",
677
+ "venue": null,
678
+ "url": null
679
+ }
680
+ },
681
+ {
682
+ "65": {
683
+ "title": "\\JournalTitleAdvances in Neural Information Processing Systems 33, 14915\u201314926 (2020).",
684
+ "author": "J Barbier, N Macris, C Rush, All-or-nothing statistical and computational phase transitions in sparse spiked matrix estimation.",
685
+ "venue": null,
686
+ "url": null
687
+ }
688
+ },
689
+ {
690
+ "66": {
691
+ "title": "(PMLR), pp. 6874\u20136883 (2020).",
692
+ "author": "F Mignacco, F Krzakala, Y Lu, P Urbani, L Zdeborov\u00e1, The role of regularization in classification of high-dimensional noisy Gaussian mixture in International Conference on Machine Learning.",
693
+ "venue": null,
694
+ "url": null
695
+ }
696
+ },
697
+ {
698
+ "67": {
699
+ "title": "\\JournalTitleAnnual Review of Condensed Matter Physics 11, 501\u2013528 (2020).",
700
+ "author": "Y Bahri, et al., Statistical mechanics of deep learning.",
701
+ "venue": null,
702
+ "url": null
703
+ }
704
+ },
705
+ {
706
+ "68": {
707
+ "title": "\\JournalTitleInformation and Inference: A Journal of the IMA 6, 125\u2013170 (2017).",
708
+ "author": "Y Deshpande, E Abbe, A Montanari, Asymptotic mutual information for the balanced binary stochastic block model.",
709
+ "venue": null,
710
+ "url": null
711
+ }
712
+ },
713
+ {
714
+ "69": {
715
+ "title": "\\JournalTitleCombinatorica 38, 665\u2013708 (2018).",
716
+ "author": "E Mossel, J Neeman, A Sly, A proof of the block model threshold conjecture.",
717
+ "venue": null,
718
+ "url": null
719
+ }
720
+ },
721
+ {
722
+ "70": {
723
+ "title": "\\JournalTitlearXiv preprint arXiv:2306.07948 (2023).",
724
+ "author": "O Duranthon, L Zdeborov\u00e1, Optimal inference in contextual stochastic block models.",
725
+ "venue": null,
726
+ "url": null
727
+ }
728
+ },
729
+ {
730
+ "71": {
731
+ "title": "\\JournalTitlePhysical Review E 90, 052802 (2014).",
732
+ "author": "P Zhang, C Moore, L Zdeborov\u00e1, Phase transitions in semisupervised clustering of sparse networks.",
733
+ "venue": null,
734
+ "url": null
735
+ }
736
+ },
737
+ {
738
+ "72": {
739
+ "title": "(World Scientific Publishing Company) Vol. 9, (1987).",
740
+ "author": "M M\u00e9zard, G Parisi, MA Virasoro, Spin glass theory and beyond: An introduction to the replica method and its applications.",
741
+ "venue": null,
742
+ "url": null
743
+ }
744
+ },
745
+ {
746
+ "73": {
747
+ "title": "\\JournalTitleRandom Structures & Algorithms 21, 197\u2013204 (2002).",
748
+ "author": "M Talagrand, Gaussian averages, Bernoulli averages, and Gibbs\u2019 measures.",
749
+ "venue": null,
750
+ "url": null
751
+ }
752
+ },
753
+ {
754
+ "74": {
755
+ "title": "(Elsevier), Vol. 42, pp. 215\u2013222 (2006).",
756
+ "author": "P Carmona, Y Hu, Universality in Sherrington\u2013Kirkpatrick\u2019s spin glass model in Annales de l\u2019Institut Henri Poincare (B) Probability and Statistics.",
757
+ "venue": null,
758
+ "url": null
759
+ }
760
+ },
761
+ {
762
+ "75": {
763
+ "title": "(Springer Science & Business Media), (2013).",
764
+ "author": "D Panchenko, The Sherrington-Kirkpatrick model.",
765
+ "venue": null,
766
+ "url": null
767
+ }
768
+ },
769
+ {
770
+ "76": {
771
+ "title": "\\JournalTitleAdvances in neural information processing systems 30 (2017).",
772
+ "author": "J Pennington, P Worah, Nonlinear random matrix theory for deep learning.",
773
+ "venue": null,
774
+ "url": null
775
+ }
776
+ },
777
+ {
778
+ "77": {
779
+ "title": "(2022).",
780
+ "author": "N Keriven, Not too little, not too much: A theoretical analysis of graph (over)smoothing in Advances in Neural Information Processing Systems, eds. AH Oh, A Agarwal, D Belgrave, K Cho.",
781
+ "venue": null,
782
+ "url": null
783
+ }
784
+ },
785
+ {
786
+ "78": {
787
+ "title": "(American Mathematical Soc.) No. 1, (1992).",
788
+ "author": "DV Voiculescu, KJ Dykema, A Nica, Free random variables.",
789
+ "venue": null,
790
+ "url": null
791
+ }
792
+ },
793
+ {
794
+ "79": {
795
+ "title": "\\JournalTitlearXiv preprint arXiv:1401.7802 (2014).",
796
+ "author": "T Dupic, IP Castillo, Spectral density of products of Wishart dilute random matrices. part i: the dense case.",
797
+ "venue": null,
798
+ "url": null
799
+ }
800
+ },
801
+ {
802
+ "80": {
803
+ "title": "\\JournalTitleIEEE signal processing magazine 30, 83\u201398 (2013).",
804
+ "author": "DI Shuman, SK Narang, P Frossard, A Ortega, P Vandergheynst, The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains.",
805
+ "venue": null,
806
+ "url": null
807
+ }
808
+ },
809
+ {
810
+ "81": {
811
+ "title": "\\JournalTitleProceedings of the IEEE 106, 808\u2013828 (2018).",
812
+ "author": "A Ortega, P Frossard, J Kova\u010devi\u0107, JM Moura, P Vandergheynst, Graph signal processing: Overview, challenges, and applications.",
813
+ "venue": null,
814
+ "url": null
815
+ }
816
+ }
817
+ ],
818
+ "url": "http://arxiv.org/html/2212.13069v3"
819
+ }
20240123/2301.02424v2.json ADDED
@@ -0,0 +1,130 @@
1
+ {
2
+ "title": "Conformal Loss-Controlling Prediction",
3
+ "abstract": "Conformal prediction is a learning framework controlling prediction coverage of prediction sets, which can be built on any learning algorithm for point prediction. This work proposes a learning framework named conformal loss-controlling prediction, which extends conformal prediction to the situation where the value of a loss function needs to be controlled. Different from existing works about risk-controlling prediction sets and conformal risk control with the purpose of controlling the expected values of loss functions, the proposed approach in this paper focuses on the loss for any test object, which is an extension of conformal prediction from miscoverage loss to some general loss. The controlling guarantee is proved under the assumption of exchangeability of data in finite-sample cases and the framework is tested empirically for classification with a class-varying loss and statistical postprocessing of numerical weather forecasting applications, which are introduced as point-wise classification and point-wise regression problems. All theoretical analysis and experimental results confirm the effectiveness of our loss-controlling approach.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Prediction sets convey uncertainty or confidence information for users, which is more preferred than prediction points, especially for sensitive applications such as medicine, finance and weather forecasting [1 ###reference_1###] [2 ###reference_2###] [3 ###reference_3###]. One example is constructing prediction intervals with confidence for regression problems, where the statistical guarantee is expected such that the true labels are covered in probability [4 ###reference_4###]. Nowadays, many researches have been proposed to build set predictors. Bayesian methods [5 ###reference_5###] and Gaussian process [6 ###reference_6###] are straightforward ways of producing prediction sets based on posterior distributions. However, their prediction sets can be misleading if the prior assumptions are not correct, which is often the case since the prior is usually unknown in applications [7 ###reference_7###] [8 ###reference_8###]. Other statistical methods such as bootstrap-based methods [9 ###reference_9###] and quantile regression [10 ###reference_10###] are also able to output prediction sets for test labels, but their coverage guarantees can only be obtained in the asymptotic setting, and the prediction sets may fail to cover the labels frequently in finite-sample cases. Different from these works, conformal prediction (CP), a promising non-parametric learning framework aiming to provide reliable prediction sets, can provide the finite-sample coverage guarantee only under the assumption of exchangeability of data samples [11 ###reference_11###]. This property of validity has been proved both theoretically and empirically in many works and applied to many areas [12 ###reference_12###] [13 ###reference_13###]. Besides, many researches extend CP to more general cases, such as conformal prediction for multi-label learning [14 ###reference_14###] [15 ###reference_15###], functional data [16 ###reference_16###] [17 ###reference_17###], few-shot learning [18 ###reference_18###], distribution shift [19 ###reference_19###] [20 ###reference_20###] and time series [21 ###reference_21###] [22 ###reference_22###].\nHowever, the researches about set predictors mentioned above mainly make promise about the coverage of prediction sets, i.e., they only control the miscoverage loss of set predictors, which can not be applied to other broad applications concerning controlling general losses. For example, consider classifying MRI images into several diagnostic categories [23 ###reference_23###], where different categories cause different consequence. In this setting, the loss of the true label being not included in the prediction set should be dependent on , which is the problem of classification with a class-varying loss. Another example is tumor segmentation [24 ###reference_24###]. Instead of making prediction sets to overly cover the pixels of tumor, one may care more about controlling other losses such as false negative rate. Other practical settings include controlling the projective distance for protein structure prediction, controlling a hierarchical distance for hierarchical classification and controlling F1-score for open-domain question answering [23 ###reference_23###] [24 ###reference_24###]. In these applications, the prediction sets with the coverage guarantee are not useful, as they are not constructed with controlling these general losses in mind.\nTo tackle this issue, two works for extending the finite-sample coverage guarantee of CP have been proposed recently. 
One is the work of conformal prediction sets with limited false positives (CPS-LFP) [25 ###reference_25###]. It employs DeepSets [26 ###reference_26###] to estimate the expected value or the cumulative distribution function of the number of false positives, and then uses calibration data to control the number of false positives of prediction sets. Conformal risk control (CRC) [24 ###reference_24###] extends CP to prediction tasks of controlling the expected value of a general loss based on finding the optimal parameter for nested prediction sets. The spirit is to employ calibration data to obtain the information of the upper bound of the expected value of the loss function at hand and control the expected value for the test object, whose main idea was originally proposed from their pioneer work named risk-controlling prediction sets (RCPS) [23 ###reference_23###]. CRC and RCPS aim to control the expected value instead of the value of a general loss for set predictors. By contrast, CPS-LFP can control the value of the loss related to false positives, but it is not general enough.\nIn some applications, controlling the value of a general loss can be more preferred than controlling the expected value, since one may only care about the loss value for a specific test object, just like the coverage guarantee made by CP and the -FP validity acheived by CPS-LFP. Therefore, this paper extends CP to the situation where the value of a general loss needs to be controlled, which has been not considered in the literature to our best knowledge. Our approach is similar to CRC with the main difference being that we focus on finding the optimal parameter for nested prediction sets to control the loss. Therefore, we also concentrate on inductive conformal prediction [27 ###reference_27###] or split conformal prediction [28 ###reference_28###] process like CRC.\nRecall that inductive conformal prediction makes the coverage guarantee as follows,\nwhere is the significance level preset by users, is the set predictor made by CP based on calibration data , is the test feature-response pair, and the randomness is from both and .\nBy comparison, conformal loss-controlling prediction (CLCP), the learning framework proposed in this paper, provides the controlling guarantee as follows,\nwhere is a loss function satisfying some monotonic conditions as in [24 ###reference_24###], is the preset level of loss, is a set predictor usually constructed by an underlying algorithm and a parameter . The optimal is obtained based on , and calibration data. The controlling guarantee needs two levels and to be chosen by users, which is similar with that in [23 ###reference_23###], i.e., CLCP guarantees that the prediction loss is not greater than with high probability when is small such as . If is defined based on false positives for multi-label classification, the controlling guarantee above can be seen as the ()-FP validity defined in Definition 4.2 in [25 ###reference_25###].\nWe prove the controlling guarantee for distribution-free and finite-sample settings with the assumption of exchangeability of data samples. The main idea is that we find the to make the quantile of the loss values on calibration data not greater than , which is inspired by CRC focusing on making the mean of the loss values not greater than . Since the property of the set predictors and loss functions used in CLCP is the same as that used in CRC, CLCP can also be applied to many applications concerning controlling general losses. 
These applications include not only the areas about classification and image segmentation, but also the field of graph signal processing [29 ###reference_29###] [30 ###reference_30###], for example, protein structure prediction.\nThe proposed CLCP is a novel learning framework compared to existing researches. Different from those aiming to control the value of the miscoverage loss, CLCP is a more general approach for the purpose of controlling the value of a general loss. Besides, CLCP can be widely used for many situations whereas CPS-LFP is specifically designed for controlling the loss related to false negatives. Also, CLCP differs from CRC and RCPS as their purpose is to control the expected value instead. Therefore, in the experimental section, we concentrate on designing the experiments to verify the theoretical conclusion for different applications, as the idea of controlling general losses for set predictors is original. To be specific, we test our proposed CLCP in classification with a class-varying loss introduced in [23 ###reference_23###], and postprocessing of numerical weather forecasts, which we consider as point-wise classification and point-wise regression problems. The experimental results empirically confirm the theoretical guarantee we prove in this paper.\nIn summary, the main contributions of this paper are:\nA learning framework named conformal loss-controlling prediction (CLCP) is proposed for controlling the prediction loss for the test object. The approach is simple to implement and can be built on any machine learning algorithm for point prediction.\nThe controlling guarantee is proved mathematically for finite-sample cases with the exchangeability assumption, without any further assumption for data distribution.\nThe controlling guarantee is empirically verified by classification with a class-varying loss and weather forecasting problems, which confirms the effectiveness of CLCP.\nThe rest of this paper is organized as follows. Section II reviews inductive conformal prediction and conformal risk control. Section III introduces conformal loss-controlling prediction and its theoretical guarantee. Section IV conducts experiments to test the proposed method and the conclusions are drawn in Section V."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Inductive Conformal Prediction and Conformal Risk Control",
15
+ "text": "This section reviews inductive conformal prediction and conformal risk control. Throughout this paper, denotes data drawn exchangeably from on , where is the calibration dataset and is the test object-response pair. We use lower-case letter to represent the realization of .\nThe set-valued function and loss function considered in this paper are the same as those in [24 ###reference_24###] and [23 ###reference_23###], which we formally introduce as follows.\nLet be a set-valued function with a parameter , where represents some space of sets and is the set of real numbers. Taking single-label classification for example, can be the power set of . For binary image segmentation, can be equal to as the space of all possible results of image segmentation, where the sets here stand for all of the pixels of positive class for the image.\nWe also introduce the nesting property for prediction sets and losses as in [23 ###reference_23###] as follows. For each realization of input object , we assume that satisfies the following nesting property:\nFurthermore, with and being two subsets of , we assume that is a loss function respecting the following nesting property for each realization of response :\nwhere is the upper bound of the loss function."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Inductive Conformal Prediction",
21
+ "text": "Inductive conformal prediction (ICP) is a computationally efficient version of the original conformal prediction approach. It starts with any measurable function named nonconformity measure and obtains nonconformity scores as\nfor .\nThen, with the exchangeable assumption and a preset , one can conclude that\nwhere is the quantile of [19 ###reference_19###]. Therefore, the prediction set made by ICP is\nwhich satisfies\nThe nonconformity measure is often defined based on a point prediction model learned from some other training samples, each of which is also drawn from .\nHere is an example of constructing prediction sets with CP. For a classification problem with classes, one can first train a classifier with the th output being the estimation of the probability of the th class, and calculate the nonconformity scores as\nwhere is the th output of , if stands for the th class. Therefore, the corresponding prediction set for an input object is\nwhich indicates that if the estimated probability of th class is not less than ."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Conformal Risk Control",
27
+ "text": "Different from conformal prediction, CRC starts with a set-valued function with the nesting property, whose approach is inspired by nested conformal prediction [31 ###reference_31###] and was first proposed in the researches about risk-controlling prediction sets.\nAssume one has a way of constructing a set-valued function with the nesting property of formula (1). Given a loss function with the nesting property of formula (2), the purpose of CRC is to find such that\ni.e., the expected loss or the risk is not greater than .\nTo do so, CRC first calculates as\nwith the fact that is a monotone decreasing function of based on the nesting properties. Then, CRC searches for using the following equation,\nwhere is an estimation of the risk on calibration data and is introduced to make the estimation not overconfident.\nThese two steps of CRC are too simple that one may surprise about its theoretical conclusion that with the assumption of exchangeability of data samples, the prediction set\n obtained by CRC satisfies formula (3), which has been also proved empirically in [24 ###reference_24###]. CRC extends CP from controlling the expected value of miscoverage loss to some general loss, which can be applied to the cases where is beyond real numbers or vectors, such as images, fields and even graphs.\nAfter tackling the theoretical issue, the problem for CRC is how to construct . Here, we also give an example of a classification problem with classes. In fact, with the same notations of the example in Section II-A, CRC can construct the prediction set as\nTherefore, as long as satisfies formula (2), such as is the indicator of miscoverage, CRC guarantees to control the risk as formula (3)."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Conformal Loss-Controlling Prediction And Its Theoretical Analysis",
33
+ "text": "This section introduces the approach of CLCP and its theoretical analysis. CLCP also has two steps like CRC, and the main difference between them is that CLCP focuses on whether the estimation of the quantile of the losses is not greater than while CRC concentrates on whether the mean of the losses not greater than . The controlling of the quantile of the losses makes CLCP able to control the value of a general loss by employing the probability inequation derived from the exchangeability assumption, which is also employed by ICP if the loss is seen as the nonconformity score.\nSuppose one has a way of constructing a set-valued function with the nesting property of formula (1), which can be the same as that used in CRC. Here, we assume that the parameter is selected from a discrete set , such as from to with a step size , which avoids us from the assumption of right continuous for the loss function in theoretical analysis, and is also reasonable since we actually search for with some step size in practice [24 ###reference_24###] [23 ###reference_23###]. Besides, the latest paper about risk-controlling prediction also makes this discrete assumption for general cases [32 ###reference_32###]. After determining and , CLCP first calculates on calibration data as formula (4). Then, for any preset and , CLCP searches for such that\nwith being the quantile of . The approach of CLCP is summarised in Algorithm 1, which is easy to implement.\nNext, we introduce the definition of -loss-controlling set predictors and then prove our theoretical conclusion about CLCP.\nGiven a loss function and a random sample , a random set-valued function whose realization is in the space of functions is a -loss-controlling set predictor if it satisfies that\nwhere the randomness is both from and .\nAfter all these preparations, we can prove in Theorem 1 that constructed by CLCP is a -loss-controlling set predictor.\nSuppose are data drawn exchangeably from on , is a set-valued function satisfying formula (1) with the parameter taking values from a discrete set , is a loss function satisfying formula (2) and is defined as formula (4). For any preset , if also satisfies the following condition,\nthen for any , we have\nwhere is defined as formula (5).\nLet be the quantile of , and\ndefine as\nSimilarly, let be the quantile of , and\nwe have\nAs and formula (6) holds, and are well defined.\nSince is the upper bound of , by definition, we have\nwhich leads to\nas and satisfy the nesting properties of formula (1) and (2).\nSince is dependent on the whole dataset , are exchangeable variables, which leads to\nas is just the corresponding quantile (See the proof of Lemma 1 in [19 ###reference_19###]).\nCombining the definition of , formula (8) and (9), we have\nwhich completes the proof.\n\u220e\nAt the end of this section, we show that CP can be seen as a special case of CLCP from the following viewpoint. Suppose is constructed by a nonconformity score , which is defined as\nand is the miscoverage loss such that\nwhere is the indicator function. In this case, can only be or as the loss can only be these two numbers. Besides, only is meaningful, which means that one wants to control the miscoverage. For CLCP, let be an arithmetic sequence whose common difference, minimum and maximum are , and respectively and set . 
By definition, λ̂ can then be written as the smallest λ ∈ Λ that is not less than the ⌈(1 − δ)(n + 1)⌉-th smallest value of s_1, …, s_n,\nwhere s_i is the nonconformity score of the i-th calibration example for CP.\nIn comparison, referring to [24 ###reference_24###], the optimal threshold for CP is exactly the ⌈(1 − δ)(n + 1)⌉-th smallest nonconformity score.\nTherefore, if Λ covers the range of the nonconformity scores, λ̂ differs from the CP threshold by at most Δλ, which implies that the prediction sets of CP and CLCP are nearly the same if Δλ is small enough. In summary, if C_λ and L have the special forms above and Λ includes the upper and lower bounds of the nonconformity scores with Δλ being small enough to be ignored, CP can be seen as a special case of CLCP."
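A sketch of the two-step CLCP calibration (Algorithm 1) described above; the quantile rank uses n + 1, which matches the quantile of the calibration losses augmented with the bound B, and the synthetic monotone losses are illustrative:

```python
# Sketch of the CLCP calibration step: pick the smallest feasible lambda.
import numpy as np

def clcp_lambda(losses_by_lambda, lambdas, alpha, delta):
    """Return the smallest lambda whose adjusted (1 - delta)-quantile of
    calibration losses is <= alpha."""
    n = losses_by_lambda.shape[0]
    k = int(np.ceil((1 - delta) * (n + 1)))
    if k > n:
        raise ValueError("delta too small for this amount of calibration data")
    q = np.sort(losses_by_lambda, axis=0)[k - 1]  # per-lambda adjusted quantile
    ok = q <= alpha
    if not ok.any():
        raise ValueError("no feasible lambda for the preset alpha and delta")
    return lambdas[np.argmax(ok)]

# Hypothetical usage with synthetic losses that decrease in lambda.
rng = np.random.default_rng(1)
lambdas = np.linspace(0.0, 1.0, 101)
losses = (1 - lambdas[None, :]) * rng.random((500, 1))
print(clcp_lambda(losses, lambdas, alpha=0.2, delta=0.1))
```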
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "IV Experiments",
39
+ "text": "This section conducts the experiments to empirically test the approach of CLCP. First, we build CLCP for the classification problem with a class-varying loss introduced in [23 ###reference_23###]. Then, we focus on two types of weather forecasting applications, which can be seen as point-wise classification and point-wise regression problems respectively. All experiments were coded in Python [33 ###reference_33###]. The statistical learning methods used in Section IV-A were implemented using Scikit-learn [34 ###reference_34###] and the deep learning methods used in Section IV-B and Section IV-C were implemented with Pytorch [35 ###reference_35###]."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "IV-A CLCP for classification with a class-varying loss",
45
+ "text": "We collected binary or multiclass classification datasets from UCI repositories [36 ###reference_36###] whose information is summarized in Table I. The problem is to make the prediction sets of labels controlling the following loss\nwhere is the loss for being not in the prediction set . The loss for each label is generated uniformly on like [23 ###reference_23###]. Support vector machine (SVM) [37 ###reference_37###], neural network (NN) [38 ###reference_38###] and random forests (RF) [39 ###reference_39###] were employed as the underlying algorithms separately to construct prediction sets based on CLCP. The prediction set is constructed as\nwhere is the estimated probability of the observation being th class by the corresponding underlying algorithm. For each dataset, we used of the data for testing and and of the remaining data for training and calibration respectively. Based on the training data, we selected the meta-parameters with three-fold cross-validation and used the optimal meta-parameters to train the classifiers. The regularization parameter of SVM was selected from , and the learning rate and the epochs of NN were selected from and . The number of trees of RF were selected from and the partition criterion was either gini or entropy. After training, we used the trained classifiers and the calibration data to search for with Algorithm 1 and construct the final set predictors. All of the features were normalized to by min\u2013max normalization and for each dataset, the experiments were conducted times and the average results were recorded.\nThe bar plots in Fig. 1 and Fig. 2 show the experimental results for public datasets with and . The results in Fig. 1 concern about the frequency of the prediction losses being greater than on test set, which is the estimated probability of\nand should be near or lower than empirically due to formula (7).\nThe bar plots of Fig. 1 demonstrate that the frequency of the prediction losses being greater than is near or below , which verifies the conclusion of Theorem 1.\nThe bar plots of Fig. 2 show the average sizes of prediction sets for different , describing the informational efficiency of the prediction sets. Changing can effectively change the average size of prediction sets and changing may slightly change average size (such as the results for wine-quality-red). Although many prediction sets are meaningful with average sizes being near , the prediction sets for the dataset contrac may be not useful, since no matter how to change and , the average sizes of the prediction sets are all near or above , whereas the number of classes of contrac is . Thus, how to construct efficient prediction sets in the learning framework of CLCP is worth exploring for further researches.\nCombining Fig. 1 and Fig. 2, we observe that different classifiers can perform differently for different datasets, which indicates that the underlying algorithm affects the performance and the model selection approach is necessary for CLCP.\n###figure_1### ###figure_2###"
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "IV-B CLCP for high-impact weather forecasting",
51
+ "text": "###figure_3### ###figure_4### ###figure_5### The remaining experiments apply CLCP to weather forecasting problems. Here we concentrate on postprocessing of the forecasts made by numerical weather prediction (NWP) models [40 ###reference_40###] [41 ###reference_41###]. NWP models use equations of atmospheric dynamics and estimations of current weather conditions to do weather forecasting, which is the mainstream weather forecasting technique nowadays especially for forecasting beyond hours. Many errors affect the performance of NWP models, such as the estimation errors of initial conditions and the approximation errors of NWP models, leading to the research topic about postprocessing the forecasts of NWP models. Most postprocessing methods are built on some learning process, which takes the forecasts of NWP models as inputs and the observations of weather elements or events as outputs.\nIn this paper, we use CLCP to postprocess the ensemble forecasts with the control forecast and perturbed forecasts issued by the NWP model from European Centre for Medium-Range Weather Forecasts (ECMWF) [42 ###reference_42###], which are obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset [43 ###reference_43###]. We focus on -m maximum temperature and minimum temperature between the forecast lead times of nd hour and th hour with the forecasts initialized at UTC. The forecast fields are grided with the resolution of and the corresponding label fields with the same resolution are extracted from the ERA5 reanalysis data [44 ###reference_44###].\nThe area ranges from E to E in longitude and from N to N in latitude, covering the main parts of North China, East China and Central China, whose grid size is . The ECMWF forecast data and ERA5 reanalysis data are collected from to ( years).\nWe first consider high-impact weather forecasting, which is to forecast whether a high-impact weather exists for each grid and can be seen as a point-wise classification problem or image segmentation problem for computer vision. The high-impact weather we consider is whether the -m maximum temperature is above or the -m minimum temperature is below for each grid. These two cases are treated as high temperature weather or low temperature weather in China, which make meteorological observatories issue high temperature warning or low temperature warning respectively.\nThe prediction sets and the loss function used for high-impact weather forecasting are the same as those for image segmentation in [24 ###reference_24###].\nTaking the ensemble forecast fields of the NWP model as input , the corresponding label is a set of grids having high-impact weather, which can be seen as a segmentation problem for high-impact weather. Therefore, we first train a segmentation neural network , where is the estimated probability of the grid having high-impact weather. Then the set-valued function can be constructed as\nand the loss function is\nwhich measures the ratio of the prediction sets failing to do the warning. We use CLCP with the prediction set and the loss function above to do high temperature and low temperature forecasting respectively."
52
+ },
53
+ {
54
+ "section_id": "4.2.1",
55
+ "parent_section_id": "4.2",
56
+ "section_name": "IV-B1 Dataset for high temperature forecasting",
57
+ "text": "The reanalysis fields of -m maximum temperature were collected from ERA5 and the label fields were calculated based on whether the -m maximum temperature is above . To make the loss function take finite values, we only collected the data whose label fields have at least one high temperature grid to do this empirical study, which resulted in samples in total, i.e., ensemble forecasts from the NWP model of ECMWF and corresponding label fields calculated from ERA5. We name this dataset as HighTemp."
58
+ },
59
+ {
60
+ "section_id": "4.2.2",
61
+ "parent_section_id": "4.2",
62
+ "section_name": "IV-B2 Dataset for low temperature forecasting",
63
+ "text": "The dataset for testing CLCP for low temperature weather forecasting was constructed in a similar way. The reanalysis fields of -m minimum temperature were collected from ERA5 and the label fields were calculated based on whether the -m minimum temperature is below . We only collected the data whose label fields have at least one low temperature grid to do this empirical study, which resulted in samples in total. We name this dataset as LowTemp.\nFor each dataset, the same process was used to conduct the experiment as Section IV-A , i.e., all forecasts from the NWP model were normalized to by min\u2013max normalization, and we used of the data for testing and and of the remaining data for training and calibration respectively. We employed two fully convolutional neural networks [45 ###reference_45###] for binary image segmentation as our underlying algorithms. One was U-Net [46 ###reference_46###] with the same structure as that in [47 ###reference_47###], whose numbers of hidden feature maps were all set to . The other was the naive deep neural network (nDNN) with the same encoder-decoder structure as the U-Net without skip-connections, i.e., the U-Net removing skip-connections. We use these two neural networks to show that the design of the underlying algorithm is necessary for better performance, as U-Net fuses multi-scale features and nDNN does not. The data for training U-Net and nDNN were further partitioned into the validation part () for model selection and proper training part () for updating the parameters. Adam optimization [48 ###reference_48###] was used for training. The learning rate was set to and the number of epochs was set to . After training epochs, the model with lowest binary cross entropy on validation data was used for formula (10) to construct prediction sets, where is searched from to with step size . The experiments of using CLCP for the loss function as formula (11) were conducted times and the prediction results on test set are shown in Fig. 3, Fig. 4 and Fig. 5.\nFig. 3 also shows the bar plots of the frequencies of the prediction losses being greater than for and . Four columns stand for the cases where and respectively. It can be seen that for the two datasets HighTemp and LowTemp, all bars are near or below the preset , which verifies formula (7) empirically. Fig. 4 further shows the distributions of the losses for different and different using boxen plots, which contain more information than box plots by drawing narrow boxes for tails. It can be seen that larger and lead to larger losses, which is reasonable since large and relax the constraint on prediction losses. We measure the informational efficiency of the prediction set using its normalized size defined as , where and are the numbers of the vertical and the horizontal grids respectively. The distributions of normalized sizes in Fig. 5 show that U-Net is more informationally efficient than nDNN, which indicates that design of the underlying algorithm is important for CLCP. Different and lead to different normalized sizes, implying the trade-off among the preset loss level , confidence level and informational efficiency of the prediction sets. By choosing and properly, the prediction sets of CLCP can have reasonable sizes. 
Also, we can see that forecasting low temperature is somehow easier than high temperature with the fact that for the same and , the normalized sizes of forecasting low temperature are distributed lower than the ones of forecasting high temperature, indicating the need of design of the underlying algorithms to improve performance for forecasting high temperature."
64
+ },
65
+ {
66
+ "section_id": "4.3",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-C CLCP for maximum temperature and minimum temperature forecasting",
69
+ "text": "###figure_6### ###figure_7### ###figure_8### This section focuses on using CLCP to forecast the -m maximum temperature or minimum temperature value for each grid, which is a point-wise regression problem or image-to-image regression problem. To construct the prediction sets, we follow the procedure proposed in [49 ###reference_49###] and train the neural network with output channels jointly predicting the point-wise , and quantiles of the fields using quantile regression [10 ###reference_10###] [49 ###reference_49###], which are denoted by , and . Then the prediction set is equal to\nwhere\nand is a point-wise operator making and at least . This prediction set is a prediction band for the output field, whose prediction interval at grid is\nwith the point-wise width being an increasing function of . This construction was proposed in [49 ###reference_49###] for image-to-image regression and we use the same loss function in [49 ###reference_49###] measuring miscoverage rate of a prediction band for a field , which can be formalized as\nwhere is the prediction interval at grid for prediction band .\nAll of the data collected from to were used, leading to samples for each forecasting application and the datasets are named as MaxTemp and MinTemp respectively.\nThe experimental design is the same as that in Section IV-B, except that we also normalized the label for each grid to by min\u2013max normalization, used quantile loss for model selection and we searched for with two steps. First we found two values and from such that and . Then we searched for from values starting with and ending with using a common step size. The experimental results are recorded in Fig. 6, Fig 7 and Fig 8.\nAlthough the set predictors and the loss function used in this section are different from those in Section IV-B, the experimental results and conclusions are similar. From Fig. 6, we can see that the frequencies of the prediction losses being greater than are controlled by , which also verifies formula (7) empirically. Larger and lead to larger losses, which is shown in Fig. 7.\nHere we use the following average interval length\nto measure the informational efficiency of the prediction set and Fig. 8 also depicts the trade-off among the preset loss level , confidence level and informational efficiency of the prediction sets and indicates that better design of underlying algorithms leads to better performance."
70
+ },
71
+ {
72
+ "section_id": "5",
73
+ "parent_section_id": null,
74
+ "section_name": "Conclusion",
75
+ "text": "This paper extends conformal prediction to the situation where the value of a loss function needs to be controlled, which is inspired by risk-controlling prediction sets and conformal risk control approaches. The loss-controlling guarantee is proved in theory with the assumption of exchangeability and is empirically verified for different kinds of applications including classification with a class-varying loss and weather forecasting. Different from conformal prediction, conformal loss-controlling prediction approach proposed in this paper has two preset parameters and , which guarantees that the prediction loss is not greater than with confidence . Both parameters impose restrictions on prediction sets and should be set based on specific applications. Despite loss-controlling guarantee, informational efficiency of the prediction sets built by conformal loss-controlling prediction is highly related to underlying algorithms, which has been shown in empirical studies. Since this is a rather new topic, the underlying algorithms and the way of constructing set predictors are inherited from conformal risk control. This leaves the important question on how to build informationally efficient set predictors in an optimal way, which is one of our further researches in the future."
76
+ }
77
+ ],
78
+ "appendix": [],
79
+ "tables": {
80
+ "1": {
81
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Datasets from UCI Repositories</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:156.1pt;height:264.6pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-33.5pt,56.7pt) scale(0.7,0.7) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.2\">Examples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.3\">Dimensionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.4\">Classes</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T1.1.1.2.1.1\">bc-wisc-diag</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.2\">569</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.3\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.3.2.1\">car</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.2\">1728</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.3\">6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.4\">4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.4.3.1\">chess-kr-kp</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.2\">3196</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.3\">36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.5.4.1\">contrac</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.2\">1473</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.3\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.6.5.1\">credit-a</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.2\">690</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.3\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.7.6.1\">credit-g</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.2\">1000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.3\">20</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.8.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.8.7.1\">ctg-10classes</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.2\">2126</td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S4.T1.1.1.8.7.3\">21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.4\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.9.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.9.8.1\">ctg-3classes</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.2\">2126</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.3\">21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.10.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.10.9.1\">haberman</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.2\">306</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.3\">3</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.11.10\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.11.10.1\">optical</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.2\">5620</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.3\">62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.4\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.12.11\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.12.11.1\">phishing-web</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.2\">11055</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.3\">30</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.13.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.13.12.1\">st-image</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.2\">2310</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.3\">18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.4\">7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.14.13\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.14.13.1\">st-landsat</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.2\">6435</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.3\">36</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.4\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.15.14\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.15.14.1\">tic-tac-toe</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.2\">958</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.3\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.16.15\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.16.15.1\">wall-following</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.2\">5456</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.3\">24</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.4\">4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.17.16\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.17.16.1\">waveform</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.2\">5000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.3\">21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.18.17\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.18.17.1\">waveform-noise</th>\n<td class=\"ltx_td 
ltx_align_center\" id=\"S4.T1.1.1.18.17.2\">5000</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.3\">40</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.4\">3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.19.18\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.19.18.1\">wilt</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.2\">4839</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.3\">5</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.20.19\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T1.1.1.20.19.1\">wine-quality-red</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.2\">1599</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.3\">11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.4\">6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.21.20\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T1.1.1.21.20.1\">wine-quality-white</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.2\">4898</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.3\">11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.4\">7</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
82
+ "capture": "TABLE I: Datasets from UCI Repositories"
83
+ }
84
+ },
85
+ "image_paths": {
86
+ "1": {
87
+ "figure_path": "2301.02424v2_figure_1.png",
88
+ "caption": "Figure 1: Bar plots of the frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for classification with a class-varying loss. The first row corresponds to \u03b1=0.1\ud835\udefc0.1\\alpha=0.1italic_\u03b1 = 0.1 and the second row corresponds to \u03b1=0.2\ud835\udefc0.2\\alpha=0.2italic_\u03b1 = 0.2. Different columns represent different classifiers. All bars are near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4, which confirms the controlling guarantee of CLCP empirically.",
89
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/classification_validity.jpg"
90
+ },
91
+ "2": {
92
+ "figure_path": "2301.02424v2_figure_2.png",
93
+ "caption": "Figure 2: Bar plots of the average sizes of prediction sets vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for classification with a class-varying loss. The first row corresponds to \u03b1=0.1\ud835\udefc0.1\\alpha=0.1italic_\u03b1 = 0.1 and the second row corresponds to \u03b1=0.2\ud835\udefc0.2\\alpha=0.2italic_\u03b1 = 0.2. Different columns represent different classifiers. The plots demonstrate the information in prediction sets. In general, large \u03b4\ud835\udeff\\deltaitalic_\u03b4 leads to small average size and different classifiers have different informational efficiency.",
94
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/classification_efficiency.jpg"
95
+ },
96
+ "3": {
97
+ "figure_path": "2301.02424v2_figure_3.png",
98
+ "caption": "Figure 3: Bar plots of the frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for high-impact weather forecasting. The first row corresponds to HighTemp and the second row corresponds to LowTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. All bars are near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4, which confirms the controlling guarantee of CLCP empirically.",
99
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_classification_err_rate.jpg"
100
+ },
101
+ "4": {
102
+ "figure_path": "2301.02424v2_figure_4.png",
103
+ "caption": "Figure 4: Boxen plots of the prediction losses vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for high-impact weather forecasting. The first row corresponds to HighTemp and the second row corresponds to LowTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. The loss distributions are controlled by \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 properly to obtain the empirical validity in Fig. 3.",
104
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_classification_loss.jpg"
105
+ },
106
+ "5": {
107
+ "figure_path": "2301.02424v2_figure_5.png",
108
+ "caption": "Figure 5: Boxen plots for the distributions of normalized sizes of prediction sets vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for high-impact weather forecasting. The first row corresponds to HighTemp and the second row corresponds to LowTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. U-Net performs better than nDNN, which indicates the importance of careful design of the underlying algorithm.",
109
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_classification_efficiency.jpg"
110
+ },
111
+ "6": {
112
+ "figure_path": "2301.02424v2_figure_6.png",
113
+ "caption": "Figure 6: Bar plots of the frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for maximum temperature and minimum temperature forecasting. The first row corresponds to MaxTemp and the second row corresponds to MinTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. All bars are near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4, which confirms the controlling guarantee of CLCP empirically.",
114
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_regression_err_rate.jpg"
115
+ },
116
+ "7": {
117
+ "figure_path": "2301.02424v2_figure_7.png",
118
+ "caption": "Figure 7: Boxen plots of the prediction losses vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for maximum temperature and minimum temperature forecasting. The first row corresponds to MaxTemp and the second row corresponds to MinTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. The loss distributions are controlled by \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 properly to obtain the empirical validity in Fig. 6.",
119
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_regression_loss.jpg"
120
+ },
121
+ "8": {
122
+ "figure_path": "2301.02424v2_figure_8.png",
123
+ "caption": "Figure 8: Boxen plots for the distributions of average interval length vs. \u03b4=0.05,0.1,0.15,0.2\ud835\udeff0.050.10.150.2\\delta=0.05,0.1,0.15,0.2italic_\u03b4 = 0.05 , 0.1 , 0.15 , 0.2 on test data for maximum temperature and minimum temperature forecasting. The first row corresponds to MaxTemp and the second row corresponds to MinTemp. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. U-Net performs better than nDNN, which indicates the importance of careful design of the underlying algorithm.",
124
+ "url": "http://arxiv.org/html/2301.02424v2/extracted/5362843/pixel_regression_efficiency.jpg"
125
+ }
126
+ },
127
+ "validation": true,
128
+ "references": [],
129
+ "url": "http://arxiv.org/html/2301.02424v2"
130
+ }
20240123/2301.04378v3.json ADDED
@@ -0,0 +1,135 @@
1
+ {
2
+ "title": "Loss-Controlling Calibration for Predictive Models",
3
+ "abstract": "We propose a learning framework for calibrating predictive models to make loss-controlling prediction for exchangeable data, which extends our recently proposed conformal loss-controlling prediction for more general cases. By comparison, the predictors built by the proposed loss-controlling approach are not limited to set predictors, and the loss function can be any measurable function without the monotone assumption. To control the loss values in an efficient way, we introduce transformations preserving exchangeability to prove finite-sample controlling guarantee when the test label is obtained, and then develop an approximation approach to construct predictors. The transformations can be built on any predefined function, which include using optimization algorithms for parameter searching. This approach is a natural extension of conformal loss-controlling prediction, since it can be reduced to the latter when the set predictors have the nesting property and the loss functions are monotone. Our proposed method is applied to selective regression and high-impact weather forecasting problems, which demonstrates its effectiveness for general loss-controlling prediction.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Predictive models built on modern machine learning techniques have been deployed for many areas due to their expressive power. However, many of the algorithms can not provide reliable information about the difference or distance between the prediction and the true label for a specific test object, which is essential for confidence prediction [1 ###reference_1###] and is important for high-risk applications [2 ###reference_2###]. If the prediction is a set of possible labels and the difference is the miscoverage loss for set predictors, the learning framework of conformal prediction (CP) can tackle this issue with its coverage guarantee under the assumption of exchangeability of data samples [3 ###reference_3###] [4 ###reference_4###] [5 ###reference_5###]. Furthermore, our recently proposed conformal loss-controlling prediction (CLCP) [6 ###reference_6###] extends CP from the miscoverage loss to the loss satisfying monotone conditions, which ensures that the prediction loss is not greater than a preset level for high confidence. These two existing frameworks are both limited to set predictors and non-general losses, leading to this work considering general forms of predictors and losses for loss-controlling prediction, which is a form of prediction with confidence beyond confidence sets with coverage guarantee.\nCLCP is inspired by risk-controlling prediction sets [7 ###reference_7###] and conformal risk control [8 ###reference_8###], and the purpose of CLCP is to build a set predictor such that\nwhere and are preset parameters for loss level and significance level respectively, and is a monotone loss function as in [7 ###reference_7###]. is the prediction set with the nesting property for the parameter , where is the discrete set of possible values of . is usually built on some underlying predictive model learned on training data. The optimal is obtained based on calibration data , and is the test feature-response pair. The randomness of the probability inequation above is from both and . The approach of CLCP needs to calculate the quantiles of losses on calibration data for all and search for based on the monotone conditions of loss functions. Although CLCP extends CP to more general cases, the forms of the set predictors and the loss functions used in CLCP are still limited.\nTo overcome this issue, one way is to use the learn then test process [9 ###reference_9###] to fuse multiple probability inequations like formula (1 ###reference_###) to maintain the controlling guarantee. However, this process can not be effectively applied to our loss-controlling approach. One example is to use the Bonferroni correction to obtain the family-wise loss-controlling guarantee, where one needs to calculate the quantile of losses for each possible , resulting in meaningless calculation if is large and the number of calibration data is not. For example, if and , we need to calculate the quantile of losses for each possible , which makes sense only if the number of calibration data is more than 10000.\nTherefore, to improve data efficiency, the loss-controlling calibration (LCC) approach proposed in this paper employs predefined searching functions and the transformations preserving exchangeability to avoid the multiple hypothesis testing process, whose approach is a natural extension of CLCP. Concretely, we aim to calibrate a predictive model to obtain the calibrated predictor such that\nwhere can be a point, set or any other form of predictor built on with the parameter . 
is a measurable loss function without the need of monotone conditions. The optimal is calculated by some predefined function and all data , i.e., the controlling guarantee of formula (2 ###reference_###) is only for the ideal case where one has the test label. However, we can approximately obtain using in practice and the controlling guarantee can still be hold empirically in our experiments. In other words, the LCC proposed in this paper sacrifices the theoretical guarantee to efficient calibration, and the approximation is sound for large in theory and in our empirical studies. In the experiments, we apply LCC to selective regression with single or multiple targets to calibrate point predictors to control one or multiple losses, and also apply LCC to high-impact weather forecasting applications to control the non-monotone loss related to false discovery. All of the experimental results confirm the effectiveness of our proposed LCC approach.\nIn summary, three contributions are made in this paper:\nA learning framework named loss-controlling calibration is proposed for calibrating predictive models to make general loss-controlling prediction. The approach is a natural extension of CLCP and is easy to implement.\nBy employing transformations preserving exchangeability, the distribution-free and finite-sample controlling guarantee is proved mathematically with the exchangeability assumption in the ideal condition where the test label is obtained, and a reasonable approximation approach is proposed for practice.\nThe proposed LCC is applied to selective regression and weather forecasting problems, which empirically demonstrates its effectiveness for loss-controlling prediction in general cases.\nThe remaining parts of this paper are organized as follows. Section II reviews inductive conformal prediction and conformal loss-controlling prediction and Section III proposes the loss-controlling calibration approach with its theoretical analysis. Section IV applies the proposed approach to selective regression and high-impact weather forecasting problems to empirically verify the loss-controlling guarantee. Finally, the conclusions of this paper are drawn in Section V."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II Inductive Conformal Prediction and Conformal Loss-Controlling Prediction",
15
+ "text": "This section reviews inductive conformal prediction and recently proposed conformal loss-controlling prediction. Throughout this paper, let be data drawn exchangeably from on . is the test object-response pair and the first samples are calibration data. The lower-case letter represents the realization of ."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "II-A Inductive Conformal Prediction",
21
+ "text": "Inductive conformal prediction (ICP) [10 ###reference_10###] is a variant of conformal prediction tackling the computational issue of the original conformal prediction approach. ICP starts with any measurable function called nonconformity measure, and calculates nonconformity scores as\nfor .\nDenote as the quantile of . With the assumption of exchangeability of data samples, for any preset , ICP makes promise that\nThus, ICP outputs the following set prediction\nwhich leads to\nThe nonconformity measure is usually designed based on a point prediction model trained on training samples drawn from and here is an example for a classification problem with classes. In this situation, based on training samples, one can train a classifier , whose th output is the estimated probability of the th class, and the corresponding nonconformity measure can be defined as\nwhich leads to the following prediction set for an input object ,"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "II-B Conformal Loss-Controlling Prediction",
27
+ "text": "Different from ICP, the purpose of CLCP is to build predictors with loss-controlling guarantee as formula (1 ###reference_###), whose approach is inspired by conformal risk control [8 ###reference_8###]. CLCP starts with a set-valued function with a parameter , where is a discrete set of possible real values of such as from to with step size of . denotes some space of sets. For example, can be the power set of for\nsingle-label classification and can be equal to for binary image segmentation. This set-valued function needs to satisfy the following nesting property introduced in [7 ###reference_7###]:\nHere we give an example of constructing the prediction set for classification problem with classes. With the same meanings of and mentioned in Section II-A, CLCP can construct the prediction set as\nwhich satisfies the nesting property of formula (3 ###reference_###).\nIn addition, for each realization of response , the loss function considered in CLCP should respect the following monotone property or nesting property:\nwhere is the upper bound.\nAfter determining and , for preset and , CLCP first calculates as\nfor , and then searches for such that\nwhere is the quantile of . The finally obtained set predictor satisfies the controlling guarantee of formula (1 ###reference_###), which is proved in theory for distribution-free and finite-sample conditions, and CLCP can be seen as an extension of CP for specific forms of and [6 ###reference_6###]."
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "III Loss-Controlling Calibration and Its Theoretical Analysis",
33
+ "text": "This section introduces the extension of CLCP to general cases with the proposed loss-controlling calibration, analyze it theoretically in the ideal case and promotes it to control multiple losses jointly."
34
+ },
35
+ {
36
+ "section_id": "3.1",
37
+ "parent_section_id": "3",
38
+ "section_name": "III-A Loss-Controlling Calibration",
39
+ "text": "CLCP needs nesting properties for and , which limits its applicability. Therefore, we propose loss-controlling calibration for general predictors and loss functions. We denote as a predictor built on a predictive model learned from training data, where is a parameter taking values from a discrete set . For LCC, we emphasize that can be any kind of predictor, i.e., does not have to be the set of label sets. Besides, can be any discrete set such as the set of multi-dimensional vectors as in [9 ###reference_9###]. Also, the loss function considered for LCC can be any measurable function bounded above by , i.e., for each object-response pair ,\nGiven these general conditions, one way of constructing loss-controlling guarantee is to use multiple hypothesis testing process developed in learn then test [9 ###reference_9###]. However, this may lead to calculating the quantiles of losses on calibration data, which may be meaningless for our loss-controlling approach when the number of calibration data is small or moderate. Thus, we propose to use a predefined function independent of to do the trick, where stands for searching since it can be defined as an optimization algorithm for parameter searching. The approach of LCC is very similar to CLCP and we first introduce it for comparison, leaving the analysis of it to the next section.\nAfter determining and , for preset and , LCC first calculates on calibration data as\nand then search for such that\nwhere is the predefined searching function defined on the power set of , whose output is an element of its input, and is the quantile of . The final predictor built by LCC is , which is very similar to satisfying the loss-controlling guarantee as formula (2 ###reference_###). The relation between and will be introduced in Section II-B.\nIt can be seen that LCC is exactly CLCP if , is a set predictor with nesting property as formula (3 ###reference_###), is monotone as formula (4 ###reference_###) and is the min function. Therefore, for LCC we also use the same notations of and as CLCP to represent similar concepts. Here we summarized LCC in Algorithm 1."
40
+ },
41
+ {
42
+ "section_id": "3.2",
43
+ "parent_section_id": "3",
44
+ "section_name": "III-B Theoretical Analysis of Loss-Controlling Calibration",
45
+ "text": "This section provides the theoretical insights of LCC.\nLet be the quantile of .\nDefine as\nwhich is very similar to especially for large , as and are nearly the same in that case.\nHere we introduce the definition of -loss-controlling predictors and then prove loss-controlling guarantee with based on the theorem about transformations preserving exchangeability developed in [11 ###reference_11###] and introduced in [12 ###reference_12###] as Theorem 3.\nGiven a loss function and a random sample , a random function whose realization is in the space of functions is a -loss-controlling predictor if it satisfies that\nwhere the randomness is both from and .\nNext we prove in Theorem 1 that is a -loss-controlling predictor.\nSuppose are data drawn exchangeably from on , is a function with the parameter taking values from a discrete set , is a loss function satisfying formula (5 ###reference_###) and is defined as formula (6 ###reference_###). Denote as any searching function defined on the power set of whose output is the element of its input. For any preset , if also satisfies the following conditions,\nthen for any , we have\nwhere is defined as formula (8 ###reference_###).\nFormula (9 ###reference_###) implies that is well defined. As is defined on the whole dataset with the function , one can treat as a transformation applied to , i.e.,\nBesides, based on Theorem 3 in [12 ###reference_12###], this transformation preserves exchangeability, since for each permutation , there exists a permutation , such that\nfor all possible .\nIt follows that\nas is just the corresponding quantile of the exchangeable variables .\n(See Lemma 1 in [13 ###reference_13###]).\nBy definition of the function , we have . This combining formula (10 ###reference_###) leads to\nwhich completes the proof.\n\u220e\nTheorem 1 shows the loss-controlling guarantee for the ideal case where is available, whose approach can be approximated using Algorithm 1 in practice. The conditions that and formula (9 ###reference_###) holds imply that is well defined, which makes us able to obtain based on the searching function . Also, by definition, one can conclude that\nwhich indicates that searching in the left set above is reasonable, especially for large . The proof in Theorem 1 only needs to be a predefined function independent of . Thus, one can define as an optimization algorithm based on another hold-out dataset for parameter searching."
46
+ },
47
+ {
48
+ "section_id": "3.3",
49
+ "parent_section_id": "3",
50
+ "section_name": "III-C Controlling Multiple Losses",
51
+ "text": "Due to the general forms of the calibrated predictors and the loss functions, one can consider using LCC to control multiple losses jointly. Suppose the th loss on th calibration sample with is , the loss level for th loss is and the number of losses is . One simple method is to search for such that,\nwhere is the quantile of . Therefore, to control multiple losses jointly, one may have to calculate the quantiles, which only makes sense when is small.\nTo show that searching with formula (11 ###reference_###) is reasonable, the following Corollary 1 is introduced to control multiple losses jointly when the test label is obtained, and concludes that one can search for such that\nwhere is the quantile of .\nAssume that is an -dimensional vector and for each , only depends on its th dimension , i.e.,\nFor any preset , if also satisfies the following conditions,\nthen for any , we have\nwhere is defined as formula (12 ###reference_###).\nThe conditions of formula (13 ###reference_###) and (14 ###reference_###) guarantee that is well defined, and with Theorem 1, we have\nfor each , which leads to the conclusion of Corollary 1.\n\u220e\nThe conditions that and formula (13 ###reference_###) and (14 ###reference_###) hold are assumed to make sure that both and exist, which can be replaced or relaxed in practice. However, the extra assumptions indicate the difficulty of jointly controlling multiple losses, since one needs to take into account many aspects to avoid from searching optimal in an empty set."
52
+ },
53
+ {
54
+ "section_id": "4",
55
+ "parent_section_id": null,
56
+ "section_name": "IV Experiments",
57
+ "text": "In this section, we first apply LCC to selective regression on public datasets for single-target regression, which tests the ability of LCC to calibrate point predictors. Then we conduct experiments on public datasets for selective regression with multiple targets to verify the approach of jointly controlling multiple losses using formula (11 ###reference_###). Finally, we introduce LCC to high-impact weather forecasting applications to control the non-monotone loss related to false discovery. We use Python [14 ###reference_14###] to conduct the experiments. The statistical learning methods in Section IV-A and IV-B were coded with Scikit-learn [15 ###reference_15###] and the deep neural nets in Section IV-C were coded with Pytorch [16 ###reference_16###]."
58
+ },
59
+ {
60
+ "section_id": "4.1",
61
+ "parent_section_id": "4",
62
+ "section_name": "IV-A LCC for Selective Regression with Single Target",
63
+ "text": "###figure_1### ###figure_2### ###figure_3### ###figure_4### ###figure_5### ###figure_6### ###figure_7### Selective regression is the selective prediction task for regression problems, which we introduce here referring to [17 ###reference_17###]. Selective regression model is a model with the ability to abstain from making prediction when lacking confidence. The model can be comprised of a prediction function and a hard selection function . For a given input object , a selective model predicts the label as if , and abstains from making prediction if . The hard selection function can be built on a soft selection function and a threshold , such that if and if . An example is to use a function related to estimated conditional variance as .\nThe objective of a selective regression model can be formalized as minimizing the risk in the condition that is not too low, where is a performance indicator called coverage. Therefore, we conduct experiments in this section to build selective regression models to control the prediction loss , whose informational efficiency is measured by , which is estimated using test data and is recorded as miscoverage in Fig. 2. Lower miscoverage means better performance for selective models.\nThe empirical studies were conducted on public datasets for single-target regression, which are from Delve [18 ###reference_18###], KEEL [19 ###reference_19###] and UCI [20 ###reference_20###] repositories and the information is summarized in TABLE I.\nWe employ bagging trees to build selective regression models, and the corresponding prediction function and soft selection function are constructed using the mean and the standard variance of the predictions made by tree members. The selective regression model for calibration in this paper is formalized as\nAll features and labels were normalized to with min-max normalization. For each dataset, of the data were used for testing and and of the remaining data were used for training and calibration. Random forests (RF) [21 ###reference_21###] and extremely randomized trees (ERT) [22 ###reference_22###] were used to build selective regression models with the default meta-parameters set by Scikit-learn. The data split process for each dataset was randomly conducted times and the average results were recorded in Fig. 1 and Fig. 2, where we set and . The is searched from to with step size being , and the search function is the max function, since we prefer low miscoverage for selective regression models.\nThe bar plots in Fig. 1 demonstrate that the frequency of the losses being above is near or below preset , which verifies the loss-controlling guarantee of LCC for point predictors empirically, since the frequency is an estimation of the following probability\nwhich we expect to be below .\nIn Fig. 2, we can observe that tuning and can change miscoverage of selective regression models, which is reasonable since high levels of loss and significance relax the constrains on the prediction losses. This indicates that one should set and properly for specific applications, making the trade-off between prediction loss and miscoverage."
64
+ },
65
+ {
66
+ "section_id": "4.2",
67
+ "parent_section_id": "4",
68
+ "section_name": "IV-B LCC for Selective Regression with Multiple Targets",
69
+ "text": "The purpose of this section is to test the approach with formula (11 ###reference_###) for controlling multiple losses. The datasets for multi-target regression are collected from Mulan library [23 ###reference_23###] and the information of each dataset is listed in TABLE II. For each dataset, the same normalization and partition processes as those in Section IV-A were conducted and we trained RF and ERT for multi-target regression with Scikit-learn using default meta-parameters. For -target regression, we obtain the -dimensional mean and standard variance function based on the tree members, which are also denoted as and respectively with and representing the th component. Therefore, the selective regression model with -dimensional parameter in this paper is an -dimensional function , whose th component is defined as\nwhere is the th element of . This model consists of single-target regressors and the losses in this empirical study are just individual losses considered in Section IV-A, i.e., the th loss is , where is the corresponding hard selection function of . We search for as formula (11 ###reference_###) with aiming to find maximum possible for each and combine them as , since we prefer low miscoverage for each target.\nWe set and set all as the same , which is taken from . To verify the loss-controlling guarantee for multiple losses, we use the test data to calculate the frequency of being above , since it is an estimation of the following probability\nwhich we expect to be below if the losses are jointly controlled. The experimental results on test data are shown in Fig. 3 and Fig. 4, where we denote MaxLoss as and Mean Miscoverage as the mean value of miscoverages for targets, which is a way of measuring informational efficiency for selective regression with multiple targets.\nThe bar plots in Fig. 3 empirically confirm the controlling guarantee implied by Corollary 1 and the results in Fig. 4 also indicates that tuning and can affect informational efficiency of the models. Since RF and ERT can build accurate prediction functions for rf1 and rf2, the frequencies of MaxLoss being above can be very low and the Mean Miscoverage is zero for each preset in the experiments, indicating the importance of designing accurate prediction functions for selective regression. Although the prediction functions for the other four datasets are not as accurate as those for rf1 and rf2, we can always tune and to change Mean Miscoverage under the loss-controlling guarantee, which demonstrates the flexibility of our approach. Also, this trade-off between the loss level , confidence level and Mean Miscoverage should be made based on specific applications."
70
+ },
71
+ {
72
+ "section_id": "4.3",
73
+ "parent_section_id": "4",
74
+ "section_name": "IV-C LCC for high-impact weather forecasting",
75
+ "text": "We apply LCC to high-impact weather forecasting, which is based on postprocessing of numerical weather prediction (NWP) models [24 ###reference_24###] [25 ###reference_25###] [26 ###reference_26###], i.e., learning a predictor whose inputs are forecasts made by NWP models and outputs are corresponding high-impact weather. We use LCC to postprocess the ensemble forecasts issued by European Centre for Medium-Range Weather Forecasts (ECMWF) [27 ###reference_27###]. The forecasts are obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset [28 ###reference_28###]. We concentrate on -m maximum temperature and minimum temperature forecasts initialized at UTC with the forecast lead times from nd hour to th hour. The resolution of the forecast fields is and the corresponding label fields with the same resolution are calculated using the ERA5 reanalysis data [29 ###reference_29###]. The area covers the main parts of North China, East China and Central China, ranging from E to E in longitude and from N to N in latitude with the grid size being . The HighTemp and LowTemp datasets introduced in [6 ###reference_6###] are used for empirical studies. The inputs in HighTemp are -m maximum temperature forecasting fields and the corresponding label fields are whether the observed -m maximum temperature is above for each grid. Similarly, the inputs in LowTemp are -m minimum temperature forecasting fields and the corresponding label fields are whether the observed -m minimum temperature is below for each grid. The sample sizes of HighTemp and LowTemp are and respectively.\nThe experimental setting is similar to that in Section IV-B of [6 ###reference_6###]. For each dataset, all forecasts made by the NWP model were normalized to by min\u2013max normalization. of the data were used for testing and and of the remaining data were used for training and calibration respectively. The normalized ensemble fields forecast by the NWP model are taken as input and the set of grids having high-impact weather is the corresponding label , which can be seen as the image segmentation problem in computer vision. Thus, we employed two fully convolutional neural networks [30 ###reference_30###] as our underlying algorithms. One was U-Net [31 ###reference_31###] and the other is the naive deep neural network (nDNN), which is the U-Net removing skip-connections. The structures of the two networks are the same as those in [6 ###reference_6###]. To train the deep nets, we further partitioned the data for training to validation part () and proper training part (), which were used for model selection and parameter updating respectively. Adam optimization [32 ###reference_32###] with the learning rate being and the number of epochs being was employed for training, and the model whose binary cross entropy was the lowest on validation data was chosen as the predictive model needing calibration. The candidate calibrated predictor is defined as\nwhere is the estimated probability for high-impact weather existing at grid .\nThe loss function is\nwhich is a non-monotone loss function related to false discovery introduced in [9 ###reference_9###], and can be seen as one minus precision for each sample. The searching function we used is the min function, as we expect to detect more high-impact weather given the precision for each sample being controlled properly. We also tested other forms of searching functions such as the max function. 
However, although the controlling guarantee can be hold empirically, the constructed predictor may lose informational efficiency for applicability, implying that the forms of searching functions should be designed on a case-by-case basis. The final calibrated predictors were obtained with the proposed LCC approach and the experimental results are shown in Fig. 5, Fig. 6 and Fig. 7.\nThe frequencies of the prediction losses being more than are shown in Fig. 5 with bar plots for and . The columns represent the cases where and respectively. All bars are near or below the preset , which verifies loss-controlling guarantee empirically. The boxen plots of the losses for different and are shown in Fig. 6, which contain more information about tails by drawing narrower boxes than box plots. It can be observed that and result in larger losses, which should be preset based on specific applications. The informational efficiency of is measured using normalized size of the prediction set defined as in which the numbers of the vertical and the horizontal grids of prediction fields are denoted by and respectively.\nThe distributions of normalized sizes are shown in Fig. 7, indicating that different and cause different normalized sizes and there should be a trade-off among loss level , confidence level and informational efficiency of the predictions. Finally, all of the predictions have reasonable sizes using LCC, which demonstrates its effectiveness for high-impact weather forecasting."
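A hedged sketch of how the threshold could be calibrated against the false-discovery loss described above. The predictor form (predict every grid whose estimated probability is at least the threshold) and the zero-loss convention for empty predictions are assumptions, since the exact formulas were elided; the min searching function follows the text.

```python
def precision_loss(probs, label, lam):
    """One minus per-sample precision of the thresholded field; taken as 0
    when nothing is predicted (assumed convention)."""
    pred = probs >= lam                  # assumed form of the calibrated predictor
    n_pred = pred.sum()
    if n_pred == 0:
        return 0.0
    return 1.0 - (pred & (label > 0)).sum() / n_pred

def calibrate_threshold(cal_probs, cal_labels, alpha, delta,
                        grid=np.linspace(0.01, 0.99, 99)):
    n = len(cal_probs)
    q_level = min(1.0, np.ceil((n + 1) * (1 - delta)) / n)
    feasible = [lam for lam in grid
                if np.quantile([precision_loss(p, y, lam)
                                for p, y in zip(cal_probs, cal_labels)],
                               q_level, method="higher") <= alpha]
    # min searching function: the smallest feasible threshold predicts the
    # largest fields, i.e. detects the most high-impact weather.
    return min(feasible) if feasible else None
```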
76
+ },
77
+ {
78
+ "section_id": "5",
79
+ "parent_section_id": null,
80
+ "section_name": "Conclusion",
81
+ "text": "This paper proposes loss-controlling calibration, which extends conformal loss-controlling prediction to calibrating predictive models with more general forms of calibrated predictors and losses. The finite-sample and distribution-free loss-controlling guarantee is proved by introducing a searching function and the property of transformations preserving exchangeability in the ideal case. In addition, an approximation approach for practical calibration is proposed, whose main steps are the same as those of conformal loss-controlling prediction, i.e., the main difference between loss-controlling calibration and conformal loss-controlling prediction is whether the calibrated predictors and the loss functions satisfy specific conditions. The method is applied to selective regression and high-impact weather forecasting problems, and the loss-controlling guarantee is verified empirically in these cases. Further empirical studies with case-by-case design are needed to test the loss-controlling ability of the proposed calibration approach for a wider range of applications."
82
+ }
83
+ ],
84
+ "appendix": [],
85
+ "tables": {
86
+ "1": {
87
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE I: </span>Datasets for Single-Target Regression</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:174.0pt;height:359.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-4.6pt,9.4pt) scale(0.95,0.95) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.2\">Examples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.3\">Dimensionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.1.1.1.1.4\">Source</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.1.1.2.1.1\">abalone</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.2\">4177</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.3\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T1.1.1.2.1.4\">UCI</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.3.2.1\">bank8fh</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.3.2.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.4.3.1\">bank8fm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.4.3.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.5.4.1\">bank8nh</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.5.4.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.6.5.1\">bank8nm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.6.5.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.7.6.1\">boston</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.2\">506</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.3\">13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.7.6.4\">UCI</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.8.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.8.7.1\">cooling</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.2\">768</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.8.7.4\">UCI</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S4.T1.1.1.9.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.9.8.1\">heating</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.2\">768</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.9.8.4\">UCI</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.10.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.10.9.1\">istanbul</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.2\">536</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.3\">7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.10.9.4\">UCI</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.11.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.11.10.1\">kin8fh</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.11.10.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.12.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.12.11.1\">kin8fm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.12.11.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.13.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.13.12.1\">kin8nh</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.13.12.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.14.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.14.13.1\">kin8nm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.14.13.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.15.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.15.14.1\">laser</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.2\">993</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.3\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.15.14.4\">KEEL</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.16.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.16.15.1\">puma8fh</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.16.15.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.17.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.17.16.1\">puma8fm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.17.16.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.18.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.18.17.1\">puma8nh</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.18.17.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.19.18\">\n<td class=\"ltx_td ltx_align_left\" 
id=\"S4.T1.1.1.19.18.1\">puma8nm</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.2\">8192</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.3\">8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.19.18.4\">Delve</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.20.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.1.1.20.19.1\">stock</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.2\">950</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.3\">9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T1.1.1.20.19.4\">KEEL</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.21.20\">\n<td class=\"ltx_td ltx_align_left ltx_border_b\" id=\"S4.T1.1.1.21.20.1\">treasury</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.2\">1048</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T1.1.1.21.20.4\">Delve</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
88
+ "capture": "TABLE I: Datasets for Single-Target Regression"
89
+ },
90
+ "2": {
91
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">TABLE II: </span>Datasets for Multi-Target Regression</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:177.3pt;height:126pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.0pt,0.0pt) scale(1.0,1.0) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T2.1.1.1.1.1\">Dataset</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1.1.2\">Examples</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1.1.3\">Dimensionality</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.1.1.1.1.4\">Targets</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.1.1.2.1.1\">enb</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.2\">768</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.3\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.1.1.2.1.4\">2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.3.2.1\">rf1</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.2\">9125</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.3\">64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.3.2.4\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.4.3.1\">rf2</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.2\">9125</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.3\">576</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.4.3.4\">8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.5.4.1\">scm1d</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.2\">9803</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.3\">280</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.5.4.4\">16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S4.T2.1.1.6.5.1\">scm20d</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.5.2\">8966</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.5.3\">61</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.6.5.4\">16</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.7.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.1.1.7.6.1\">scpf</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.1.7.6.2\">1137</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.1.7.6.3\">23</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.1.1.7.6.4\">3</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
92
+ "capture": "TABLE II: Datasets for Multi-Target Regression"
93
+ }
94
+ },
95
+ "image_paths": {
96
+ "1": {
97
+ "figure_path": "2301.04378v3_figure_1.png",
98
+ "caption": "Figure 1: Frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.1,0.15,0.2\ud835\udeff0.10.150.2\\delta=0.1,0.15,0.2italic_\u03b4 = 0.1 , 0.15 , 0.2 on test data for selective single-target regression. The first row and the second row correspond to RF and ERT respectively. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. The bars are all near or below \u03b4\ud835\udeff\\deltaitalic_\u03b4, indicating the controlling guarantee of LCC empirically.",
99
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/selective_regression_validity.jpg"
100
+ },
101
+ "2": {
102
+ "figure_path": "2301.04378v3_figure_2.png",
103
+ "caption": "Figure 2: Miscoverage of selective predictions vs. \u03b4=0.1,0.15,0.2\ud835\udeff0.10.150.2\\delta=0.1,0.15,0.2italic_\u03b4 = 0.1 , 0.15 , 0.2 on test data for selective single-target regression. The first row and the second row correspond to RF and ERT respectively. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. Tuning \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 can change Miscoverage, which indicates the trade-off between loss level and informational efficiency.",
104
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/selective_regression_miscoverage.jpg"
105
+ },
106
+ "3": {
107
+ "figure_path": "2301.04378v3_figure_3.png",
108
+ "caption": "Figure 3: Frequencies of the maximum prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 vs. \u03b4=0.1,0.15,0.2\ud835\udeff0.10.150.2\\delta=0.1,0.15,0.2italic_\u03b4 = 0.1 , 0.15 , 0.2 on test data for selective multi-target regression. The first row and the second row correspond to RF and ERT respectively. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. The bars are all near or below \u03b4\ud835\udeff\\deltaitalic_\u03b4, indicating the controlling guarantee based on formula (11) empirically.",
109
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/selective_regression_multioutput_validity.jpg"
110
+ },
111
+ "4": {
112
+ "figure_path": "2301.04378v3_figure_4.png",
113
+ "caption": "Figure 4: Mean Miscoverage of selective predictions vs. \u03b4=0.1,0.15,0.2\ud835\udeff0.10.150.2\\delta=0.1,0.15,0.2italic_\u03b4 = 0.1 , 0.15 , 0.2 on test data for selective multi-target regression. The first row and the second row correspond to RF and ERT respectively. Different columns represent different \u03b1\ud835\udefc\\alphaitalic_\u03b1. Tuning \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 can change Mean Miscoverage, which indicates the trade-off between loss level and informational efficiency.",
114
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/selective_regression_multioutput_miscoverage.jpg"
115
+ },
116
+ "5": {
117
+ "figure_path": "2301.04378v3_figure_5.png",
118
+ "caption": "Figure 5: Frequencies of the prediction losses being greater than \u03b1\ud835\udefc\\alphaitalic_\u03b1 for different \u03b4\ud835\udeff\\deltaitalic_\u03b4 and \u03b1\ud835\udefc\\alphaitalic_\u03b1 on test data of HighTemp and LowTemp datasets. All bars being near or below the preset \u03b4\ud835\udeff\\deltaitalic_\u03b4 confirms the controlling guarantee of LCC empirically.",
119
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/pixel_classification_err_rate_2nd_paper.jpg"
120
+ },
121
+ "6": {
122
+ "figure_path": "2301.04378v3_figure_6.png",
123
+ "caption": "Figure 6: Distributions of the prediction losses for different \u03b4\ud835\udeff\\deltaitalic_\u03b4 and \u03b1\ud835\udefc\\alphaitalic_\u03b1 on test data of HighTemp and LowTemp datasets. The losses are controlled by \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b4\ud835\udeff\\deltaitalic_\u03b4 properly to achieve the empirical validity in Fig. 1.",
124
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/pixel_classification_loss_2nd_paper.jpg"
125
+ },
126
+ "7": {
127
+ "figure_path": "2301.04378v3_figure_7.png",
128
+ "caption": "Figure 7: Distributions of normalized sizes for different \u03b4\ud835\udeff\\deltaitalic_\u03b4 and \u03b1\ud835\udefc\\alphaitalic_\u03b1 on test data of HighTemp and LowTemp datasets. The predictions have reasonable sizes for both U-Net and nDNN for high-impact weather forecasting.",
129
+ "url": "http://arxiv.org/html/2301.04378v3/extracted/5362932/pixel_classification_efficiency_2nd_paper.jpg"
130
+ }
131
+ },
132
+ "validation": true,
133
+ "references": [],
134
+ "url": "http://arxiv.org/html/2301.04378v3"
135
+ }
20240123/2301.09217v5.json ADDED
@@ -0,0 +1,273 @@
1
+ {
2
+ "title": "Multiplicative Auction Algorithm for Approximate Maximum Weight Bipartite Matching",
3
+ "abstract": "We present an auction algorithm using multiplicative instead of constant weight updates to compute a -approximate maximum weight matching (MWM) in a bipartite graph with vertices and edges in time , beating the running time of the fastest known approximation algorithm of Duan and Pettie [JACM \u201914] that runs in .\nOur algorithm is very simple and it can be extended to give a dynamic data structure that maintains a -approximate maximum weight matching under (1) one-sided vertex deletions (with incident edges) and (2) one-sided vertex insertions (with incident edges sorted by weight) to the other side.\nThe total time used is , where is the sum of the number of initially existing and inserted edges.111An earlier version of this paper appeared in the 24th International Conference on Integer Programming and Combinatorial Optimization (IPCO 2023) with a slightly slower algorithm running in time.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Let be an edge-weighted bipartite graph with vertices and edges where each edge with and has a non-negative weight .\nThe maximum weight matching (MWM) problem asks for a matching that attains the largest possible weight .\nThis paper will focus on approximate solutions to the MWM problem. More specifically, if we let denote a maximum weight matching of , our goal is to find a matching such that for any small constant .\nMatchings are a very well studied problem in combinatorial optimization.\nKuhn [Kuh55 ###reference_x17###] in 1955 published a paper that started algorithmic work in matchings, and presented what he called the \u201cHungarian algorithm\u201d which he attributed the work to K\u0151nig and Egerv\u00e1ry.\nMunkres [Mun57 ###reference_x21###] showed that this algorithm runs in time.\nThe running time for computing the exact MWM has been improved many times since then.\nRecently, Chen et al. [CKL22 ###reference_x10###] showed that it was possible to solve the more general problem of max flow in time.\nFor -approximation algorithms for MWM in bipartite graphs,\nGabow and Tarjan in 1989 showed an algorithm.\nSince then there were a number of results for different running times and different approximation ratios.\nThe prior best approximate algorithm is by Duan and Pettie [DP14 ###reference_x13###] which computes a -approximate maximum weight matching in time with a scaling algorithm.\nWe defer to their work for a more thorough survey of the history on the MWM problem.\nWe show in our work that the auction algorithm for matchings using multiplicative weights can give a -approximate maximum weight matching with a running time of for bipartite graphs. This is a modest improvement of a factor over the prior algorithm of Duan and Pettie [DP14 ###reference_x13###] which works in general graphs.\nHowever, in comparison to their rather involved algorithm, our algorithm is simple and only uses elementary data structures.\nFurthermore, we are able to use properties of the algorithm to support two dynamic operations, namely one where vertices are deleted from one side and one where vertices of the other side of the bipartite graph are inserted together with their incident edges.\nNo algorithm that allows both these operations with running time faster than recomputation from scratch was known prior."
10
+ },
11
+ {
12
+ "section_id": "1.1",
13
+ "parent_section_id": "1",
14
+ "section_name": "Dynamic matching algorithms.",
15
+ "text": "Dynamic weighted matching. There has been a large body of work on dynamic matching and many variants of the problem have been studied, e.g, the maximum, maximal, as well as -approximate setting for a variety of values of , both in the weighted as well as in the unweighted setting. See [HHS22 ###reference_x15###] for a survey of the current state of the art for the fully dynamic setting.\nFor any constant there is a conditional lower bound based on the OMv conjecture that shows that\nany dynamic algorithm that returns the exact value of a maximum cardinality matching in a bipartite graph with polynomial preprocessing time\ncannot take time per query and per edge update operation [HKNS15 ###reference_x16###].\nDynamic -approximate matchings \nFor general weighted graphs Gupta and Peng [GP13 ###reference_x14###] gave the first algorithm in the fully dynamic setting with edge insertions and deletions to maintain a -approximate matching in time, where the edges fall into the range .\nThere are also some results for bipartite graphs in partially dynamic settings. In the incremental setting, edges are only inserted, and decremental setting, edges are only deleted.\nFor unweighted bipartite graphs, the fastest known decremental algorithm is by Bernstein, Probst Gutenberg, and Saranurak [BGS20 ###reference_x6###] achieves update times of per edge deletion. For incremental algorithms Blikstad and Kiss [BK23 ###reference_x7###] achieve update times of time per edge insertion.\nThese results can be made to work in weighted graphs by a meta theorem of Bernstein, Dudeja, and Langley [BDL21 ###reference_x4###]. Their theorem states that any dynamic algorithm on an unweighted bipartite graph can be transformed into a dynamic algorithm on weighted bipartite graph at the expense of an extra factor.\nVertex updates. \nBy vertex update we refer to updates that are vertex insertion (resp. deletion) that also inserts (resp. deletes) all edges incident to the vertex.\nThere is no prior work on maintaining matchings in weighted graphs under vertex updates.\nHowever, vertex updates in the unweighted bipartite setting has been studied.\nBosek et al. [BLSZ14 ###reference_x9###] gave an algorithm that maintains the -approximate matching when vertices of one side are deleted in amortized time per changed edge. The algorithm can be adjusted to the setting where vertices of one side are inserted in the same running time, but it cannot handle both vertex insertions and deletions.\nLe et al. [LMSW22 ###reference_x20###] gave an algorithm for maintaining a maximal matching under vertex updates in constant amortized time per changed edge. They also presented an approximate algorithm for maximum matchings in an unweighted graph when vertex updates are only allowed on one side of a bipartite graph.\nWe give the first algorithm to maintain a -approximate maximum weight matching where vertices can undergo vertex deletions on one side and vertex insertions on the other side\nin total time , where is the sum of the number of initially existing and inserted edges. It assumes that the edges incident to an inserted vertex are given in sorted order by weight, otherwise, the running time increases by per inserted edge."
16
+ },
17
+ {
18
+ "section_id": "1.2",
19
+ "parent_section_id": "1",
20
+ "section_name": "Linear Program for MWM",
21
+ "text": "The MWM problem can be expressed as the following linear program (LP) where the variable denotes whether the edge is in the matching. It is well known [S03 ###reference_x23###] that the below LP is integral, that is the optimal solution has all variables .\nWe can also consider the dual problem of weighted vertex cover that aims to find dual weights and for every vertex and respectively."
22
+ },
23
+ {
24
+ "section_id": "1.3",
25
+ "parent_section_id": "1",
26
+ "section_name": "Multiplicative weight updates for packing LPs",
27
+ "text": "Packing LPs are LPs of the form\n\nfor , and\n.\nThe LP for MWM is a classical example of a packing LP.\nThe multiplicative weight update method (MWU) has been investigated extensively to provide faster algorithms for finding approximate solutions222\nBy approximate solution we mean a possibly fractional assignments of variables that obtains an approximately good LP objective.\nIf we find such an approximate solution to MWM, fractional solutions need to be rounded to obtain an actual matching. \nto packing LPs [You14 ###reference_x25###, KY14 ###reference_x18###, CQ18 ###reference_x11###, AO19 ###reference_x2###, WRM16 ###reference_x24###, Qua20 ###reference_x22###].\nTypically the running times for solving these LPs have a dependence on of ,\ne.g. the algorithm of Koufogiannakis and Young [KY14 ###reference_x18###] would obtain a running time of when applied to the matching LP.\nThe fastest multiplicative weight update algorithm for solving packing LPs by Allen-Zhu and Orecchia [AO19 ###reference_x2###] would obtain an running time for MWM.\nVery recently, work by Battacharya, Kiss, and Saranurak [BKS22 ###reference_x8###] extended the MWU for packing LPs to the partially dynamic setting. When restricted to the MWM problem means the weight of edges either only increase or only decrease.\nUsing similar ideas with MWUs, Assadi [Ass23 ###reference_x3###] recently derived a simple semi-streaming algorithm for bipartite matchings.\nHowever as packing LPs are more general than MWM, these algorithms are significantly more complicated and are slower by factors (and sometimes worse dependence on e.g. in [BKS22 ###reference_x8###]) when compared to our algorithms.\nWe remark that our algorithm, while it uses multiplicative weight updates, is unlike typical MWU algorithms as it has an additional monotonicity property.\nWe only increase dual variables on one side of the matching."
28
+ },
29
+ {
30
+ "section_id": "1.4",
31
+ "parent_section_id": "1",
32
+ "section_name": "Auction Algorithms",
33
+ "text": "Auction algorithms are a class of primal dual algorithms for solving the MWM problem that view as a set of goods to be sold, as a set of buyers. The goal of the auction algorithm is to find a welfare-maximizing allocation of goods to buyers.\nThe algorithm is attributed to Bertsekas [Ber81 ###reference_x5###], as well as to Demange, Gale, and Sotomayor [DGS86 ###reference_x12###].\nAn auction algorithm initializes the prices of all the goods with a price (our choice of is intentional, as prices correspond directly to dual variables), and has buyers initially unallocated.\nFor each buyer , the utility of that buyer upon being allocated is .\nThe auction algorithm proceeds by asking an unallocated buyer for the good they desire that maximizes their utility, i.e. for .\nIf , the buyer remains unallocated.\nOtherwise the algorithm allocates to , then increases the price to .\nThe algorithm terminates when all buyers are either allocated or for every unallocated buyer , it holds that .\nIf the maximum weight among all the edges is , then the auction algorithm terminates after rounds and outputs a matching that differs from the optimal by an additive factor of at most .\nThere have been a recent resurgence in interest in auction algorithms. Assadi, Liu, and Tarjan [ALT21 ###reference_x1###] used the auction algorithm for matchings in unweighted graphs in semi-streaming and massively parallel computing (MPC) settings. This work was generalized for weighted bipartite graphs in the same settings by Liu, Ke, and Kholler [LKK23 ###reference_x19###]."
34
+ },
35
+ {
36
+ "section_id": "1.5",
37
+ "parent_section_id": "1",
38
+ "section_name": "Our contribution",
39
+ "text": "We present the following modification of the auction algorithm:\nWhen is allocated , increase to instead of .\nNote that this decreases by at least a factor of as well as increases by at least a factor of .\nThus we will call algorithms with this modification multiplicative auction algorithms.\nSurprisingly, we were not able to find any literature on this simple modification.\nChanging the constant additive weight update to a multiplicative weight update\nhas the effect of taking much larger steps when the weights are large, and so we are able to show that the algorithm can have no dependence on the size of the weights.\nIn fact, we are able to improve the running time to , faster than the prior approximate matching algorithm of Duan and Pettie [DP14 ###reference_x13###] that ran in .\nWhile the algorithm of [DP14 ###reference_x13###] has the advantage that it works for general graphs and ours is limited to bipartite graphs,\nour algorithm is simpler as it avoids the scaling algorithm framework and is easier to implement.\nLet be a weighted biparitite graph and be a value such that . There is a multiplicative auction algorithm running in time that finds a -approximate maximum weight matching of .\nFurthermore, it is straightforward to extend our algorithm to a setting where vertices on one side are deleted and vertices on the other side are added with all incident edges given in sorted order of weight. When the inserted edges are not sorted by weight, the running time per inserted edge increases by an additive term of to sort the log of the weights of incident inserted edges.\nLet be a weighted bipartite graph.\nThere exists a dynamic data structure that maintains a -approximate maximum weight matching of and supports any arbitrary sequence of the following operations\nDeleting a vertex in\nAdding a new vertex into with all its incident edges sorted by weight\nin total time , where is sum of the number of initially existing and inserted edges."
40
+ },
41
+ {
42
+ "section_id": "2",
43
+ "parent_section_id": null,
44
+ "section_name": "The static algorithm",
45
+ "text": ""
46
+ },
47
+ {
48
+ "section_id": "2.1",
49
+ "parent_section_id": "2",
50
+ "section_name": "A slower algorithm",
51
+ "text": "For sake of exposition we will first present a slower algorithm that runs in near-linear time in the number of edges that will use the following update rule:\nWhen is allocated to , increase to\nWe assume that the algorithm is given as input some fixed , and the goal is to find a -approximate MWM.\nWe will also assume that , as a graph with edges has at most vertices that have at least one incident edge. If , then we may discard the isolated vertices and reduce .\nNotation \nFor sake of notation let be the set of neighbors of in , and similarly for for .\nPreprocessing of the weights. \nLet be the maximum weight edge of . For our static auction algorithm we may ignore any edge of weight less than as as taking of these small weight edges would not even contribute to the matching.\nThus, we only consider edges of weight at least , which allows us to\nrescale all edge weights by dividing them by . As a result we can assume (by slight abuse of notation) in the following that the minimum edge weight is and the largest edge weight equals .\nFurthermore, since we only care about approximations, we will also round down all edge weights to the nearest power of for some .\nWe define , and we will only care about weights after applying this operation.\nLet .\nLet be the smallest integer such that\n.\nObserve that as for it holds that\nThus we see that .\nAlgorithm. \nThe algorithm first builds for every a list of pairs for each edge and each value with and then sorts by decreasing value of . After, it calls the function\n on every .\nThe function\n matches to the item that maximizes its utility and updates the price according to our multiplicative update rule. While matching , another vertex originally matched to may become unmatched. If this happens, is called immediately after .\nAlgorithm 2.1: MultiplicativeAuction\nMatchR()\nData structure. \nWe store for each vertex the list as well as its currently matched edge if it exists. In the pseudocode we keep for each vertex a value corresponding to the highest weight threshold that we will consider.\nWe also keep a value which corresponds to the utility receives before we update the price when is matched to . Note that and are only needed in the analysis.\nRunning time. \nThe creation and sorting of the lists takes time if we\nuse bucket sort as there are only distinct weights. The running time of all\ncalls to is dominated by the size of , as each iteration in removes an element of and takes time. Thus, the total time is\nInvariants maintained by the algorithm. \nThe algorithm maintains five different invariants.\nFor all , and all , .\nThis clearly is true at the beginning, since is initialized to , and\nAs the algorithm proceeds, which equals only decreases as only increases. Thus, we only have to make sure that the condition holds whenever decreases.\nThe value only decreases from some value, say , to a new value , in MatchR and when this happens does not contain any pairs with anymore. Thus, there does not exist a neighbor of with . It follows that\nwhen decreases to for all it holds that .\n\u220e\nFor all and never decreases over the course of the algorithm. Furthermore, if is not matched, then .\nWe initialize to . If is never matched, we never change , so it stays . Throughout the algorithm, we only ever increase .\n\u220e\nFor all for which MatchR was called at least once,\nif is unmatched, then and is empty.\nFurthermore, for all we have that .\nMatchR terminates (i) after it matches and recurses or (ii) if is empty. 
Initially is unmatched and is set to 0. If is matched, it is possible that for some , , that becomes temporarily unmatched during MatchR and is set to 0, but MatchR will be immediately called again.\nThus, whenever is unmatched, .\nHence, if the last call to MatchR does not result in being matched, then\nthis means that must be empty and .\nSince is empty,\nthen for all ,\nwe must have .\nSince we rescaled weights so that , we know that .\nNote that where denotes the value of before was matched, and . Thus,\n\u220e\nIf is matched to , then for all , for as long as stays matched.\nNote that doesn\u2019t change as long as stays matched, and for all , can only increase by Invariant 2 ###reference_riant2###, so it suffices to prove right after was matched to .\nLet be the value of right before was matched to .\nNote that .\nFor , we know that , and, by Invariant 2 ###reference_riant2###, so .\nFor all other ,\nright before we updated , we had that\n and, by Invariant 1 ###reference_riant1###, .\nThus, , so that .\nAs by Invariant 2 ###reference_riant2### and , it follows that:\n\u220e\nIf is matched to , then for as long as remains matched to .\nNote that and don\u2019t change as long as remains matched to .\nLet denote the value of right before the update rule of line 5(b) in MatchR.\nThen observe , and . Thus,\n\u220e\nApproximation factor. \nWe will show the approximation factor of the matching found by the algorithm by primal dual analysis.\nWe remark that it is possible to show this result purely combinatorially as well.\nWe will show that this and a vector satisfy the complementary slackness condition up to a factor, which implies the approximation guarantee.\nThis was used by Duan and Pettie [DP14 ###reference_x13###] (the original lemma was for general matchings, we have specialized it here to bipartite matchings).\nLet be a matching and let be an assignment of the dual variables.\nSuppose is a complementary solution to in the following approximate sense:\nFor all ,\nFor all , ,\nThe -values of all unmatched vertices are zero.\nThen is a -approximate maximum weight matching.\nLet be the maximum weight matching.\n\u220e\nThis lemma along with our invariants is enough for us to prove the approximation factor of our algorithm.\nMultiplicativeAuction outputs a -approximate maximum weight matching of the bipartite graph .\nLet be a parameter depending on that we will choose later.\nWe begin by choosing an assignment of the dual variables for and for as exactly the values used by the algorithm at termination.\nIt remains to verify that we satisfy the conditions of Lemma 2.1 ###reference_heorem1###.\nProperty (i) is satisfied by Invariant 3 ###reference_riant3### or Invariant 4 ###reference_riant4### (depending on whether is matched or not) for .\nProperty (ii) is satisfied by Invariant 5 ###reference_riant5### for .\nProperty (iii) is satisfied by Invariant 2 ###reference_riant2### and Invariant 3 ###reference_riant3###.\nThus we have satisfied Lemma 2.1 ###reference_heorem1### with and . Setting gives us a -approximate maximum weight matching.\n\u220e\nWe have shown the following result that is weaker than what we have set out to prove by a factor of that we will show how to get rid of in Section 2.2 ###reference_###.\nLet be a weighted biparitite graph, and be a value such that . There exists a multiplicative auction algorithm running in time that finds a -approximate maximum weight matching of ."
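Since the pseudocode of MultiplicativeAuction and MatchR did not survive extraction, the following Python is a guess at its shape based on the surrounding description: per-buyer queues of (threshold, item) pairs at successive (1+ε) scales, consumed in decreasing order, with the displaced buyer rebidding immediately. The cut-off constant and the exact price update are assumptions, so this illustrates the mechanism rather than reproducing the stated bounds.

```python
def multiplicative_auction(adj, w, eps):
    n = max(len(adj), 1)
    w_max = max(w.values())
    floor = eps * w_max / n                  # assumed cut-off for tiny thresholds
    price = {j: 0.0 for i in adj for j in adj[i]}
    owner, matched = {}, {}

    def build_queue(i):                      # thresholds w(i,j)/(1+eps)^k, k >= 0
        pairs = []
        for j in adj[i]:
            t = w[i, j]
            while t > floor:
                pairs.append((t, j))
                t /= (1.0 + eps)
        pairs.sort()                         # ascending, so pop() yields the largest
        return pairs

    Q = {i: build_queue(i) for i in adj}

    def match_r(i):
        while i is not None:                 # iterative displacement chain
            nxt = None
            while Q[i]:
                t, j = Q[i].pop()            # next-highest remaining threshold
                if price[j] < t:             # j still offers positive utility at t
                    if j in owner:
                        nxt = owner[j]       # displaced buyer rebids next
                        del matched[nxt]
                    owner[j], matched[i] = i, j
                    price[j] = t             # price jumps to the popped threshold
                    break
            i = nxt

    for i in list(adj):
        if i not in matched:
            match_r(i)
    return matched
```

Every iteration of the inner loop consumes one queue element and never re-inserts it, so the total work is bounded by the total queue size, mirroring the running-time argument in the text.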
52
+ },
53
+ {
54
+ "section_id": "2.2",
55
+ "parent_section_id": "2",
56
+ "section_name": "Improving the running time",
57
+ "text": "Variations to the update rule \nWe remark that there is some flexibility in choosing the update rule in line 5(a) of MatchR.\nTo compute an -approximate maximum weight matching\nthe update rule in line 5(a) of MatchR can be any of the following:\n, with ,\n, with ,\n, with .\nIt suffices to verify that all invariants hold for different update rules.\nInvariant 1 ###reference_riant1###, 2 ###reference_riant2###, 3 ###reference_riant3###, and 4 ###reference_riant4### all hold regardless of the update rule, as they only use the fact that is non-decreasing throughout the algorithm, so we will only focus on Invariant 5 ###reference_riant5###.\nWe proved that update rule (1) works in Section 2.1 ###reference_### for .\nNote that if we chose an , we would still have , and Invariant 5 ###reference_riant5### holds.\nTo prove update rule (2) works for ,\nlet be matched to , and be the value of right before the update rule. Observe that and . Furthermore as otherwise and would not be trying to match to . Thus,\nSince we have shown that either update rule (1) or (2) work, we can choose the larger of the two update rules, i.e. the update of adding is also a valid update rule.\nHowever, as , this means that , so (3) is also a valid update rule.\n\u220e\nRemarks. \nUpdate rule (2) offers an alternative way to implement the algorithm with a running time of .\nUpdate rule (3) shows that can update the value of at most times before becomes non-positive, so using update rule (3) results in at most total updates.\nFurthermore, a careful reader may have noticed that Invariant 3 ###reference_riant3### only requires for an edge that when we stop considering that edge, so it suffices to only consider edges in multiples of and stop considering an edge when it falls below a value of .\nImproved algorithm \nFor simplicity of the exposition we will assume is a positive integer (otherwise we can choose a slightly smaller ).\nTo improve the running time to , we use the observation in the above remark that every edge only needs to be updated times if we use update rule (3) and we only need to consider edges in multiples of .\nThus it suffices if we change line 1.(b) in MultiplicativeAuction\nto insert copies of an edge if it has weight of the form for some , after rounding down to the nearest power of .\nThis change implies that we insert items into for every .\nHowever, sorting for every vertex individually, would be too slow.\nWe will instead sort on all the rounded edge weights at once, as we have total copies of the edges that can take on values of integers between and .\nAs , we can actually use radix sort to sort all the edges in linear time.\nAfterwards, we can go through the weight classes in decreasing order to insert the pairs into the corresponding .\nWe explicitly give the pseudocode below as MultiplicativeAuction+.\nAlgorithm 2.2: MultiplicativeAuction+\nNew runtime. \nRadix sorting all pairs and initializing the sorted for all takes linear time in the number of pairs.\nThe total amount of work done in MatchR for a vertex is which also sums to .\nThus we get our desired running time and have proven our main theorem that we restate here.\nSee 1.1 ###reference_heorem1###"
58
+ },
59
+ {
60
+ "section_id": "3",
61
+ "parent_section_id": null,
62
+ "section_name": "Dynamic algorithm",
63
+ "text": "There are many monotonic properties of our static algorithm.\nFor instance, for all the values strictly increase.\nAs another example, for all the value of strictly decreases.\nThese monotonic properties allow us to extend MultiplicativeAuction+ to a dynamic setting with the following operations.\nSee 1.2 ###reference_heorem2###\nType (1) operations: Deleting a vertex in . \nTo delete a vertex , we can mark as deleted and skip all edges in for any in all further computation.\nIf were matched to some vertex , that is if there exists an edge , we need to unmatch and remove from .\nAll our invariants hold except Invariant 3 ###reference_riant3### for the unmatched .\nTo restore this invariant we simply call MatchR.\nType (2) operations: Adding a new vertex to with all incident edges. \nTo add a new vertex to with incident edges to with , we can create the queue by inserting the pairs such that it is non-increasing in the first element of the pair.\nAfterwards we call MatchR.\nAll invariants hold after doing so."
64
+ }
65
+ ],
66
+ "appendix": [],
67
+ "tables": {},
68
+ "image_paths": {},
69
+ "validation": true,
70
+ "references": [
71
+ {
72
+ "1": {
73
+ "title": "An auction algorithm for bipartite matching in streaming and\nmassively parallel computation models.",
74
+ "author": "Sepehr Assadi, S. Cliff Liu, and Robert E. Tarjan.",
75
+ "venue": "In Hung Viet Le and Valerie King, editors, 4th Symposium on\nSimplicity in Algorithms, SOSA 2021, Virtual Conference, January 11-12,\n2021, pages 165\u2013171. SIAM, 2021.",
76
+ "url": null
77
+ }
78
+ },
79
+ {
80
+ "2": {
81
+ "title": "Nearly linear-time packing and covering LP solvers - achieving\nwidth-independence and -convergence.",
82
+ "author": "Zeyuan Allen-Zhu and Lorenzo Orecchia.",
83
+ "venue": "Math. Program., 175(1-2):307\u2013353, 2019.",
84
+ "url": null
85
+ }
86
+ },
87
+ {
88
+ "3": {
89
+ "title": "A simple (1-)-approximation semi-streaming algorithm\nfor maximum (weighted) matching.",
90
+ "author": "Sepehr Assadi.",
91
+ "venue": "CoRR, abs/2307.02968, 2023.",
92
+ "url": null
93
+ }
94
+ },
95
+ {
96
+ "4": {
97
+ "title": "A framework for dynamic matching in weighted graphs.",
98
+ "author": "Aaron Bernstein, Aditi Dudeja, and Zachary Langley.",
99
+ "venue": "In Samir Khuller and Virginia Vassilevska Williams, editors, STOC \u201921: 53rd Annual ACM SIGACT Symposium on Theory of Computing,\nVirtual Event, Italy, June 21-25, 2021, pages 668\u2013681. ACM, 2021.",
100
+ "url": null
101
+ }
102
+ },
103
+ {
104
+ "5": {
105
+ "title": "A new algorithm for the assignment problem.",
106
+ "author": "Dimitri P. Bertsekas.",
107
+ "venue": "Math. Program., 21(1):152\u2013171, 1981.",
108
+ "url": null
109
+ }
110
+ },
111
+ {
112
+ "6": {
113
+ "title": "Deterministic decremental reachability, scc, and shortest paths via\ndirected expanders and congestion balancing.",
114
+ "author": "Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak.",
115
+ "venue": "In Sandy Irani, editor, 61st IEEE Annual Symposium on\nFoundations of Computer Science, FOCS 2020, Durham, NC, USA, November\n16-19, 2020, pages 1123\u20131134. IEEE, 2020.",
116
+ "url": null
117
+ }
118
+ },
119
+ {
120
+ "7": {
121
+ "title": "Incremental (1-)-approximate dynamic matching in\no(poly(1/)) update time.",
122
+ "author": "Joakim Blikstad and Peter Kiss.",
123
+ "venue": "CoRR, abs/2302.08432, 2023.",
124
+ "url": null
125
+ }
126
+ },
127
+ {
128
+ "8": {
129
+ "title": "Dynamic algorithms for packing-covering lps via multiplicative weight\nupdates.",
130
+ "author": "Sayan Bhattacharya, Peter Kiss, and Thatchaphol Saranurak.",
131
+ "venue": "CoRR, abs/2207.07519, 2022.",
132
+ "url": null
133
+ }
134
+ },
135
+ {
136
+ "9": {
137
+ "title": "Online bipartite matching in offline time.",
138
+ "author": "Bartlomiej Bosek, Dariusz Leniowski, Piotr Sankowski, and Anna Zych.",
139
+ "venue": "In 55th IEEE Annual Symposium on Foundations of Computer\nScience, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, pages\n384\u2013393. IEEE Computer Society, 2014.",
140
+ "url": null
141
+ }
142
+ },
143
+ {
144
+ "10": {
145
+ "title": "Maximum flow and minimum-cost flow in almost-linear time.",
146
+ "author": "Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg,\nand Sushant Sachdeva.",
147
+ "venue": "CoRR, abs/2203.00671, 2022.",
148
+ "url": null
149
+ }
150
+ },
151
+ {
152
+ "11": {
153
+ "title": "Randomized MWU for positive LPs.",
154
+ "author": "Chandra Chekuri and Kent Quanrud.",
155
+ "venue": "In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual\nACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA,\nUSA, January 7-10, 2018, pages 358\u2013377. SIAM, 2018.",
156
+ "url": null
157
+ }
158
+ },
159
+ {
160
+ "12": {
161
+ "title": "Multi-item auctions.",
162
+ "author": "Gabrielle Demange, David Gale, and Marilda Sotomayor.",
163
+ "venue": "Journal of political economy, 94(4):863\u2013872, 1986.",
164
+ "url": null
165
+ }
166
+ },
167
+ {
168
+ "13": {
169
+ "title": "Linear-time approximation for maximum weight matching.",
170
+ "author": "Ran Duan and Seth Pettie.",
171
+ "venue": "J. ACM, 61(1):1:1\u20131:23, 2014.",
172
+ "url": null
173
+ }
174
+ },
175
+ {
176
+ "14": {
177
+ "title": "Fully dynamic -approximate matchings.",
178
+ "author": "Manoj Gupta and Richard Peng.",
179
+ "venue": "In 54th Symposium on Foundations of Computer Science, FOCS,\npages 548\u2013557. IEEE Computer Society, 2013.",
180
+ "url": null
181
+ }
182
+ },
183
+ {
184
+ "15": {
185
+ "title": "Recent advances in fully dynamic graph algorithms (invited talk).",
186
+ "author": "Kathrin Hanauer, Monika Henzinger, and Christian Schulz.",
187
+ "venue": "In James Aspnes and Othon Michail, editors, 1st Symposium on\nAlgorithmic Foundations of Dynamic Networks, SAND 2022, March 28-30, 2022,\nVirtual Conference, volume 221 of LIPIcs, pages 1:1\u20131:47. Schloss\nDagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2022.",
188
+ "url": null
189
+ }
190
+ },
191
+ {
192
+ "16": {
193
+ "title": "Unifying and strengthening hardness for dynamic problems via the\nonline matrix-vector multiplication conjecture.",
194
+ "author": "Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai, and Thatchaphol\nSaranurak.",
195
+ "venue": "In Proc. of the forty-seventh annual ACM symposium on Theory of\ncomputing, pages 21\u201330, 2015.",
196
+ "url": null
197
+ }
198
+ },
199
+ {
200
+ "17": {
201
+ "title": "The hungarian method for the assignment problem.",
202
+ "author": "Harold W Kuhn.",
203
+ "venue": "Naval research logistics quarterly, 2(1-2):83\u201397, 1955.",
204
+ "url": null
205
+ }
206
+ },
207
+ {
208
+ "18": {
209
+ "title": "A nearly linear-time PTAS for explicit fractional packing and\ncovering linear programs.",
210
+ "author": "Christos Koufogiannakis and Neal E. Young.",
211
+ "venue": "Algorithmica, 70(4):648\u2013674, 2014.",
212
+ "url": null
213
+ }
214
+ },
215
+ {
216
+ "19": {
217
+ "title": "Scalable auction algorithms for bipartite maximum matching problems.",
218
+ "author": "Quanquan C. Liu, Yiduo Ke, and Samir Khuller.",
219
+ "venue": "CoRR, abs/2307.08979, 2023.",
220
+ "url": null
221
+ }
222
+ },
223
+ {
224
+ "20": {
225
+ "title": "Dynamic matching algorithms under vertex updates.",
226
+ "author": "Hung Le, Lazar Milenkovic, Shay Solomon, and Virginia Vassilevska Williams.",
227
+ "venue": "In Mark Braverman, editor, 13th Innovations in Theoretical\nComputer Science Conference, ITCS 2022, January 31 - February 3, 2022,\nBerkeley, CA, USA, volume 215 of LIPIcs, pages 96:1\u201396:24. Schloss\nDagstuhl - Leibniz-Zentrum f\u00fcr Informatik, 2022.",
228
+ "url": null
229
+ }
230
+ },
231
+ {
232
+ "21": {
233
+ "title": "Algorithms for the assignment and transportation problems.",
234
+ "author": "James Munkres.",
235
+ "venue": "Journal of the society for industrial and applied mathematics,\n5(1):32\u201338, 1957.",
236
+ "url": null
237
+ }
238
+ },
239
+ {
240
+ "22": {
241
+ "title": "Nearly linear time approximations for mixed packing and covering\nproblems without data structures or randomization.",
242
+ "author": "Kent Quanrud.",
243
+ "venue": "In Martin Farach-Colton and Inge Li G\u00f8rtz, editors, 3rd\nSymposium on Simplicity in Algorithms, SOSA 2020, Salt Lake City, UT, USA,\nJanuary 6-7, 2020, pages 69\u201380. SIAM, 2020.",
244
+ "url": null
245
+ }
246
+ },
247
+ {
248
+ "23": {
249
+ "title": "Combinatorial optimization: polyhedra and efficiency,\nvolume 24.",
250
+ "author": "Alexander Schrijver et al.",
251
+ "venue": "Springer, 2003.",
252
+ "url": null
253
+ }
254
+ },
255
+ {
256
+ "24": {
257
+ "title": "Unified acceleration method for packing and covering problems via\ndiameter reduction.",
258
+ "author": "Di Wang, Satish Rao, and Michael W. Mahoney.",
259
+ "venue": "In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and\nDavide Sangiorgi, editors, 43rd International Colloquium on Automata,\nLanguages, and Programming, ICALP 2016, July 11-15, 2016, Rome, Italy,\nvolume 55 of LIPIcs, pages 50:1\u201350:13. Schloss Dagstuhl -\nLeibniz-Zentrum f\u00fcr Informatik, 2016.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "25": {
265
+ "title": "Nearly linear-time approximation schemes for mixed packing/covering\nand facility-location linear programs.",
266
+ "author": "Neal E. Young.",
267
+ "venue": "CoRR, abs/1407.3015, 2014.",
268
+ "url": null
269
+ }
270
+ }
271
+ ],
272
+ "url": "http://arxiv.org/html/2301.09217v5"
273
+ }
20240123/2301.11915v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2303.07700v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2303.07846v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2303.10728v2.json ADDED
@@ -0,0 +1,222 @@
1
+ {
2
+ "title": "Training Deep Boltzmann Networks with Sparse Ising Machines",
3
+ "abstract": "The slowing down of Moore\u2019s law has driven the development of unconventional computing paradigms, such as specialized Ising machines tailored to solve combinatorial optimization problems. In this paper, we show a new application domain for probabilistic bit (p-bit) based on Ising machines by training deep generative AI models with them. Using sparse, asynchronous, and massively parallel Ising machines we train deep Boltzmann networks in a hybrid probabilistic-classical computing setup. We use the full MNIST and Fashion MNIST (FMNIST) dataset without any downsampling and a reduced version of CIFAR-10 dataset in hardware-aware network topologies implemented in moderately sized Field Programmable Gate Arrays (FPGA). For MNIST, our machine using only 4,264 nodes (p-bits) and about 30,000 parameters achieves the same classification accuracy (90%) as an optimized software-based restricted Boltzmann Machine (RBM) with approximately 3.25 million parameters. Similar results follow for FMNIST and CIFAR-10. Additionally, the sparse deep Boltzmann network can generate new handwritten digits and fashion products, a task the 3.25 million parameter RBM fails at despite achieving the same accuracy. Our hybrid computer takes a measured 50 to 64 billion probabilistic flips per second, which is at least an order of magnitude faster than superficially similar Graphics and Tensor Processing Unit (GPU/TPU) based implementations. The massively parallel architecture can comfortably perform the contrastive divergence algorithm (CD-) with up to \u2009=\u2009 million sweeps per update, beyond the capabilities of existing software implementations. These results demonstrate the potential of using Ising machines for traditionally hard-to-train deep generative Boltzmann networks, with further possible improvement in nanodevice-based realizations.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "I. Introduction",
9
+ "text": "The slowing down of Moore\u2019s Law is ushering in an exciting new era of electronics where the traditionally separate layers of the computing stack are becoming increasingly intertwined. The rise of domain-specific computing hardware and architectures is driving unconventional computing approaches. One approach that generated great excitement recently is the field of Ising machines, where special-purpose hardware is developed to solve combinatorial optimization problems [1 ###reference_1###]. The goal of Ising machines is to improve energy efficiency, time to solution, or some other useful metric to solve optimization problems by co-designing all layers in the computing stack.\nIn this paper, we draw attention to another possibility of using probabilistic Ising machines, beyond combinatorial optimization, to demonstrate their application to deep generative AI models. We focus on\ndeep Boltzmann Machines (BM) that are multi-layer generalizations of the original Boltzmann Machine [2 ###reference_2###, 3 ###reference_3###]. Despite being powerful models, BMs fell out of favor from mainstream deep learning praxis [4 ###reference_4###], primarily because they are computationally hard to train with widely available hardware [5 ###reference_5###]. Our goal in this paper is to illustrate how a sparse version of deep BMs can be efficiently trained using special-purpose hardware systems that provide orders of magnitude improvement over commonly used software implementations in the computationally hard probabilistic sampling task.\nWith minor modifications, our core computational kernel fast probabilistic Markov Chain Monte Carlo sampling could support a large family of energy-based models, including restricted and unrestricted BMs [6 ###reference_6###], contrastive Hebbian learning [7 ###reference_7###], Gaussian-Bernoulli BMs [8 ###reference_8###, 9 ###reference_9###], equilibrium propagation [10 ###reference_10###], predictive coding [11 ###reference_11###] and related algorithms.\nWe design a probabilistic bit (p-bit) [12 ###reference_12###] based realization of Boltzmann networks, as their lowest level realization in hardware. Using FPGAs, we physically construct a network of binary stochastic neurons (BSN) in hardware and connect them to one another in a fixed hardware topology. We also design an asynchronous architecture where p-bits (BSNs) dynamically evolve in parallel, much like an interacting collection of particles without a synchronizing global clock. Such a low-level realization of a Boltzmann network provides up to 5 orders of magnitude improvement in generating samples from the Boltzmann distribution, even in moderately sized FPGAs. An intense amount of work is currently underway to design scaled probabilistic computers out of magnetic nanodevices [13 ###reference_13###, 14 ###reference_14###, 15 ###reference_15###, 16 ###reference_16###] which can scale probabilistic computers to much larger densities in energy-efficient implementations. Despite our FPGA-specific design in this paper, much of our results are applicable to scaled p-computers as well as other Ising machines based on many different physical realizations [1 ###reference_1###]. 
Our broader goal is to help stimulate the development of physics-inspired probabilistic hardware [17 ###reference_17###, 18 ###reference_18###] which can lead to energy-efficient systems to reduce the rapidly growing costs of conventional deep learning based on graphics and tensor processing units (GPU/TPU) [19 ###reference_19###].\n###figure_1###"
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "II. A Hybrid Probabilistic-Classical Computing Scheme",
15
+ "text": "The approach we take in this paper to train deep Boltzmann networks is to construct a hybrid probabilistic and classical computing setup (FIG. 1 ###reference_###a). The role of the classical computer is to compute gradients and the new set of weights and biases given a set of samples from the probabilistic computer. The role of the probabilistic computer is to generate equilibrium samples for a given network (defined by the set of weights and biases) according to the Boltzmann law:\nwhere is the configuration dependent energy and is the inverse (algorithmic) temperature. In our context of probabilistic sampling, is typically set to 1, unlike the optimization setting where it is gradually increased to find the configuration with minimum energy. In general, the configuration-dependent energy can be expressed as a -local Hamiltonian [22 ###reference_22###], in this paper, we focus on the 2-local energy that is given by:\nwhere and represent the network topology and represents the bipolar state of nodes that are either or . The probabilistic computer we design approximates the Boltzmann law by the following dynamical equations, where the effective field and the activation of are given by [12 ###reference_12###, 23 ###reference_23###]:\nand the activation of a p-bit is given:\nThe iterated evolution of Eq. (3 ###reference_###) and Eq. (4 ###reference_###) with a predefined (or random) update order generates samples approximating the Boltzmann law defined by Eq. (1 ###reference_###) [24 ###reference_24###]. Note that in the rest of this paper is always set to 1, except in image generation experiments where we anneal the network.\nAn important requirement to reach the Boltzmann equilibrium is that connected p-bits are updated serially (re-computing Eq. (3 ###reference_###) every time) rather than in parallel [25 ###reference_25###] so that each p-bit updates with the most up-to-date information. This iterative process is called Gibbs Sampling [26 ###reference_26###] and is a fundamental Markov Chain Monte Carlo (MCMC) algorithm used in many machine learning applications [27 ###reference_27###]. The physical implementation of Eq. (3 ###reference_###) and Eq. (4 ###reference_###) to perform MCMC introduces several challenges. The primary difficulty is the serial updating requirement of connected p-bits, prohibiting the parallelization of updates in dense networks. The second difficulty is to ensure p-bits receive all the latest information from their neighbors before updating, otherwise, the network does not sample from the true Boltzmann distribution [28 ###reference_28###]."
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "III. Hardware-aware Sparse Networks",
21
+ "text": "Both of these difficulties are more easily addressed in sparse networks. Sparsity limits the number of neighbors between p-bits allowing parallel and asynchronous updates on unconnected p-bits. Indeed, we show that as long as the chosen network topology is sparse, a massively parallel architecture where the frequency of probabilistic samples that linearly depends on the number of nodes in the network can be constructed [25 ###reference_25###] (see Section IX ###reference_### for details). Our present FPGA implementation of the probabilistic computer can take up to to flips per nanosecond (flips/ns) and projections indicate stochastic magnetic tunnel junction-based (sMTJ) implementations can take this number to about a million flips/ns or more (FIG. 1 ###reference_###b,c) [29 ###reference_29###, 30 ###reference_30###]. These projections have not been realized, however, nanosecond fluctuations with sMTJs [15 ###reference_15###, 31 ###reference_31###] have been demonstrated. Given the gigabit densities in MTJ-based memory [32 ###reference_32###], large-scale integration of p-computers remains plausible, with MTJ-based prototypes demonstrating architectures of the type we discuss here [13 ###reference_13###, 16 ###reference_16###, 14 ###reference_14###].\nIn this paper, we adopt the Pegasus [20 ###reference_20###] and the Zepyhr [21 ###reference_21###] topologies developed by D-Wave\u2019s quantum annealers to train hardware-aware sparse deep BMs (FIG. 1 ###reference_###d). Even though our approach is applicable to any sparse graph (regular and irregular), we focus on such hardware-aware networks with limited connectivity where maximum degrees range between 15 and 20. Our choice of sparse models is motivated by scaled but connectivity-limited networks such as the human brain and advanced microprocessors.\nDespite the common use of full connectivity in BM-based networks where inter-layer connections are typically fully connected [33 ###reference_33###], both advanced microprocessors with networks of billion transistors and the human brain exhibit a large degree of sparsity [34 ###reference_34###]. In fact, most hardware implementations of RBMs [35 ###reference_35###, 36 ###reference_36###, 37 ###reference_37###] suffer from scaling problems due to large fan-outs, requiring off-chip memory access or distributed computation in multiple chips [37 ###reference_37###]. On the other hand, sparse connectivity in hardware neural networks often exhibits energy and area advantages [38 ###reference_38###].\nFIG. 1 ###reference_###e shows a typical sparse DBM that we use in this paper with 2-layers of hidden bits. This graph is obtained by randomly assigning visible and hidden bits in the Pegasus (or Zephyr) graphs of various sizes. Unlike standard deep BMs [39 ###reference_39###, 40 ###reference_40###], sparse DBMs do not have fully-connected interlayer connections. 
On the other hand, they do allow connections between nodes in a given layer, increasing the representative capability of the network.\nIn Section VIII ###reference_###, we systematically study the effect of distributing visible/hidden nodes in such sparse networks, which introduces new challenges that do not exist in fully connected networks.\nUnlike standard deep BM training where training is typically done layer-by-layer [41 ###reference_41###], in sparse DBMs, we tackle the training directly on the full network, by relying on our massively parallel architecture and the efficient mixing of sparse graphs.\nAs we discuss in Section V ###reference_###, we reach about 90% classification accuracy in 100 epochs with the full MNIST dataset without any downsampling, coarse-graining, or the use of much simpler datasets, typically performed in alternative hardware-based approaches [42 ###reference_42###, 43 ###reference_43###, 44 ###reference_44###, 45 ###reference_45###]. To support our conclusions, we also train the harder fashion MNIST dataset and a reduced version of CIFAR-10, in the Supplementary Section .13 ###reference_3###-.14 ###reference_4###.\nMoreover, unlike RBMs, the sparse DBM learns the images well enough that for any given label, it can generate a new handwritten digit (or an FMNIST sample) as shown in FIG. 1 ###reference_###f, when a single one-hot encoded output p-bit is clamped to a given digit.\nImage generation is an important feature of physics-inspired algorithms such as diffusion models [46 ###reference_46###], and the fact that RBMs fail at this task even when they have 100 more parameters is surprising (both in MNIST and FMNIST), stressing the potential of sparse DBM models, as we discuss further in Section VI ###reference_###."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "IV. Training Sparse DBMs with Sparse Ising Machines",
27
+ "text": "###figure_2### As our network model, we use the Pegasus [20 ###reference_20###] and Zephyr [21 ###reference_21###] graphs at different sizes as a fixed network.\nBoltzmann networks are typically used for unsupervised learning without any explicit labels. To use deep BMs for classification, we follow a similar approach to [47 ###reference_47###] where we create additional visible bits, calling them \u201clabel bits\u201d. We use one-hot encoding to specify 10 digits with 10 label bits, such that each image in the MNIST data set is paired with these label bits (FIG. 1 ###reference_###e). We then use our fast Gibbs sampler (probabilistic computer) to perform the contrastive divergence (CD) algorithm [41 ###reference_41###, 48 ###reference_48###] that minimizes the KL divergence between the data and the model distributions. An equivalent formulation from a maximum likelihood estimation viewpoint [26 ###reference_26###, 49 ###reference_49###] can also be used to obtain the following learning rule (see Supplementary Section .9 ###reference_###),\nwhere and represent the weight and bias updates per iteration and the terms in the parentheses represent the negative gradient of the KL divergence between data and the model distributions. is the learning rate, and are the average correlation between p-bits and in the \u201cpositive\u201d (data) and \u201cnegative\u201d (model) phases, respectively. During the positive phase of sampling, the p-computer clamps the visible p-bits to the corresponding training image one after the other, taking sweeps for each image for a total of sweeps where is the batch size. Using these sweeps, the CPU then computes the data correlations . In the negative phase, the p-computer is allowed to run freely without any clamping, and the CPU computes the model correlations by taking sweeps. Then the connection weights are updated according to Eq. (5 ###reference_###) and Eq. (6 ###reference_###). In actual training, we also use a momentum modification to Eqs. (5 ###reference_###,6 ###reference_###) (see Supplementary Section .8 ###reference_###). A pseudocode of the algorithm is presented in Algorithm 1 ###reference_hm1###.\nFor the sparse DBMs we consider in this work, establishing correlations between the data requires executing Gibbs sampling even for the positive phase, which is obtained in a single inference step in RBMs. Our machine can be configured to implement the persistent contrastive divergence (PCD) [50 ###reference_50###, 6 ###reference_6###, 51 ###reference_51###] algorithm. PCD maintains a long-running Markov chain such that small changes in weights do not take the equilibrium state of the new network far from the old one. We discuss the possible benefits of PCD vs CD in the context of our results in Section VII ###reference_###."
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "V. Results on the Full MNIST dataset",
33
+ "text": "The dataset that we used for training sparse DBMs is the full MNIST handwritten digit dataset [52 ###reference_52###, 53 ###reference_53###] without any reduction or downsampling. We show results on FMNIST and CIFAR-10 in the Supplementary Section .13 ###reference_3###-.14 ###reference_4###. MNIST consists of 60,000 training images and 10,000 test images with pixels having digits from 0 to 9 and we use black/white images by rounding up the pixel intensities. We set the initial values of weights and biases according to the recipe that Hinton suggested for RBMs in [6 ###reference_6###]. The weights are initialized to small random values chosen from a zero-mean Gaussian with a standard deviation of 0.01 for every p-bit. The initial values of hidden biases are set to zero and visible biases are set to log[] where is the proportion of training vectors in which unit is on. The values of hyperparameters used while training are = 1, learning rate = 0.003, and momentum = 0.6.\n###figure_3### The sparse DBM network used here (the largest size Pegasus that we can fit into our FPGA) consists of 834 visible p-bits (; we used 5 sets of labels each containing 10 p-bits) and 3,430 hidden p-bits arranged in 2-layers as shown in the inset of FIG. 2 ###reference_###a. Then we randomly distribute the visible and hidden units on the sparse DBMs to ensure the label indices are delocalized (see Section VIII ###reference_### for details of this process and the original network in Supplementary FIG. S13 ###reference_###).\nTo train the network efficiently, we divide the training set into 1,200 mini-batches having 50 images in each batch. The weights are updated after each mini-batch following the CD algorithm. We train MNIST for 100 epochs with CD-, where sweeps of the entire network are taken in the negative phase (). The weight precision in the FPGA is 10 bits (1 sign, 6 integer, and 3 fraction bits, i.e., s{6}{3}) while the CPU uses double-precision floating-point with 64 bits, to compute the gradients. Before the new weights are loaded into the FPGA, however, they are reduced to s{6}{3} to fit into the FPGA. A systematic study of the effect of weight precision is shown in Supplementary Section .5 ###reference_### along with image completion experiments. In short, we do not observe any significant differences at higher precision in the FPGA, indicating that the 10-bit weight precision is adequate.\nDuring inference, the 784 p-bits that correspond to the pixels are clamped to the test data and the label p-bits fluctuate freely. To test classification accuracy, we use sweeps and perform a softmax classification scheme as follows: as we have 50 label p-bits for 5 sets of labels, by time-averaging the corresponding label bits we finally have the 10 labels for 10 digits. The p-bit with the highest probability of being \u20181\u2019 is used for the classified digit. For comparison, we also train an optimized RBM model using CD-1 in the CPU. The label, testing, and training details of RBMs are very similar to those of sparse DBMs.\nFIG. 2 ###reference_### shows our main results. We see that the sparse DBM architecture in the Pegasus graph with 4,264 p-bits reaches about 90% accuracy in 100 epochs (see Supplementary Section .4 ###reference_### where the training accuracy can reach 100% for MNIST/100 images). To compare the sparse DBM architecture with a standard RBM, we perform two tests, one at \u201ciso-parameter\u201d and the other at \u201ciso-accuracy\u201d. 
The iso-parameter test uses an RBM with about the same number of parameters (with an all-to-all interlayer connection). This RBM falls short of reaching 90% in this setting. Then, we choose an RBM with 100 more parameters and observe that the results saturate at about 90% accuracy. We also note that increasing CD-1 to CD- ( up to 100) does not result in an appreciable difference in accuracy while making the training computationally much harder.\nDetailed testing in both models (sparse DBM and RBM) indicates that marginal improvements are possible with more training epochs, however, both models show similar asymptotic behavior in 100 epochs, this is why we stop training around 100 epochs (FIG. 2 ###reference_###d shows experiments at various network sizes). Note that this is still a computationally intense process where 60,000 images are shown to the network for a total of 6,000,000 times and the weights are updated a total of 100 = 120,000 times since \u2009=\u20091200.\nTo investigate the effect of total parameters of sparse DBMs on the accuracy, we used five Pegasus graphs of different sizes to train MNIST using our massively parallel architecture. These include 960, 1,664, 2,560, 3,080, and 4,264 p-bit graphs with a varying number of parameters from to as shown in FIG. 2 ###reference_###d. We trained full MNIST on each of these five sparse DBMs with CD- using the same hyperparameters and reported the classification accuracy for the entire test set. Similarly, we also trained eight different RBMs with full MNIST for 100 epochs to compare their accuracy with the number of parameters (FIG. 2 ###reference_###d). Increasing the number of parameters to millions could not increase the test accuracy significantly whereas 90% accuracy is achieved with around 200,000 parameters.\nBased on these experimental results, we arrive at the following two important conclusions: First, the sparse DBM architecture despite having a much smaller degree of connections between its layers (limited to a graph degree of 15 to 20) matches the classification accuracy of a fully-connected RBM. Second, the sparse DBM requires far fewer parameters (about 30,000) to reach 90% accuracy in the MNIST dataset. Both of these indicate the potential of sparse DBMs which can be directly tackled by the orders of magnitude acceleration obtained in the hardware. We show in the Supplementary Section .13 ###reference_3###-.14 ###reference_4### that similar results with the same order of magnitude differences between sparse DBMs and RBMs hold for the full FMNIST and a reduced version of the CIFAR-10 dataset.\nCompared to more powerful standard DNN algorithms such as CNNs, sparse DBMs do not reach state-of-the-art classification accuracy in MNIST at these modest network sizes and depths. Further improvements should be possible by algorithmic techniques and at larger sizes as discussed in Ref\u2019s [54 ###reference_54###, 50 ###reference_50###]. Surprisingly, however, in a head-to-head comparison using the same contrastive divergence algorithm, the sparse DBM architecture matches the performance of highly optimized RBMs, despite the severely limited connectivity. More detailed comparisons may reveal the true potential of hardware-aware sparse DMBs which can be implemented on Ising Machines. It is important to note that the generative nature of BMs allows applications beyond classification, such as representing many-body quantum wavefunctions [55 ###reference_55###, 56 ###reference_56###]."
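As a concrete illustration of the label read-out just described, here is a sketch of the softmax-style classification, reusing the sweep helper from the CD sketch above; the array shapes and names are assumptions.

```python
import numpy as np

def classify(W, b, image, img_idx, label_idx, m, n_sweeps=1000):
    """Clamp the 784 pixel p-bits and time-average 5x10 label p-bits."""
    m[img_idx] = image                       # clamp pixels to test image
    counts = np.zeros(10)
    frozen = set(img_idx)
    for _ in range(n_sweeps):
        sweep(W, b, m, clamped=frozen)
        counts += (m[label_idx] > 0).sum(axis=0)   # label_idx: (5, 10)
    return int(np.argmax(counts))            # most probable digit
```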
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "VI. Image generation",
39
+ "text": "Given their generative nature, a natural question to ask is whether the sparse DBM and RBM can generate new images when they are run in the \u201cinvertible mode\u201d. This is similar to the image generation from \u201cnoise\u201d discussed in diffusion models [46 ###reference_46###]. We test this idea post-training by clamping the label bits to a given digit and annealing the network using the inverse temperature, .\nHere we present an example of image synthesis with sparse DBMs with \u200930,000 parameters and the optimized RBM with \u20093.25 million parameters (FIG. 3 ###reference_###a-b for MNIST and FIG. 3 ###reference_###c-d for Fashion MNIST). For this process, we clamp the label bits for digits \u20180\u2019 or \u20187\u2019 in the case of MNIST while all other p-bits run freely. Using the final weights and biases and by annealing the network slowly from \u2009=\u2009 to \u2009=\u2009 with a increment and we check the 784 image p-bits at various time steps. At lower values of when the system is in a high-temperature state, the model is sampling from noise (first column of FIG. 3 ###reference_###a). With increasing values, digits gradually become recognizable, and at final \u2009=\u2009 we see clear images of a \u20180\u2019 or \u20187\u2019 (leftmost column of FIG. 3 ###reference_###a). This example is demonstrated with the Pegasus graph using about 30,000 parameters from FIG. 2 ###reference_###, similar results are obtained with the Zephyr graph as we discuss in Supplementary Section .7 ###reference_###. Using the same approach of image generation, the fashion products \u2018Trouser\u2019 or \u2018Pullover\u2019 from FMNIST are generated as shown in FIG. 3 ###reference_###c (details in Supplementary Section .13 ###reference_3###).\nIn contrast, the images generated with the RBM (4,096 hidden units) are not recognizable even after careful annealing (FIG. 3 ###reference_###b-d), despite multiple experiments with different trials. Similarly, the annealing schedule for RBM varies from \u2009=\u2009 to \u2009=\u2009. To test whether RBMs can generate images with better gradient calculations, we also trained an RBM with 4,096 hidden p-bits with CD- but this did not lead to any success in image generation. Interestingly, however, the RBM with 4,096 hidden p-bits and CD- can accomplish a simpler \u201cimage completion\u201d task when it is presented half of a given digit (see Supplementary Section .11 ###reference_1###).\nThese results seem to be in keeping with the idea that in freely \u201cdreaming\u201d or image-completing networks, it may be possible to generate images with RBMs [57 ###reference_57###]. In our experiments, the image generation task is forced by clamping only the label bits, without giving the network any other lead. In this stricter sense, the failure of the RBM to generate images is consistent with the general understanding on the subject [58 ###reference_58###, 59 ###reference_59###]. We believe that accelerating the Gibbs sampling by orders of magnitude can enable the training of even deeper, previously untrainable deep BMs. The potential of physics-inspired RBMs for image generation is also seen in the recent interest in Gaussian-Bernoulli (GRBMs) [9 ###reference_9###], whose sparse and deep variants could be even more powerful."
40
+ },
41
+ {
42
+ "section_id": "7",
43
+ "parent_section_id": null,
44
+ "section_name": "VII. Mixing times",
45
+ "text": "One of the key difficulties that are often cited in the training of Boltzmann networks is the computational intractability of the partition function, , that appears in the Boltzmann law, Eq. (1 ###reference_###). Formally, what is required to ensure an exact calculation of the gradient is that the calculation of correlations and averages come from the equilibrium states of a given network defined by and . The time it takes for an undirected network to reach equilibrium is defined as the \u201cmixing time\u201d. A formal analysis of how long it takes for a given graph to mix can be extremely difficult to calculate and is unknown for all but the simplest, most regular networks [60 ###reference_60###]. Here, we empirically study the mixing time of the Pegasus graph that we used in generating our main results in FIG. 2 ###reference_###. The fact that there is no a priori method to determine mixing times, a lot of hyperparameter optimization might be necessary to squeeze the maximum results out of these networks (see, for example, [6 ###reference_6###]).\n###figure_4### ###figure_5### In FIG. 4 ###reference_###, we observe that the test set accuracy of our network increases significantly if the probabilistic sampler takes or more sweeps per weight update. Above this value, there seem to be diminishing returns in improving the accuracy. This suggests that taking more sweeps does not improve the estimation of the averages and correlations because these samples are already in equilibrium and \u2009 sweeps at this size (with 4,264 p-bits) of the Pegasus graph can be empirically defined as the mixing time of the network (Supplementary Section .6 ###reference_### shows the mixing time study of different size Pegasus). As mentioned earlier, our probabilistic computer could be modified to perform persistent CD algorithm (PCD) [50 ###reference_50###]. Beyond CD-, this may have diminishing returns since the chain mixes and starts sampling from the equilibrium distribution, even if it starts from a random state, as shown in FIG. 4 ###reference_###.\nThe reason for the saturating classification accuracy of sparse DBMs at around 90% is likely that the network is not deep or wide enough and not because of the intractability of the algorithm. In fact, considering our hardware architecture FPGA is able to take 64 billion samples per second, and obtaining sweeps from our machine can be done in mere milliseconds (Table 1 ###reference_### shows comparisons of sampling rates between standard CPUs and our graph colored (GC) architecture, where our probabilistic computer (GC-FPGA) demonstrates 4 to 6 orders of magnitude improvement over the optimized and standard CPU implementations of Gibbs sampling, respectively). In Supplementary Section .10 ###reference_0###, we show how the performance reported in Table 1 ###reference_### fares against superficially similar Ising solvers in highly optimized GPU and TPU implementations.\nThese results suggest that our machine can be used to sample much more \u201cdifficult\u201d networks that require many more samples to mix, enabling the training of previously untrainable networks with potentially richer representation.\nIt is important to note that in case of the contrastive divergence algorithm where the goal is to estimate model correlations and averages, one does not need to compute the effect of all samples. After the network reaches equilibrium, a small number of samples may be used to estimate the averages and correlations (with complexity). 
This significantly eases practical read-out constraints of a fast probabilistic sampler (See Supplementary Section .2 ###reference_### for our detailed read-out architecture)."
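The sparse read-out idea translates into a very small amount of post-processing; a minimal sketch, assuming K equilibrium snapshots have already been read out of the sampler:

```python
import numpy as np

def estimate_model_stats(snapshots):
    """Estimate <m_i> and <m_i m_j> from K equilibrium snapshots.

    snapshots: (K, N) array of bipolar p-bit states read out after mixing.
    """
    S = np.asarray(snapshots, dtype=float)
    mean = S.mean(axis=0)                    # <m_i>_model
    corr = (S.T @ S) / len(S)                # <m_i m_j>_model
    return mean, corr
```

The estimate's cost depends only on K, not on how many sweeps the hardware actually performed between snapshots.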
46
+ },
47
+ {
48
+ "section_id": "8",
49
+ "parent_section_id": null,
50
+ "section_name": "VIII. Randomization of indices",
51
+ "text": "A very important point that arises in training Boltzmann network models on a given sparse network is the notion of graph distance between visible, hidden, and label nodes. Typically, if the layers are fully connected, the graph distance between any given two nodes is a constant. On the other hand, when training sparse DBMs, the placement of visible, hidden, and label p-bits plays a crucial role. FIG. 5 ###reference_### shows the comparison of how the indexing of p-bits affects the classification accuracy in the Pegasus and Zephyr graphs. We observe that if the hidden, visible, and label bits are clustered and too close, the classification accuracy suffers greatly. This is likely because the correlation between the label bits and the visible bits gets weaker if their graph distance is too large. On the other hand, randomizing the indices seems to solve this problem, repeatable in completely different but sparse graphs. We performed further experiments with different random indices and essentially observed the same behavior. For example, FIG. 2 ###reference_###d shows a monotonically increasing accuracy with different sizes of sparse DBMs (Pegasus) even though each graph has a different randomized index set.\n###figure_6### To reduce the graph distance between the label bits, visible and hidden bits, we chose 5 sets of label bits (510 = 50 p-bits) using one-hot encoding per digit. Experiments with more label bits did not show significant differences. Also, experiments with multiple label bits in the RBM did not show any difference. This suggests that randomization of indices is particularly important for sparse models, but is unnecessary for fully-connected networks whose graph distance between any two nodes is a constant."
52
+ },
53
+ {
54
+ "section_id": "9",
55
+ "parent_section_id": null,
56
+ "section_name": "IX. p-computer Architecture",
57
+ "text": "On the sparse DBM, we color the graph using the heuristic graph-coloring algorithm DSatur [61 ###reference_61###] to exploit parallel updating of unconnected p-bits. This approach involves assigning different colors to connected p-bits and the same color to unconnected p-bits as shown in FIG. 6 ###reference_###a to implement Gibbs sampling in a massively parallel manner on sparse and irregular graphs [25 ###reference_25###]. Finding the minimum number of colors is an NP-hard problem, however, the minimum number of colors is not a strict requirement as sparse graphs require only a limited number of colors, and for our purpose, heuristic coloring algorithms like DSatur with polynomial complexity can color the graph efficiently.\nIn the case of the Pegasus graph with 4,264 p-bits, where the maximum number of neighbors is 15, only four colors are used as shown in FIG. 6 ###reference_###a. Therefore we need four equally phase-shifted and same-frequency clocks for updating the p-bits in each color block one by one. Similarly, the Zephyr graph (3,360 p-bits and the maximum number of neighbors is 20) can also be colored with five colors using this procedure. In this approach, a graph comprised of p-bits is able to perform a full sweep in a single clock cycle (). We refer to this architecture as the pseudo-asynchronous Gibbs sampling [17 ###reference_17###]. The key advantage of this approach is that the p-computer becomes faster as the graph size grows as shown in FIG. 6 ###reference_###b and Table 1 ###reference_### for both graph-colored FPGA and graph-colored CPU.\nParallelism offers many more samples to be taken at a clock cycle (scales as N, being the number of p-bits in the network as shown in FIG. 6 ###reference_###b), however, we also establish that this parallelism does not introduce any approximations or errors by performing an \u201cinference\u201d experiment as discussed in Supplementary Section .12 ###reference_2###."
58
+ }
59
+ ],
60
+ "appendix": [
61
+ {
62
+ "section_id": "Appendix x1",
63
+ "parent_section_id": null,
64
+ "section_name": "Methods",
65
+ "text": "In this article, Xilinx Alveo U250, a data center accelerator card (Virtex UltraScale+ XCU250 FPGA) with peripheral component interconnect express (PCIe) connectivity has been used [65 ###reference_65###]. PCIe interface performs data transfer at the rate of 2.5 gigatransfers per second (GT/s). The classical computer used in this study is equipped with an 11th Gen Intel Core i7-11700 processor with a clock speed of up to 4.90 GHz and 64 GB of random access memory (RAM).\nThe digital implementation of p-bits consists of a pseudorandom number generator (PRNG), a lookup table for the activation function (tanh), and a threshold to generate a binary output (details in the Supplementary Section .1 ###reference_###). The read-out architecture with mirror p-bits is discussed in the Supplementary Section .2 ###reference_###. Weights and biases with fixed point precision of 10 bits (1 sign bit, 6 integer bits, and 3 fraction bits) are used to provide tunability through the activation function.\nMNIST files are downloaded from [52 ###reference_52###]. Then the image data are converted to binary form (black and white) by rounding up the pixel intensities in MATLAB. While we focused on black and white images for our main results, in the Supplementary Section .13 ###reference_3###-.14 ###reference_4###, we show how the learning algorithm can be extended to learn grayscale images, following a similar time-averaging approach discussed in Ref. [66 ###reference_66###]. The Pegasus and Zephyr graphs are extracted using the procedure described in [67 ###reference_67###]. The RBM code used in this work is similar to the one available in [68 ###reference_68###].\nA PCIe interface is used to communicate between FPGA and CPU through MATLAB interface for the \u2018read/write\u2019 operations (see Supplementary FIG. S1 ###reference_###c). A global \u2018disable/enable\u2019 signal broadcast from MATLAB to the FPGA is used to freeze/resume all p-bits. Before a \u2018read\u2019 instruction, the p-bit states are saved to the local block memory (BRAM) with a snapshot signal. Then the data are read once from the BRAM using the PCIe interface and sent to MATLAB for post-processing i.e., computing gradients and updating the weights. For the \u2018write\u2019 instruction, the \u2018disable\u2019 signal is sent from MATLAB to freeze the p-bits before sending the updated weights. After the \u2018write\u2019 instruction is done, p-bits are enabled again with the \u2018enable\u2019 signal sent from MATLAB. The data transfer efficiency is influenced by this back-and-forth communication between the FPGA and MATLAB. Furthermore, the conversion of bipolar to binary weights and biases during each epoch (as explained in Supplementary .1.2 ###reference_.SSS2###) adds some time overheads while sending them from MATLAB to FPGA. Even though sampling is very fast in FPGA, due to these overheads it takes \u200920 hours to train full MNIST on 4,264 p-bit Pegasus with CD- for 100 epochs. This issue can be improved significantly by updating the weights and biases inside the FPGA. To understand the improvement introduced by our hybrid approach, we also note that the corresponding equivalent version with CD- and with the same graph coloring on a CPU took days (57.5 hours) to complete only 10 epochs (projected time for 100 epochs is days).\nTo measure the flips/ns, one p-bit in each color block is designed with a programmable counter in the FPGA to count the flip attempts. 
A reference counter running parallelly is set to count up to a preset value at the positive edge of a reference clock. When the reference counter is done counting, the p-bit counters are stopped. Comparing the p-bit counter outputs (representing the total number of attempted flips in each color block) with the reference counter preset value, the time for the total flips is obtained. With this data, the flips/ns of the p-computer is measured experimentally. To determine the flips/ns for the standard CPU and graph-colored CPU, MATLAB built-in \u2018tic\u2019 and \u2018toc\u2019 functions are used to measure the elapsed time while counting the total flips. The flips/ns is measured in real-time using this data. The error bars in FIG. 6 ###reference_###b are obtained by taking 500 measurements of flips/ns."
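A software analogue of this counter-based throughput measurement, as a minimal sketch; the function and its parameters are illustrative stand-ins for the tic/toc measurement used for the CPU baselines.

```python
import time

def measure_flips_per_ns(sweep_fn, n_pbits, n_sweeps=1000):
    """Count attempted flips over a timed window and return flips/ns."""
    t0 = time.perf_counter()
    for _ in range(n_sweeps):
        sweep_fn()                       # one full sweep = n_pbits flip attempts
    elapsed = time.perf_counter() - t0
    return (n_sweeps * n_pbits) / (elapsed * 1e9)
```

For example, `measure_flips_per_ns(lambda: gibbs_sweep(W, b, m), len(m))` would time the serial software sampler sketched earlier.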
66
+ },
67
+ {
68
+ "section_id": "Appendix x2",
69
+ "parent_section_id": null,
70
+ "section_name": "Acknowledgements",
71
+ "text": "We gratefully acknowledge discussions with Dr. Jan Kaiser. We are thankful to the Xilinx University Donation Program (XUP) for the FPGA development boards and G. Eschemann for useful discussions on airhdl. This work is partially supported by an Office of Naval Research Young Investigator Program grant and a National Science Foundation CCF 2106260 grant."
72
+ },
73
+ {
74
+ "section_id": "Appendix x3",
75
+ "parent_section_id": null,
76
+ "section_name": "Data availability",
77
+ "text": "The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request."
78
+ },
79
+ {
80
+ "section_id": "Appendix x4",
81
+ "parent_section_id": null,
82
+ "section_name": "Code availability",
83
+ "text": "The computer code used in this study is available from the corresponding author upon reasonable request."
84
+ },
85
+ {
86
+ "section_id": "Appendix x5",
87
+ "parent_section_id": null,
88
+ "section_name": "Author contributions",
89
+ "text": "SN and KYC conceived the study. KYC supervised the study. SN and NAA developed the hybrid FPGA-CPU implementation. SN and SC performed the benchmark RBM training. SN and NAA performed the FPGA experiments to train sparse DBMs. SN, NAA, MM, SC, YQ, KYC discussed, analyzed the experiments, and participated in writing the manuscript."
90
+ },
91
+ {
92
+ "section_id": "Appendix x6",
93
+ "parent_section_id": null,
94
+ "section_name": "Competing interests",
95
+ "text": "The authors declare no competing interests."
96
+ },
97
+ {
98
+ "section_id": "Appendix x7",
99
+ "parent_section_id": null,
100
+ "section_name": "Supplementary Information",
101
+ "text": "We present the experimental results in the main paper using a hybrid classical-probabilistic computer. Here, we discuss the technical details of the FPGA-based p-computer implemented on a Xilinx Alveo U250 data center accelerator card. The basic architecture is presented in FIG. S1 ###reference_###.\nFIG. S2 ###reference_### shows our readout architecture that can take a \u201csnapshot\u201d of the system at a given time. The nature of the learning algorithm is that only a handful of equilibrium samples are needed to estimate correlations and averages without requiring continuous samples. By way of mirror (or copy) p-bit registers, we decouple system p-bits (that are always on) from snapshot states that are saved to memory. In this way, while the system goes through a large number of samples, we are able to take a sufficient number of samples that are saved to local memory (at our own accord) which can then be read by a CPU.\n###figure_7### ###figure_8### The networks we present in the manuscript (Pegasus 4,264 p-bits and Zephyr 3,360 p-bits) are highly sparse, which we quantitatively show in FIG. S3 ###reference_###.\nWe use the typical graph density metric, measured by\nBy this metric, the network we have shown in FIG. 1 ###reference_###e (Pegasus, 4264) only has a graph density of . Moreover, we also show a vertex degree distribution of the network providing a histogram of nodes and the number of their neighbors. On the right, the same metrics are shown for the Zepyhr graphs with similar results.\nWe have studied the effect of training the sparse DBMs (Pegasus 4,264 p-bits) with a small subset of data before training the full MNIST. In this setting, we chose 100 images from MNIST to train on the sparse network with our massively parallel architecture. To train these 100 images, we used 10 mini-batches, having 10 images in each batch and the same set of hyperparameters as in training full MNIST. The training accuracy reached 100% within 1,000 epochs as illustrated in FIG. S4 ###reference_###. We also explored different values of the \u2018regularization\u2019 parameter ( = 0.005, 0.001) which is generally used to keep the weights from growing too much. In our case of sparse DBM, we did not observe any significant difference between a small regularization and without any regularization.\nThe poor test accuracy here is an indication of overfitting due to the small size of the training set (only 100 images). We observed similar accuracy on Zephyr graphs with 3,360 p-bits for 100 images.\n###figure_9### The results we have in Section V ###reference_### in the main paper utilized a weight precision of 10 bits (1 sign, 6 integer, and 3 fraction bits i.e., s{6}{3}). Here, we explore different weight precisions by changing the fraction bit width and compare the results to identify the effect of weight precision on accuracy. We trained full MNIST with 1,200 mini-batches for 200 epochs, using five different weight precisions of s{6}{2}, s{6}{3}, s{6}{5}, s{4}{2} and s{3}{2}. We chose a Pegasus 2,560 p-bit graph as a sparse DBM for this experiment that fits into the FPGA since increasing bit precision reduces the available resources. The weight update is accomplished in MATLAB (double-precision floating point with 64 bits), but before the new weights are loaded to the FPGA, they are converted to the corresponding fixed-point precision. The choice of hyperparameters remains the same for all cases. 
The test accuracy goes to % in each case (with 17,984 parameters) and there is no remarkable difference among the accuracy of the different weight precisions between s{6}{5} to s{6}{3}, accuracy starts degrading at or below s{4}{2} (FIG. S5 ###reference_###a). We also trained full MNIST on RBM (512 hidden units) using both float64 and s{6}{3} weight precision for 200 epochs. The test accuracy remains the same for these two different precisions as shown in FIG. S5 ###reference_###b.\nTo further study the impact of weight precision on the generative properties of the sparse DBM network, we have conducted image completion experiments. FIG. S5 ###reference_###c shows inference experiments where we obscure half of an image and let a trained network evolve to the corresponding minima (by annealing the network from \u2009=\u20090 to \u2009=\u20095). We observe that while s{6}{3} can complete this task, precisions below s{4}{2} start failing.\n###figure_10### We described the mixing times in Section VII ###reference_### of the main paper showing the results (FIG. 4 ###reference_###) from our largest size Pegasus (4,264 p-bits). Here, we show another graph, Pegasus with 3,080 p-bits to measure the mixing time of the network. Unlike the main model, for this experiment, we trained full MNIST for only 50 epochs (instead of 100) using the same hyperparameters as mentioned in the main Section V ###reference_### with different numbers of sweeps starting from CD- to CD-. Test accuracy improves significantly when we take more than CD- per epoch.\n###figure_11### In the main paper, FIG. 1 ###reference_###f displays the images generated with Pegasus (4,264 p-bits) graph and the procedure is described in Section VI ###reference_###. Here we explored image generation with a different type of sparse DBM, Zephyr (3,360 p-bits) that also reached % accuracy with randomized indices as demonstrated in the main FIG. 5 ###reference_###c (bottom). The generated images with Zephyr as shown in FIG. S7 ###reference_### are slightly different from the Pegasus ones.\n###figure_12### In our training, we used the momentum in our update rules, which are empirically added to the learning rule we discuss in the next section. By retaining a portion of the last update to the weights, momentum helps increase the effective learning rate [6 ###reference_6###]. The effective increase in the learning rate is equivalent to multiplying it by a factor of 1/(1-) where is denoted as momentum. Using this process, the algorithm can increase the effective learning rate without causing unstable oscillations, which ultimately speeds up the convergence of the training process [50 ###reference_50###]. We modify the learning rule equations in the main Eq. (5 ###reference_###) and Eq. (6 ###reference_###) by introducing the momentum term as follows:\nwhere represents the th index (ranging from 1 to the number of batches) in Algorithm 1 in the main paper.\nThe basic idea of Boltzmann networks is to start from a physics-inspired variational guess, that the data distribution will be approximated by a model whose probability for a given input vector (, being the input index) obeys the Boltzmann law (ignoring biases in our derivation for simplicity):\nIn our setting, we have a system of fully visible p-bits connected in some arbitrary graph topology. The problem is learning a \u201ctruth table\u201d with exactly lines of inputs in it. The model is going to try to select these states in the space of possible discrete probabilities. 
Like in any other ML model, fitting every line of the truth table exactly will overfit, but the Boltzmann formulation given by Eq. (S.6 ###reference_###) smooths out the sum of \u201cdelta function\u201d-like data vectors in the space, which can later be used for generating new samples.\nWe define a as the probability distribution of the data, corresponding to the visible bits. Then, a maximum likelihood estimation minimizing the Kullback\u2013Leibler divergence between the data and the model can be used to derive the learning rule, by taking the negative derivative of :\nwhere is the index of truth table lines . To simplify analysis, we consider fully visible networks where is independent of for any network topology since represents the data distribution.\nwhere the index represents all possible states from 1 to for the model.\nwhich gives the familiar learning rule. A similar learning rule in terms of the averages can be derived by accounting for the biases in the energy, which we ignored for simplicity.\nIn the main paper, we show how graph-colored FPGA achieves massive parallelism to provide a few orders of magnitude faster sampling throughput than traditional CPUs. Here in Supplementary Table S1 ###reference_###, we also compare the sampling speed to some state-of-the-art (SOTA) Ising machines implemented on the latest GPUs and TPUs. The throughput reported in this work up to 64 billion flips per second outperforms the numbers reported by the SOTA Ising solvers in GPUs and TPUs. It is also important that this comparison is not completely accurate and favors the GPU/TPU implementations for two reasons: First, all the GPUs and TPUs discussed here are solving simple, nearest-neighbor chessboard lattices, unlike the irregular and relatively high-degree (with up to 20 neighbors) graphs used in this work. Second, GPU/TPU implementations generally use low precision {+1,-1} weights (compared to 10 bits of weight precision in our work) and thus can explore only a few discrete energy levels. Both of these features are heavily exploited in reporting a large degree of flips/ns in these solvers and their performance would presumably be much worse if they were implemented in the same graphs with the same precision we discuss in this work.\nA typical task for energy-based generative models is that of image completion as opposed to image generation. Image completion is relatively easier since the network is clamped near local minima. In the main manuscript, we show how an iso-accuracy RBM with around 3M parameters cannot perform image generation. Here, we show that an RBM can complete the easier task of image completion. We clamp half of the visible bits along with the labels and clamp the other half to noise. Then we let the trained network evolve to the corresponding minima by annealing the network from \u2009=\u20090 to \u2009=\u20095. Results are shown for a sparse DBM (4,264 p-bits) and RBM (4,096 hidden units, trained with CD-100). We observe that despite failing at image generation, the RBM performs similarly to our sparse DBM in image completion as shown in FIG. S8 ###reference_###.\n###figure_13### To explicitly show the quality of our parallel samples in our graph-colored architecture (in the main Section IX ###reference_###), we have performed the following \u201cinference\u201d experiment in the CPU performing exact Gibbs sampling vs. 
To explicitly show the quality of our parallel samples in our graph-colored architecture (main Section IX), we performed the following "inference" experiment, comparing exact Gibbs sampling on a CPU against our parallelized FPGA, using a sparse DBM (Pegasus, 3,080 p-bits):
1. We start with an MNIST-trained Pegasus network (3,080 p-bits) with known weights and biases.
2. We initialize all p-bits to the +1 state at time step 1 and define a "network magnetization" $m(t) = \frac{1}{N}\sum_{i=1}^{N} s_i(t)$, with $N$ = 3,080.
3. We perform exact (sequential) Gibbs sampling on a CPU and our parallelized Gibbs sampling on the FPGA M = 100 times, measuring $m(t)$ for each run, and obtain an ensemble-averaged $\langle m(t)\rangle$.
4. We then compare these averaged magnetizations as a function of the Monte Carlo sweeps taken on the FPGA and the CPU.
FIG. S9 shows the results of this experiment, with near-identical relaxation between the FPGA and the CPU. The FPGA takes about 0.067 seconds to take a million samples, as opposed to a projected 21.1 hours on the CPU (we did not take more than 10,000 samples over 100 iterations on the CPU, since at that point both models had converged). These numbers are in accordance with our expectations from their relative flips-per-second numbers, and they establish that the samples taken by the FPGA follow the Gibbs sampling process.
###figure_14### To test the differences between sparse DBMs and RBMs, we used our largest network (Pegasus, 4,264 p-bits) to train full Fashion MNIST [75], a more challenging dataset than MNIST [9]. Fashion MNIST consists of 28x28 grayscale images of 70,000 fashion products (60,000 in the training set and 10,000 in the test set) from 10 categories (e.g., t-shirt/top, trouser, sneaker, bag, pullover, and others), with 7,000 images per category. We trained this dataset using our sparse DBM on the Pegasus graph. There are 4,264 p-bits with 30,404 parameters in this network, of which 784 are visible units, 50 are label units, and 3,430 are hidden units.
Our approach to grayscale images is based on time-averaging, inspired by the stochastic computing approach of Ref. [66], where grayscale pixel values between 0 and 1 are treated as the time-averaged probability of activation for p-bits. During the positive phase, we choose N (e.g., 20, 50, 100) binary samples from this probability and clamp the visible nodes as described in the main Section IV and Section V. Here, we used 1,200 mini-batches containing 50 grayscale images in each batch during training. To train Fashion MNIST, we used 20 binary (black and white) samples for each grayscale image, resulting in a total of 60,000 x 20 training images. We found that the number of black and white samples can vary depending on the dataset, as a hyperparameter.
###figure_15### To test classification accuracy, we also used 20 black and white samples for each grayscale image to perform the same softmax classification as described in the main Section V. After the network sees those 20 black and white samples forming a grayscale image, we check the labels to establish the classification accuracy. Using this scheme of grayscale images, our sparse DBM with 30,404 parameters can reach around 80% in 120 epochs, as shown in FIG. S10a.
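The grayscale-to-binary conversion just described can be sketched in a few lines of Python; the function name and array shapes are illustrative assumptions (here N = 20 samples per image, as used for Fashion MNIST), and time-averaging the samples recovers the grayscale estimate.

```python
import numpy as np

def binarize_grayscale(images, n_samples=20, seed=0):
    """Draw n_samples black-and-white samples per grayscale image.

    images : (num_images, 784) array of pixel values in [0, 1], interpreted
             as time-averaged activation probabilities of the visible p-bits.
    Returns a (num_images * n_samples, 784) array of {0, 1} samples.
    """
    rng = np.random.default_rng(seed)
    probs = np.repeat(images, n_samples, axis=0)         # one row per binary sample
    return (rng.random(probs.shape) < probs).astype(np.uint8)

# Time-averaging the 20 samples of one image approximately recovers its grayscale values:
# images[0] is close to binarize_grayscale(images)[0:20].mean(axis=0)
```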
We trained RBMs with different numbers of parameters, as listed in Table S2, using the same approach as for the sparse DBM. The iso-parameter RBM (43 hidden units and 34k parameters) reaches a maximum of 72% accuracy on full Fashion MNIST, while the million-parameter RBMs can go to around 85% test accuracy. As in MNIST, we see that the RBM requires on the order of a million parameters to reach sparse DBM accuracy.
Using a similar approach to image generation as described in the main Section VI, we can also generate images of fashion products with our sparse DBM, as shown in FIG. S10b. As in the other cases, for image generation we only clamp the label bits for a particular image. We anneal the network slowly from $\beta = 0$ to $\beta = 1$ with a 0.125 increment, using the final weights and biases. Then we check the 784 visible p-bits after time-averaging the collected samples to obtain grayscale images. Using a similar procedure, we observe that none of the RBMs can generate images, despite having a maximum of 85% accuracy. We observed that different annealing schedules (e.g., with slower changes) do not help RBM image generation.
To see whether the same conclusions hold for our architectural and algorithmic ideas for sparse DBMs, we trained 100 images (10 images from each class) from the CIFAR-10 dataset [76]. Due to resource limitations in our FPGA, we could not increase the number of parameters; hence we used the Pegasus 4,264 p-bit network as the sparse DBM to train the grayscale CIFAR-10/100. The CIFAR-10 dataset consists of 32x32 color images of 10 different classes (i.e., airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck), with 6,000 images per class. There are 50,000 training images and 10,000 test images.
We converted the color images into grayscale [77] using the 'rgb2gray' function in MATLAB. We used 1,024 visible units, 5 sets of labels (50 p-bits), and 3,190 hidden units arranged in 2 layers, as shown in the inset of FIG. S11a. Utilizing the same binary transformation from grayscale images as described in Section .13, we trained the CIFAR-10/100 dataset with 100 black and white samples per grayscale image and 10 mini-batches (each containing 10 randomly selected grayscale images). Training accuracy of this dataset with the sparse DBM can reach around 90% in 2,000 epochs, as shown in FIG. S11a, while the iso-parameter RBM (40 hidden units) accuracy is only 68%, as listed in Table S3. RBMs reach the same levels of accuracy using between 264,000 and 1 million parameters, in line with our earlier results.
###figure_16### Image generation for CIFAR-10 in this reduced setting, with only 100 images in the training set, failed both for the sparse DBM and the RBM. For this reason, we also examined the image completion task (details in Section .11) with the RBM and the sparse DBM, as shown in FIG. S11b. We clamped only the left half of a grayscale image (using 100 black and white samples) along with the corresponding label bits and checked the right half of that image. In this case, both the RBM (4,096 hidden units) and the sparse DBM performed similarly, in this much harder setting.
Below we show the full Pegasus network topology with 4,264 p-bits and its sparse deep BM representation.
###figure_17### ###figure_18###
102
+ }
103
+ ],
104
+ "tables": {
105
+ "1": {
106
+ "table_html": "<figure class=\"ltx_table\" id=\"S7.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span><span class=\"ltx_text\" id=\"S7.T1.8.1\" style=\"font-size:80%;\">Comparison of the FPGA-based MCMC sampler with standard CPU and graph-colored CPU implementations. All data points are measured, as discussed in the Methods.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S7.T1.6\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S7.T1.6.7.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.6.7.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T1.6.7.1.1.1\">Sampling method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.6.7.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T1.6.7.1.2.1\">topology</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.6.7.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T1.6.7.1.3.1\">size</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.6.7.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T1.6.7.1.4.1\">max. degree</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S7.T1.6.7.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S7.T1.6.7.1.5.1\">flips/ns</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S7.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T1.1.1.2\">Standard Gibbs (CPU)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.1.1.3\">Pegasus</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.1.1.4\">4,264</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.1.1.5\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.1.1.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.2.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"S7.T1.2.2.2\">GC Gibbs (CPU)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.2.2.3\">Pegasus</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.2.2.4\">4,264</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.2.2.5\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.2.2.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"S7.T1.3.3.2\">GC Gibbs (FPGA)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.3.3.3\">Pegasus</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.3.3.4\">4,264</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.3.3.5\">15</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.3.3.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S7.T1.4.4.2\">Standard Gibbs (CPU)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.4.4.3\">Zephyr</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.4.4.4\">3,360</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.4.4.5\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S7.T1.4.4.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S7.T1.5.5.2\">GC Gibbs (CPU)</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.5.5.3\">Zephyr</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.5.5.4\">3,360</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S7.T1.5.5.5\">20</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S7.T1.5.5.1\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S7.T1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S7.T1.6.6.2\">GC Gibbs (FPGA)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.6.6.3\">Zephyr</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.6.6.4\">3,360</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.6.6.5\">20</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S7.T1.6.6.1\"></td>\n</tr>\n</tbody>\n</table>\n</figure>",
107
+ "capture": "Table 1: Comparison of the FPGA-based MCMC sampler with standard CPU and graph-colored CPU implementations. All data points are measured, as discussed in the Methods."
108
+ },
109
+ "2": {
110
+ "table_html": "<figure class=\"ltx_table\" id=\"Ax7.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table S1: </span><span class=\"ltx_text\" id=\"Ax7.T1.2.1\" style=\"font-size:80%;\">Optimized GPU and TPU implementations of Markov Chain Monte Carlo sampling with regular chessboard lattices. It is important to note that these TPU and GPU implementations solve Ising problems in sparse graphs, however, their graph degrees are usually restricted to 4 or 6, unlike more irregular and higher degree graphs.</span></figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Ax7.T1.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Ax7.T1.3.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T1.3.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T1.3.1.1.1.1\">Sampling method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T1.3.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T1.3.1.1.2.1\">topology</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T1.3.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T1.3.1.1.3.1\">max. degree</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T1.3.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T1.3.1.1.4.1\">flips/ns</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Ax7.T1.3.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Ax7.T1.3.2.1.1\">Nvidia Tesla C1060 GPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib71\" title=\"\">71</a>, <a class=\"ltx_ref\" href=\"#bib.bib72\" title=\"\">72</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T1.3.2.1.2\">Chessboard</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T1.3.2.1.3\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T1.3.2.1.4\">7.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T1.3.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Ax7.T1.3.3.2.1\">Nvidia Tesla V100 GPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib73\" title=\"\">73</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T1.3.3.2.2\">Chessboard</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T1.3.3.2.3\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T1.3.3.2.4\">11.37</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T1.3.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Ax7.T1.3.4.3.1\">Google TPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib73\" title=\"\">73</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T1.3.4.3.2\">Chessboard</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T1.3.4.3.3\">4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T1.3.4.3.4\">12.88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T1.3.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Ax7.T1.3.5.4.1\">Nvidia Fermi GPU <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib74\" title=\"\">74</a>]</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T1.3.5.4.2\">Chessboard</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T1.3.5.4.3\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"Ax7.T1.3.5.4.4\">29.85</td>\n</tr>\n</tbody>\n</table>\n</figure>",
111
+ "capture": "Table S1: Optimized GPU and TPU implementations of Markov Chain Monte Carlo sampling with regular chessboard lattices. It is important to note that these TPU and GPU implementations solve Ising problems in sparse graphs, however, their graph degrees are usually restricted to 4 or 6, unlike more irregular and higher degree graphs."
112
+ },
113
+ "3": {
114
+ "table_html": "<figure class=\"ltx_table\" id=\"Ax7.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table S2: </span> Fashion MNIST accuracy with different sizes of RBMs.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Ax7.T2.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Ax7.T2.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T2.7.8.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T2.7.8.1.1.1\">Number of</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T2.7.8.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T2.7.8.1.2.1\">number of</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T2.7.8.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T2.7.8.1.3.1\">maximum</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.7.9.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Ax7.T2.7.9.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T2.7.9.2.1.1\">hidden units</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Ax7.T2.7.9.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T2.7.9.2.2.1\">parameters</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Ax7.T2.7.9.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T2.7.9.2.3.1\">accuracy (%)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Ax7.T2.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T2.1.1.2\">43</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T2.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T2.1.1.3\">71.90</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.2.2.2\">64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.2.2.3\">76.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.3.3.2\">128</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.3.3.3\">77.45</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.4.4.2\">256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.4.4.3\">78.45</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.5.5.2\">1264</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.5.5.3\">85.56</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.6.6.2\">2048</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T2.6.6.3\">84.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T2.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T2.7.7.2\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T2.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T2.7.7.3\">82.64</td>\n</tr>\n</tbody>\n</table>\n</figure>",
115
+ "capture": "Table S2: Fashion MNIST accuracy with different sizes of RBMs."
116
+ },
117
+ "4": {
118
+ "table_html": "<figure class=\"ltx_table\" id=\"Ax7.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table S3: </span> CIFAR-10/100 accuracy with different sizes of RBMs.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Ax7.T3.7\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Ax7.T3.7.8.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T3.7.8.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T3.7.8.1.1.1\">Number of</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T3.7.8.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T3.7.8.1.2.1\">number of</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Ax7.T3.7.8.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T3.7.8.1.3.1\">maximum</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.7.9.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Ax7.T3.7.9.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T3.7.9.2.1.1\">hidden units</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Ax7.T3.7.9.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T3.7.9.2.2.1\">parameters</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column\" id=\"Ax7.T3.7.9.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Ax7.T3.7.9.2.3.1\">accuracy (%)</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Ax7.T3.1.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T3.1.1.2\">40</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T3.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Ax7.T3.1.1.3\">68</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.2.2\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.2.2.2\">64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.2.2.3\">83</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.3.3\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.3.3.2\">128</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.3.3.3\">88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.4.4\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.4.4.2\">256</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.4.4.3\">88</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.5.5\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.5.5.2\">968</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.5.5.3\">99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.6.6\">\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.6.6.2\">2048</td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Ax7.T3.6.6.3\">100</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Ax7.T3.7.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T3.7.7.2\">4096</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T3.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Ax7.T3.7.7.3\">100</td>\n</tr>\n</tbody>\n</table>\n</figure>",
119
+ "capture": "Table S3: CIFAR-10/100 accuracy with different sizes of RBMs."
120
+ }
121
+ },
122
+ "image_paths": {
123
+ "1": {
124
+ "figure_path": "2303.10728v2_figure_1.png",
125
+ "caption": "Fig. 1: (a) Hybrid computing scheme with probabilistic computer and classical computer implemented on a CPU. The p-computer generates samples according to the Boltzmann-Gibbs distribution and provides them to the CPU. Then CPU computes gradients, updates the weights (J) and biases (h), and sends them back to the p-computer until convergence. (b) The p-computer illustrated here is based on digital CMOS implementation (FPGA) and can have a measured sampling speed of \u224850\u2062 to \u206264absent50 to 64\\approx 50\\text{ to }64\u2248 50 to 64 flips/ns. (c) Nanodevice-based p-computer: Various analog implementations have been proposed [17]. (d) Hardware-aware sparse Deep Boltzmann Machines (DBMs) are represented with visible and hidden p-bits (examples of the Pegasus [20] and Zephyr graphs [21] are shown). (e) The sparse DBMs shown in (d) are illustrated with two layers of hidden units (Left) where both the interlayer and intralayer (not shown) connections are allowed. (see Supplementary section .15 for a full view of the networks used in this work. The graph density and vertex degree distribution of the sparse DBMs are shown in the Supplementary Section .3.) When a particular label p-bit corresponding to a digit is activated (clamping that label p-bit to 1 and clamping the rest to 0), the network evolves to an image of that digit as shown in the example (Right). (f) All 10 digits are generated with sparse DBM after training the network with the full MNIST dataset.",
126
+ "url": "http://arxiv.org/html/2303.10728v2/x1.png"
127
+ },
128
+ "2": {
129
+ "figure_path": "2303.10728v2_figure_2.png",
130
+ "caption": "Fig. 2: (a) MNIST accuracy vs training epochs: with sparse DBM, 90% accuracy is achieved in 100 epochs. Full MNIST (60,000 images) is trained on sparse DBM (Pegasus 4,264 p-bits) with CD-105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT, batch size = 50, learning rate = 0.003, momentum = 0.6 and epoch = 100 where the total number of parameters is 30,404. Each epoch is defined as the network seeing the entire 60,000 images with 1,200 weight updates. Test accuracy shows the accuracy of all the 10,000 images from the MNIST test set and the training accuracy represents the accuracy of 10,000 images that are randomly chosen from the training dataset. (b) MNIST accuracy with Restricted Boltzmann Machine (RBM) using 43 hidden units and CD-1 (CPU implementation) where the total number of parameters is 34,142. The accuracy of this RBM is less than 90% but sparse DBM can reach 90% with approximately the same number of parameters. (c) MNIST accuracy of RBM with 4,096 hidden units. Here the total number of parameters is 3,252,224 and the accuracy is 90% in 100 epochs which can be achieved using sparse DBM with around 100\u00d7100\\times100 \u00d7 fewer parameters. (d) Test accuracy of MNIST as a function of the number of parameters with sparse DBMs (Pegasus) and RBMs. We trained full MNIST with 5 different sizes of Pegasus graphs for 100 epochs using the same set of hyperparameters and collected the test accuracy of the whole test set. When the number of parameters is only 6,464 with the smaller Pegasus (960 p-bits), test accuracy could not reach beyond 50%. On larger graphs with increased parameters, accuracy starts to increase and \u2248\\approx\u2248 90%percent\\%% accuracy is achieved with the largest Pegasus (4264 p-bits) that fits into our FPGA. RBM reached 90% accuracy with around 200,000 parameters but the increased number of parameters (up to 3.25 million) could not help go beyond \u224892%absentpercent92\\approx 92\\%\u2248 92 % accuracy.",
131
+ "url": "http://arxiv.org/html/2303.10728v2/x2.png"
132
+ },
133
+ "3": {
134
+ "figure_path": "2303.10728v2_figure_3.png",
135
+ "caption": "Fig. 3: (a) Images generated with sparse DBM by annealing the network from \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20090 to \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20095 with 0.125 steps after training the full MNIST dataset. The labels for a particular digit are clamped to show how the visible p-bits evolve to that specific image. Examples of digits \u20180\u2019 and \u20187\u2019 are shown here. (b) The same procedure for image generation is applied to the RBM network (with 4,096 hidden units) that achieves 90% test accuracy. Using the same annealing schedule, RBM does not produce the correct digits, unlike the sparse DBM. (c) Generated images of fashion products (e.g. \u2018Trouser\u2019 and \u2018Pullover\u2019) with sparse DBM by annealing the network from \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20090 to \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20095 with 0.125 steps after training full Fashion MNIST. (d) RBM with 4096 hidden units can not generate the correct images according to the labels despite achieving around 83% test accuracy.",
136
+ "url": "http://arxiv.org/html/2303.10728v2/x3.png"
137
+ },
138
+ "4": {
139
+ "figure_path": "2303.10728v2_figure_4.png",
140
+ "caption": "Fig. 4: (a) Test accuracy after training full MNIST (up to only 40 epochs for computational simplicity) with different numbers of sweeps per iteration is shown. For our sparse graph, to mix the Markov chain properly we need a minimum CD-104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT. Reducing the number of sweeps to 103superscript10310^{3}10 start_POSTSUPERSCRIPT 3 end_POSTSUPERSCRIPT or 102superscript10210^{2}10 start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT degrades the quality of mixing the chain significantly. (b) Test accuracy as a function of CD-n at epoch 40 showing the equilibrium and non-equilibrium samples.",
141
+ "url": "http://arxiv.org/html/2303.10728v2/x4.png"
142
+ },
143
+ "5": {
144
+ "figure_path": "2303.10728v2_figure_5.png",
145
+ "caption": "Fig. 5: (a) The sparse DBMs (Pegasus and Zephyr) where all the p-bits are distributed in a serial manner such as 1 to 784 are the visible p-bits, 785 to 834 are the label p-bits (50 bits for 5 sets of labels), and the rest are hidden p-bits. (b) The sparse DBMs with randomized indices are shown here. (c) Test accuracy of full MNIST as a function of training epochs for two different sparse DBMs. In both cases, training the sparse DBMs with the serial distribution (no randomization) of indices could not achieve an accuracy of more than 50%. In contrast, randomization of indices helps the network to reach 90% accuracy.",
146
+ "url": "http://arxiv.org/html/2303.10728v2/x5.png"
147
+ },
148
+ "6": {
149
+ "figure_path": "2303.10728v2_figure_6.png",
150
+ "caption": "Fig. 6: (a) An example of massively parallel architecture with four parallel same-frequency and equally phase-shifted clocks to trigger the colored p-bit blocks. The sparse DBM (Pegasus 4,264 p-bits) is colored with four colors using the graph-coloring algorithm to exploit parallel updating of unconnected p-bits and the input for each p-bit is computed using Eq. (3). (b) Measured flips/ns as a function of graph size (number of p-bits) showing ideal parallelism scaling linearly with the system size in the case of the graph-colored FPGA (top). The graph-colored CPU flips/ns as a function of the graph size (bottom).",
151
+ "url": "http://arxiv.org/html/2303.10728v2/x6.png"
152
+ },
153
+ "7": {
154
+ "figure_path": "2303.10728v2_figure_7.png",
155
+ "caption": "Fig. S1: (a) The MAC (multiplier\u2013accumulator) unit implements Eq. (3). The p-bit unit consists of a xoshiro pseudorandom number generator (PRNG), a lookup table for the activation function (tanh), and a comparator to generate a binary output. (b) A built-in clocking unit generates equally phase-shifted and same-frequency parallel clocks to trigger the PRNGs inside the colored p-bit blocks. (c) A PCIe interfacing unit transfers data between MATLAB and the FPGA.",
156
+ "url": "http://arxiv.org/html/2303.10728v2/x7.png"
157
+ },
158
+ "8": {
159
+ "figure_path": "2303.10728v2_figure_8.png",
160
+ "caption": "Fig. S2: (a) Block diagram showing the mirror (or copy) p-bit architecture with a snapshot signal. The controller block generates the snapshot signal at which time the original p-bit states are copied to local memory (registers) and at the inverted snapshot signal those states are saved into block memory (BRAM), only once. Subsequent zero signals from the snapshot signal do nothing to mirror/copy p-bits or to BRAM. (b) Conceptual diagram to visualize the operation of the snapshot signal.",
161
+ "url": "http://arxiv.org/html/2303.10728v2/x8.png"
162
+ },
163
+ "9": {
164
+ "figure_path": "2303.10728v2_figure_9.png",
165
+ "caption": "Fig. S3: The graph density and the neighbor distribution of sparse DBMs (Pegasus and Zephyr) where graph density, \u03c1=2\u2062|E|/(|V|2\u2212|V|)\ud835\udf0c2\ud835\udc38superscript\ud835\udc492\ud835\udc49\\rho=\\displaystyle 2|E|/(|V|^{2}-|V|)italic_\u03c1 = 2 | italic_E | / ( | italic_V | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT - | italic_V | ) where |E|\ud835\udc38\\displaystyle|E|| italic_E | = the number of edges and |V|\ud835\udc49\\displaystyle|V|| italic_V | = the number of vertices in the graph. The density of Pegasus 4,264 p-bits is 0.33% and 3,256 p-bits have the maximum number of neighbors 15. Zephyr (3,360 p-bits) has a density of 0.56% and 2,432 p-bits have the 20 maximum neighbors.",
166
+ "url": "http://arxiv.org/html/2303.10728v2/x9.png"
167
+ },
168
+ "10": {
169
+ "figure_path": "2303.10728v2_figure_10.png",
170
+ "caption": "Fig. S4: Accuracy of training 100 images with sparse DBMs up to 1,000 epochs. Training is accomplished with 10 mini-batches and CD-105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT. All the trained 100 images and 20 unseen images have been tested here.",
171
+ "url": "http://arxiv.org/html/2303.10728v2/x10.png"
172
+ },
173
+ "11": {
174
+ "figure_path": "2303.10728v2_figure_11.png",
175
+ "caption": "Fig. S5: (a) Test accuracy of full MNIST with sparse DBMs (Pegasus 2,560 p-bits) up to 200 epochs with five different fixed point precisions of weights (s{6}{2}, s{6}{3}, s{6}{5}, s{4}{2} and s{3}{2}). (b) Test accuracy of full MNIST for RBM (512 hidden units) with double-precision floating point 64 bits and s{6}{3}. (c) Image completion examples with sparse DBMs for fixed point precisions of weights s{6}{3} and s{4}{2} (annealing schedule varies from \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20090 to \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20095 with 0.125 steps). With s{6}{3} precision, the network can complete the images where the right half of the image starts from random noise. Below s{4}{2}, the network fails to complete the images.",
176
+ "url": "http://arxiv.org/html/2303.10728v2/x11.png"
177
+ },
178
+ "12": {
179
+ "figure_path": "2303.10728v2_figure_12.png",
180
+ "caption": "Fig. S6: (a) Test accuracy after training full MNIST up to 50 epochs with different numbers of sweeps using sparse DBMs (Pegasus 3,080 p-bits). For our sparse graph, to mix the Markov chain properly we need minimum CD-104superscript10410^{4}10 start_POSTSUPERSCRIPT 4 end_POSTSUPERSCRIPT. Reducing the number of sweeps significantly degrades the quality of mixing in the chain. (b) Test accuracy as a function of CD-n at epoch 50 showing the equilibrium and non-equilibrium samples.",
181
+ "url": "http://arxiv.org/html/2303.10728v2/x12.png"
182
+ },
183
+ "13": {
184
+ "figure_path": "2303.10728v2_figure_13.png",
185
+ "caption": "Fig. S7: Image generation examples with sparse DBM Zephyr (3,360 p-bits) after training the network with the full MNIST dataset.",
186
+ "url": "http://arxiv.org/html/2303.10728v2/x13.png"
187
+ },
188
+ "14": {
189
+ "figure_path": "2303.10728v2_figure_14.png",
190
+ "caption": "Fig. S8: Image completion examples with RBM (4,096 hidden units and CD-100) and sparse DBM (Pegasus 4,264 p-bits). Only the left half of the images is shown (clamped) to the networks while the other half is obscured. The label bits are also clamped and the annealing schedule varies from \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20090 to \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20095 with 0.125 steps.",
191
+ "url": "http://arxiv.org/html/2303.10728v2/x14.png"
192
+ },
193
+ "15": {
194
+ "figure_path": "2303.10728v2_figure_15.png",
195
+ "caption": "Fig. S9: Average magnetization as a function of Monte Carlo sweeps in a CPU (exact Gibbs sampling) vs. parallelized FPGA. See text for details.",
196
+ "url": "http://arxiv.org/html/2303.10728v2/x15.png"
197
+ },
198
+ "16": {
199
+ "figure_path": "2303.10728v2_figure_16.png",
200
+ "caption": "Fig. S10: (a) Fashion MNIST accuracy can reach around 80% in 120 epochs with sparse DBM. Full Fashion MNIST (60,000 images) with 20 black and white samples per grayscale image is trained on sparse DBM (Pegasus 4,264 p-bits and 30,404 parameters) using CD-105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT, batch size = 50, learning rate = 0.003, momentum = 0.6 and epoch = 120. Test accuracy shows the accuracy of the whole test set (10,000 images) and the training accuracy represents the accuracy of 10,000 images randomly chosen from the training set. (b) Image generation examples with sparse DBM after training the network with full Fashion MNIST dataset by annealing the network from \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20090 to \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20091 with 0.125 steps.",
201
+ "url": "http://arxiv.org/html/2303.10728v2/x16.png"
202
+ },
203
+ "17": {
204
+ "figure_path": "2303.10728v2_figure_17.png",
205
+ "caption": "Fig. S11: (a) Training accuracy of 100 images from CIFAR-10 is around 90% in 2000 epochs with sparse DBM. Training is accomplished with 100 black and white samples per grayscale image using CD-105superscript10510^{5}10 start_POSTSUPERSCRIPT 5 end_POSTSUPERSCRIPT, batch size = 10, learning rate = 0.006, momentum = 0.6, and epoch = 2000. (b) Image completion examples with RBM (4096 hidden units) and sparse DBM. Only the left half of the grayscale images are shown (clamped) to the networks (using 100 black and white samples) while the other half is obscured. The label bits are also clamped and the annealing schedule varies from \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20090 to \u03b2\ud835\udefd\\betaitalic_\u03b2\u2009=\u20095 with 0.125 steps, and for the other case \u03b2\ud835\udefd\\betaitalic_\u03b2 is kept to 1.",
206
+ "url": "http://arxiv.org/html/2303.10728v2/x17.png"
207
+ },
208
+ "18": {
209
+ "figure_path": "2303.10728v2_figure_18.png",
210
+ "caption": "Fig. S12: Layered embedding of the 4,264 p-bit Pegasus graph of FIG. S13, illustrating the sparse DBM architecture: the first layer is visible p-bits with 834 nodes, second and third layers are the hidden p-bits with 3,226 and 204 nodes respectively. There are also some intralayer connections within each layer. An example is shown in the right circle which shows the neighboring connections around node 3,443. The number next to a line represents the number of wires grouped in that branch, the total number being the fan-out of a given p-bit (vertex).",
211
+ "url": "http://arxiv.org/html/2303.10728v2/x18.png"
212
+ },
213
+ "19": {
214
+ "figure_path": "2303.10728v2_figure_19.png",
215
+ "caption": "Fig. S13: The original sparse DBM network (Pegasus: 4,264 p-bits) used in this work with marked-up visible (blue), hidden (orange), and label (yellow) units.",
216
+ "url": "http://arxiv.org/html/2303.10728v2/x19.png"
217
+ }
218
+ },
219
+ "validation": true,
220
+ "references": [],
221
+ "url": "http://arxiv.org/html/2303.10728v2"
222
+ }
20240123/2303.13716v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2304.13014v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.00557v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.02317v3.json ADDED
@@ -0,0 +1,435 @@
1
+ {
2
+ "title": "Visual Chain-of-Thought: Bridging Logical Gaps with Multimodal Infillings",
3
+ "abstract": "Recent advances in large language models elicit reasoning in a chain-of-thought that allows models to decompose problems in a human-like fashion. Though this paradigm improves multi-step reasoning ability in language models, it is limited by being unimodal and applied mainly to question-answering tasks. We claim that incorporating visual augmentation into reasoning is essential, especially for complex, imaginative tasks. Consequently, we introduce VCoT111Source available: https://github.com/dannyrose30/VCOT,\na novel method that leverages chain-of-thought prompting with vision-language grounding to recursively bridge the logical gaps within sequential data. Our method uses visual guidance to generate synthetic multimodal infillings that add consistent and novel information to reduce the logical gaps for downstream tasks that can benefit from temporal reasoning, as well as provide interpretability into models\u2019 multi-step reasoning. We apply VCoT to the Visual Storytelling and WikiHow summarization datasets and demonstrate through human evaluation that VCoT offers novel and consistent synthetic data augmentation beating chain-of-thought baselines, which can be used to enhance downstream performance.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "###figure_1### A recent landmark result in natural language processing is chain-of-thought prompting (CoT) Wei et al. (2022 ###reference_28###); Kojima et al. (2022 ###reference_14###),\nwhereby decomposing complex problems into simple steps for input to a large language model (LLM) confers improved performance on a variety of tasks Lampinen et al. (2022 ###reference_17###).\nWhile CoT demonstrates impressive performance on tasks in the text-only question-answering domain,\nit is unclear how this technique can be generalized to multimodal (e.g., vision-language) settings.\nWe hypothesize that one core benefit to CoT in the text domain is that it prompts an LLM to fill in logical or sequential gaps. With this frame, both the methodology and benefits of extending CoT to the visual domain becomes plausible.\nMany vision-language (VL) reasoning tasks, such as virtual assistants Qiu et al. (2021 ###reference_22###), navigators Anderson et al. (2018 ###reference_1###), and decision-makers Huang et al. (2022 ###reference_13###), require some degree of sequential data understanding.\nHowever, these techniques are currently restricted to \u201creason\u201d over a limited set of input data (e.g., key frames) which may contain logical gaps, hindering task-specific performance (Figure 1 ###reference_###). This leads us to our core idea: we can extend CoT prompting into the vision and language domain by integrating generative image models to produce intermediate images.\nWe argue incorporating the visual modality into CoT can help bridge logical gaps in two ways. First, multi-step reasoning with visuals better fills logical gaps because images capture additional information that unimodal text cannot. Second, visual chains mimic human imagination which creates novel solutions Tan & Bansal (2021 ###reference_24###); Lu et al. (2022 ###reference_19###); Zhu et al. (2022 ###reference_32###) and provide interpretability Wang et al. (2022b ###reference_27###) into decision making. One imagined picture provides a thousand-word insight to enhance computer reasoning.\nWe propose Visual Chain-of-Thought (VCoT), which combines the efficiency, robustness, and multi-step reasoning of CoT with the multimodal capabilities of vision-language models. VCoT synthetically augments sequential datasets and bridges logical gaps by recursively generating multimodal infillings and using the synthetic data to improve downstream task performance. These synthetic generations also serve as human-interpretable insights into AI systems\u2019 ability of multi-step reasoning.\nWe demonstrate that VCoT creates consistent and novel synthetic data that enhances downstream performance on the Vist Huang et al. (2016 ###reference_12###) and WikiHow Koupaee & Wang (2018 ###reference_16###) datasets. Our main contributions are:\nWe propose Visual Chain-of-Thought for sequential data to generate synthetic text-visual pairs as data augmentation for downstream reasoning.\nWe devise a consistency and novelty-driven approach to recursively generate multimodal infillings that augment faithful, relevant context.\nWe demonstrate the effectiveness of our method through human evaluation, showing improvements in sequential reasoning."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Related Work",
15
+ "text": "###figure_2###"
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Problem Formulation",
21
+ "text": "To improve temporal reasoning in language models, we define the multimodal infilling task to bridge the logical gaps in sequential data with synthetic multimodal infillings. Given two consecutive text-visual pairs , we generate an infilling through generator (e.g., VCoT) using model (Equation 1 ###reference_###). After we generate multiple such infillings, we select the best among the candidates using judgement function (e.g., Clip similarity) (Equation 2 ###reference_###).\nIn keeping with the training-free formulation of CoT, we limit ourselves to generating and selecting candidate intermediate steps through prompting existing pretrained models.\nFor a downstream task (e.g. visual storytelling, instruction summarization) measured by performance (e.g., novelty, consistency, coherence, descriptiveness), is an optimal infilling to be kept if its\u2019 addition improves the downstream task performance (Equation 3 ###reference_###). Otherwise, the optimal infilling is redundant or damaging, and it is set to null (Equation 4 ###reference_###). Our VCoT is a method that serves as a generator and a way to select , but we leave determining optimality of recursive termination to future work."
22
+ },
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "VCoT Multimodal Infilling Generation",
27
+ "text": "We propose the VCoT method as a solution to the multimodal infilling task. To generate high-quality multimodal infillings, we use a combination of CoT and vision-language models. We propose the following pipeline222https://sites.google.com/view/vcot/home ###reference_### (Figure 2 ###reference_###):\nWe transform text-only datasets into multimodal text-visual pairs for task unification (section 4.1 ###reference_###).\nWe identify the multipoint foveation to extract the global focus to guide the generation of consistent multimodal infillings (section 4.2 ###reference_###).\nWe generate our synthetic data through a novelty-driven recursive infilling (section 4.3 ###reference_###) and consistency-driven visual augmentation (section 4.4 ###reference_###) approach to provide interpretability for multi-step reasoning and reduce logical gaps to aid downstream task performance."
28
+ },
29
+ {
30
+ "section_id": "4.1",
31
+ "parent_section_id": "4",
32
+ "section_name": "Task Unification",
33
+ "text": "To apply VCoT to general sequences, we first reformat sequential data into text-visual pairs. For text-only sequences, we generate candidate visuals for the corresponding input text sequence by using Stable Diffusion. Here, each is a set of multiple candidate visuals for each . We then assess the similarity of each candidate visual in to the grounding input text using Clip embeddings and select the most similar candidate. This yields a sequence of consistent visuals that form pairs with the input text, unifying general sequences into a series of text-visual pairs."
34
+ },
35
+ {
36
+ "section_id": "4.2",
37
+ "parent_section_id": "4",
38
+ "section_name": "Multipoint Foveation",
39
+ "text": "###figure_3### To keep the multimodal infillings consistent with the input sequence, we use foveation to identify the overall main focus Mei et al. (2022 ###reference_21###). Since pairwise sequential elements may omit relevant and important fixation points, we define multipoint foveation (MPF) to identify all of the core fixation points (e.g., setting, characters) of the entire visual-text input sequence (Figure 3 ###reference_###). We then project the visual-text pairs into a unimodal text space by captioning the visuals: . The projected output, along with three to four hand-written few-shot exemplars, is fed into Gpt-3.5 to generate a maximum likelihood333Likelihood is defined in Appendix A.3.1 ###reference_SSS1###. summary from which the MPF is extracted (Equation 5 ###reference_###). The foveation guides infillings to be consistent and not introduce excessive information. We provide a qualitative example showing the effectiveness of MPF (Figure 14 ###reference_###)."
40
+ },
41
+ {
42
+ "section_id": "4.3",
43
+ "parent_section_id": "4",
44
+ "section_name": "Novelty-Driven Recursive Infilling",
45
+ "text": "###figure_4### The judgment function, (Equation 2 ###reference_###), judges the generated multimodal infillings based on two metrics: consistency and novelty. Consistency ensures that the infillings maintain faithful details from the surrounding steps, while novelty ensures that the infillings add relevant and new information that accurately bridges the logical gaps.\nWe infill a set of visual-text pairs and using foveation with new information bridging the logical gap (Appendix A.1 ###reference_###, Algorithm 2 ###reference_hm2###). We opt for a recursive approach rather than an iterative approach when infilling logical gaps as some information may have more logical gaps than others and may require more infillings. A recursive approach makes it easier to dynamically determine when infillings are not beneficial to the task and when to stop generating them in contrast to an iterative approach. Our approach generates multiple depths of infillings to add valuable new, relevant, and consistent multimodal information for downstream tasks (Figure 4 ###reference_###)."
46
+ },
47
+ {
48
+ "section_id": "4.4",
49
+ "parent_section_id": "4",
50
+ "section_name": "Consistency-Driven Visual Augmentation",
51
+ "text": "To explicitly guide consistent infilling generation, we use multipoint foveation (section 4.2 ###reference_###) to ground our generations to the input sequence and Clip to select the most consistent recursively generated infilling with respect to their surrounding pair (Algorithm 1 ###reference_hm1###). Specifically, we generate five candidate text-infillings with Gpt-3.5 and compare them to their surrounding visuals using Clip embeddings and select the most similar candidate. When generating text-infillings, we prompt GPT-3.5 with information from the surrounding text to provide logical context that maintains consistency, novelty, and sequential order. To generate a consistent, sequential visual, we prompt Stable Diffusion with the selected text-infilling to generate four candidate visuals. Then, we choose the candidate visual most consistent using Clip embeddings.\nTo determine the recursive stopping condition, we experiment with both a fixed recursive depth and an learned approach by prompting Gpt-3.5 to classify whether a logical gap remains. Empirical results demonstrate the Gpt-3.5-halting approach shows inconsistent performance that adds significant noise. Instead, we opt for a fixed depth, -=2, which balances sufficiently filling logical gaps with not injecting irrelevant information."
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": "We leverage leading vision and language generation models Stable Diffusion and Gpt-3.5 for synthetic data augmentation, along with Clip guidance and Ofa captioning. We test our method on the Vist and WikiHow datasets due to their sequential composition to show the effectiveness of VCoT in bridging reasoning gaps.\n###figure_5###"
58
+ },
59
+ {
60
+ "section_id": "5.1",
61
+ "parent_section_id": "5",
62
+ "section_name": "Experimental Settings",
63
+ "text": "We use Vist and WikiHow to evaluate the quality of VCoT\u2019s synthetic multimodal infillings and their impacts on visual storytelling and instruction summarization, respectively.\nVist is a visual storytelling dataset consisting of sequences of five text-visual pairs representing a single story Huang et al. (2016 ###reference_12###). The test set contains 2021 stories. The clear gaps between flickr visuals and human-written captions, and pairwise sequential elements often create sizable logical gaps.\nWikiHow is a text summarization dataset containing \u201cHow-To\u201d articles, with 6000 test set articles Koupaee & Wang (2018 ###reference_16###). VCoT\u2019s synthetic multimodal infillings between logically distanced instructions can decompose difficult instructions.\nVist provides us with a standard sequential text-visual dataset, and we showcase WikiHow as a text-only dataset for our task unification process (section 4.1 ###reference_###). For downstream evaluation of WikiHow, we input \u201cHow-To\u201d articles as a sequence of paragraphs that we seek to summarize into descriptive, human-understandable instructions, a slightly different approach than strict summarization.\nVCoT generates five textual infilling candidates using Gpt-3.5, one with zero temperature and four with 0.5 temperature. In our current approach, we select the best textual infilling (Equation 2 ###reference_###) and input it input into the Stable Diffusion generator. We otherwise use zero temperature to maximize consistency. To evaluate the infillings themselves, we combine WikiHow and Vist examples and present 7062 generated infillings whose scores are provided by Amazon Mechanical Turk crowd workers who pass an attention check. For the downstream tasks, we evaluate each dataset separately, considering 227 full Wikihow articles and 266 Vist stories.\nWe use Gpt-3.5 (text-davinci-003) for all language tasks over open-source alternatives as Gpt-3.5 demonstrates stability over open-source alternatives. We utilize an out-of-the-box image captioning checkpoint (OFA-base) to show the generality of VCoT and demonstrate performance without the need for task-specific fine-tuning.\nWe use Stable Diffusion Rombach et al. (2022 ###reference_23###) (Stable-Diffusion 1.4) for image generation.\nWe use Clip for multimodal similarity comparisons to guide the cross-modal generation.\n###figure_6###"
64
+ },
65
+ {
66
+ "section_id": "5.2",
67
+ "parent_section_id": "5",
68
+ "section_name": "Human Evaluation",
69
+ "text": "The reason we opt for human evaluation over automatic evaluations is to account for multimodality and ensure complex analysis of filling logical gaps with novel and consistent synthetic tokens. Further, we do not seek to compare with the datasets\u2019 ground truths because the ground truths in Vist and WikiHow often are incoherent or undescriptive. Instead, our data augmentation seeks to surpass ground truth results by synthetically filling logical gaps and providing human-interpretable multi-step reasoning.\nFor both evaluations, we use the novelty and consistency metrics defined in section 4.3 ###reference_###. As additional downstream evaluation metrics, we add coherence for storytelling and descriptiveness for instruction summarization. Coherence confirms that the output flows logically together as an interesting and creative story. Descriptiveness verifies that the generated summaries describe accurately and with detail the steps of the \u201cHow-To\u201d article. We select our human evaluation criteria based on BartScore Yuan et al. (2021 ###reference_30###) and our goal of generating relevant logical infillings.\nWe ask human annotators to follow the evaluation criteria (novelty, consistency, coherence, and descriptiveness) using win-tie-lose comparison, a common approach in human evaluation Gu et al. (2021 ###reference_8###); Yang et al. (2019 ###reference_29###), which reduces variance and increases inter-annotator agreement. Since our method is the first to generate multimodal infillings, existing vision-language models are not well-suited as baselines. Instead, we use head-to-head comparisons of VCoT\u2019s text-visual pairs with purely textual chain-of-thought (CoT) paired with the purely visual chain-of-images (CoI) performed in parallel, using the same recursion depth. To evaluate the infillings, we additionally compare with a random baseline, which selects a random multimodal pair from our generated examples. To evaluate downstream task performance, we also compare with generation without using generated infillings, as well as with the ground truth of each dataset. We hired a total of 20,534 workers using the Mechanical Turk platform, and paid a rate of $.30/HIT for our labeling tasks to an average hourly wage of $15/hr."
70
+ },
71
+ {
72
+ "section_id": "5.3",
73
+ "parent_section_id": "5",
74
+ "section_name": "Quality of Multimodal Infillings",
75
+ "text": "We ask human evaluators to judge our synthetic multimodal infillings based on novelty and consistency (section 4.3 ###reference_###) on a 5-point scale (1-2 = poor, 3 = neutral, 4-5 = good). VCoT infillings outperform all baselines on the 5-point scoring of quality (Figure 6 ###reference_###). When comparing win-tie-losses, VCoT also outperforms both baselines by at least and for the consistency and novelty of our synthetic visuals and text (Table 1 ###reference_###), respectively. Qualitative examples show that VCoT outperforms baselines by generating more useful and relevant infillings for sizable logical gaps (Figure 7 ###reference_###, Figure 13 ###reference_###). The strong consistency and novelty of the synthetic multimodal infillings indicates that they add both relevant and new information to their surrounding sequential steps, and thus VCoT helps bridge logical gaps in sequential data.\nVCoT infills chronological gaps in the Vist and WikiHow sequences with consistent bridging information. We hypothesize the utility of VCoT increases with size of the logical gap because these gaps hinder downstream task performance (section 5.4 ###reference_###) and interpretability for large language models. VCoT synthetically augments additional context to language models with multimodal infillings, allowing for smaller logical leaps and enhanced reasoning. We argue that VCoT outperforms baselines by maintaining consistency through foveation grounding (section 4.2 ###reference_###) and Clip similarity alignment (section 4.4 ###reference_###).\nWith regard to novelty, VCoT adds new, relevant information for a logical flow between surrounding steps (Figure 11 ###reference_###, Figure 12 ###reference_###). We observe that many baseline-generated images contain new but less relevant information (Table 1 ###reference_###). Notably, CoI often generates novel images (Figure 13 ###reference_###) that do not align with the surrounding steps, hindering the relevance aspect. In contrast, VCoT excels in increasing yet balancing both newness and relevance."
76
+ },
77
+ {
78
+ "section_id": "5.4",
79
+ "parent_section_id": "5",
80
+ "section_name": "Downstream Task Performance",
81
+ "text": "###figure_7### ###figure_8### It is clear from our qualitative results (Figure 8 ###reference_###, Figure 9 ###reference_###) that VCoT increases consistency among the text and images, while also adding relevant, novel information. Additionally, the infillings provide multimodal interpretability into computer reasoning (Figure 4 ###reference_###).\nVCoT convincingly surpasses baselines in average downstream results (Table 2 ###reference_###), winning in every category besides tying Chain-of-Images for VIST.\nFor WikiHow, we see a clear improvement in novelty (Table 2 ###reference_###), which makes sense because we augment the data with novel, relevant tokens for instruction generation. Meanwhile for consistency, VCoT ties with CoT and surpasses all other baselines. We hypothesize this tie is due to the lengthy nature of the input text, allowing Gpt-3.5 to create fairly consistent instructions with or without the use of VCoT. For descriptiveness, VCoT beats all baselines except for CoI, likely because CoI injects visually-descriptive information that isn\u2019t grounded by the original text. Specifically, CoI\u2019s image-only modality allows it to capture novel but less consistent information than that of the limited textual description. Overall, VCoT offers a balanced combination of performance improvements.\nFor Vist, VCoT\u2019s infillings improve consistency against all baselines besides CoT. Conversely, we find that VCoT loses in novelty to all baselines other than CoT, which VCoT (Table 2 ###reference_###) improves upon. These juxtaposed results offer interpretability into computer reasoning, namely that CoI introduces more novel information, CoT preserves consistency, and VCoT maintains a balance. We suspect that VCoT\u2019s loss in novelty is a result of 1) VCoT\u2019s attentiveness to consistency through Clip guidance and multipoint foveation, and 2) repeated tokens generated with our consistency-driven approach that overlap and cause repetition. These results demonstrate that VCoT provides insight on specific ways to improve computer reasoning\u2013design strategies that both enhance and balance novelty and consistency."
82
+ },
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "Conclusion",
87
+ "text": "We introduce a new research direction to generate synthetic logical infillings for sequential data, which we tackle with our novel multimodal paradigm visual chain-of-thought. We combine chain-of-thought with visual guidance to recursively generate multimodal infillings that bridge the natural logical gaps between sequential elements. By adding infillings to sequences while maintaining consistency, we augment novel, relevant information to bolster downstream task performance while also providing human-interpretable insights into the system\u2019s reasoning process. Through task unification, we can apply VCoT on various multimodal tasks.\nHuman experiments show that VCoT creates more novel and consistent logical infillings than the unimodal CoT and CoI baselines performed in parallel on the sequential datasets Vist and WikiHow, and these infillings are helpful to improve downstream task performance.\nWhile we demonstrate VCoT on the instruction summarization and visual storytelling tasks, future work can explore VCoT in new domains that could benefit from synthetic data augmentation and bolstered reasoning abilities, such as procedural planning, DNA sequencing, and video understanding.\nFurthermore, future research can look into aligning multimodal infillings with other desired downstream performance metrics. Along these lines, it is valuable to measure desired outputs through automatic evaluation metrics to support evaluation at scale."
88
+ }
89
+ ],
90
+ "appendix": [
91
+ {
92
+ "section_id": "Appendix 1",
93
+ "parent_section_id": null,
94
+ "section_name": "Appendix A Appendix",
95
+ "text": "VCoT\u2019s recursive multimodal infilling generation algorithm given two sequential text-visual pairs.\nWe show the interface of our human evaluations in Figure 10 ###reference_###.\nWe manually ensure no personal information is collected and no offensive content is presented during human evaluations.\n###figure_9### ###figure_10### ###figure_11### ###figure_12### ###figure_13### We apply our generated multimodal infillings to improve the traditionally text-based tasks of visual storytelling and instruction summarization. To integrate our multimodal infillings into the downstream tasks, we pass them along with the unified input to create an extensive summary using few-shot examples and image captioning. The information in the summary can guide models to better reason temporally in a wide range downstream tasks.\nBecause storytelling and summarization are inherently different, we use different prompting schemes. For Vist, we pass in the few-shot examples, summary, focus, current steps, and past story steps to autoregressively build the final sequential story. Since it is very important for a story to flow over time, current steps are very dependent on the past steps in time. For Wikihow, we also pass in the few-shot examples, summary, focus, current steps. Unlike the past story steps for Vist, we input the surrounding multimodal infillings because the summarized steps of WikiHow articles aren\u2019t quite as dependent on flowing over time. Summarizing \u201cHow-To\u201d articles into a series of human-understandable instructions does, however, require understanding the nearby logical steps.\nExperiments demonstrate that VCoT improves the overall quality of the final stories and instruction summarization for the example datasets Wikihow and Vist.\nThe only non-open source model we use is Gpt-3.5 text-davinci-003. We used the OpenAI API in January 2023 with temperature 0 for one candidate infilling and temperature 0.5 to generate 4 different candidate infillings, and temperature 0 otherwise for all other tasks; all other hyperparameters are set to the default. Repetitive runs on Vist/WikiHow examples yield similar results, promoting reproducibility."
96
+ }
97
+ ],
98
+ "tables": {
99
+ "1": {
100
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Consistency and novelty ratings of <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.13.1\">VCoT</span> intermediate visual-text infillings compared to the Chain-of-Thought (<span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.14.2\">CoT</span>) + Chain-of-Images (<span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.15.3\">CoI</span>) and random baselines, represented as wins-tie-loss percentages. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.16.4\">VCoT</span> has higher win percentages compared to both baselines.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T1.8\" style=\"width:377.6pt;height:64pt;vertical-align:-0.9pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-26.7pt,4.5pt) scale(0.876065632199318,0.876065632199318) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.8.8\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8.9.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"S5.T1.8.8.9.1.1\" rowspan=\"2\"><span class=\"ltx_text\" id=\"S5.T1.8.8.9.1.1.1\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T1.8.8.9.1.1.1.1\">VCoT<span class=\"ltx_text ltx_font_upright\" id=\"S5.T1.8.8.9.1.1.1.1.1\"> vs. Baseline</span></span></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.8.8.9.1.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.9.1.2.1\">Image Consistency</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.8.8.9.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.9.1.3.1\">Text Consistency</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.8.8.9.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.9.1.4.1\">Image Novelty</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" colspan=\"3\" id=\"S5.T1.8.8.9.1.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.8.8.9.1.5.1\">Text Novelty</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8.8\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.1.1.1.1\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.8.8.8.9\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.2.2.2.2\">Lose()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.3.3.3.3\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.8.8.8.10\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.4.4.4.4\">Lose()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.5.5.5.5\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.8.8.8.11\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.6.6.6.6\">Lose()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.7.7.7.7\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.8.8.8.12\">Tie</th>\n<th 
class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T1.8.8.8.8\">Lose()</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8.10.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T1.8.8.10.1.1\">\n<span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.8.8.10.1.1.1\">CoT</span>+<span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T1.8.8.10.1.1.2\">CoI</span>\n</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.10.1.2.1\" style=\"color:#713968;\">26.82</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.3\">53.02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.4\">20.16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.10.1.5.1\" style=\"color:#713968;\">28.07</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.6\">52.21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.7\">19.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.10.1.8.1\" style=\"color:#713968;\">30.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.9\">50.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.10\">19.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.10.1.11.1\" style=\"color:#713968;\">25.77</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.12\">52.86</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T1.8.8.10.1.13\">21.37</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8.11.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T1.8.8.11.2.1\">Random</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.11.2.2.1\" style=\"color:#713968;\">30.13</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.3\">50.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.4\">19.63</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.11.2.5.1\" style=\"color:#713968;\">43.40</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.6\">39.27</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.7\">17.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.11.2.8.1\" style=\"color:#713968;\">43.66</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.9\">38.95</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.10\">17.39</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T1.8.8.11.2.11.1\" style=\"color:#713968;\">41.87</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T1.8.8.11.2.12\">40.36</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"S5.T1.8.8.11.2.13\">17.77</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
101
+ "capture": "Table 1: Consistency and novelty ratings of VCoT intermediate visual-text infillings compared to the Chain-of-Thought (CoT) + Chain-of-Images (CoI) and random baselines, represented as wins-tie-loss percentages. VCoT has higher win percentages compared to both baselines."
102
+ },
103
+ "2": {
104
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The downstream wins-tie-loss percentages for <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T2.21.1\">VCoT</span> against four baselines for downstream summarization and storytelling tasks. In addition to novelty and consistency, we measure descriptivity and coherence for <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T2.22.2\">WikiHow</span> and <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T2.23.3\">Vist</span>, respectively. We emphasize in <span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.24.4\" style=\"color:#713968;\">purple</span> if <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T2.25.5\">VCoT</span> wins or loses by greater than , and we average the scores on the left. <span class=\"ltx_text ltx_font_smallcaps\" id=\"S5.T2.26.6\">VCOT</span> wins or ties in almost every averaged score as well as in most general categories.</figcaption>\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T2.14\" style=\"width:377.6pt;height:179.2pt;vertical-align:-0.8pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-39.8pt,18.8pt) scale(0.825775117567845,0.825775117567845) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.14.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.13.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt ltx_border_t\" id=\"S5.T2.14.12.13.1.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.13.1.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_tt ltx_border_t\" id=\"S5.T2.14.12.13.1.2\" rowspan=\"2\">\n<span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T2.14.12.13.1.2.1\">VCoT<span class=\"ltx_text ltx_font_upright\" id=\"S5.T2.14.12.13.1.2.1.1\"> vs. 
Baselines</span></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.13.1.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.13.1.3.1\">Novelty</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.13.1.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.13.1.4.1\">Consistency</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.13.1.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.13.1.5.1\">Descriptivity</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.13.1.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.13.1.6.1\">Average</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.8.6.6\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.3.1.1.1\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.7\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.4.2.2.2\">Lose()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.5.3.3.3\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.8\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.6.4.4.4\">Lose()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.7.5.5.5\">Win()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.9\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.6\">Lose()</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.10\">Win</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.11\">Tie</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S5.T2.8.6.6.12\">Loss</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.14.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.14.12.14.1.1\" rowspan=\"4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.14.1.1.1\">WikiHow</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.14.12.14.1.2\">Chain-of-Thought</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.14.1.3.1\" style=\"color:#713968;\">34.23</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.4\">36.90</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.5\">28.87</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.6\">30.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.7\">39.66</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.8\">30.28</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.9\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.14.1.9.1\" style=\"color:#713968;\">23.31</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.10\">56.39</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.11\">20.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.12\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.14.1.12.1\" style=\"color:#713968;\">29.20</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.13\">44.32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.14.1.14\">26.48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.15.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.14.12.15.2.1\">Chain-of-Images</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.15.2.2.1\" style=\"color:#713968;\">37.28</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.3\">25.82</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.4\">36.90</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.15.2.5.1\" style=\"color:#713968;\">44.20</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.6\">26.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.7\">29.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.8\">33.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.9\">27.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.10\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.15.2.10.1\" style=\"color:#713968;\">38.72</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.15.2.11.1\" style=\"color:#713968;\">38.44</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.12\">26.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.15.2.13\">35.13</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.16.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.14.12.16.3.1\">No Infilling</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.16.3.2.1\" style=\"color:#713968;\">33.56</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.3\">38.47</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.4\">27.98</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.16.3.5.1\" style=\"color:#713968;\">38.24</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.6\">26.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.7\">35.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.8\">33.46</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.9\">32.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.10\">33.83</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.16.3.11.1\" style=\"color:#713968;\">35.09</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.12\">32.43</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.16.3.13\">32.48</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.17.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.14.12.17.4.1\">Reference Step</th>\n<td class=\"ltx_td ltx_align_center\" 
id=\"S5.T2.14.12.17.4.2\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.17.4.2.1\" style=\"color:#713968;\">40.92</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.3\">28.42</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.4\">30.65</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.17.4.5.1\" style=\"color:#713968;\">47.99</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.6\">21.80</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.7\">30.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.17.4.8.1\" style=\"color:#713968;\">42.11</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.9\">22.56</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.10\">35.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.17.4.11.1\" style=\"color:#713968;\">43.67</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.12\">24.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.17.4.13\">32.07</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.18.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.14.12.18.5.1\" rowspan=\"2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.18.5.1.1\">Dataset</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.14.12.18.5.2\" rowspan=\"2\">\n<span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T2.14.12.18.5.2.1\">VCoT<span class=\"ltx_text ltx_font_upright\" id=\"S5.T2.14.12.18.5.2.1.1\"> vs. 
Baselines</span></span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.18.5.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.18.5.3.1\">Novelty</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.18.5.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.18.5.4.1\">Consistency</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.18.5.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.18.5.5.1\">Coherence</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"3\" id=\"S5.T2.14.12.18.5.6\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.14.12.18.5.6.1\">Average</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.9.7.7.1\">Win()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.7\">Tie</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.10.8.8.2\">Lose()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.11.9.9.3\">Win()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.8\">Tie</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.12.10.10.4\">Lose()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.13.11.11.5\">Win()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.9\">Tie</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.6\">Lose()</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.10\">Win</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.11\">Tie</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.12.12\">Loss</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.19.6\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"S5.T2.14.12.19.6.1\" rowspan=\"4\"><span class=\"ltx_text ltx_font_bold ltx_font_smallcaps\" id=\"S5.T2.14.12.19.6.1.1\">Vist</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"S5.T2.14.12.19.6.2\">Chain-of-Thought</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.3\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.19.6.3.1\" style=\"color:#713968;\">39.86</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.4\">29.42</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.5\">30.72</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.6\">30.17</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.7\">38.01</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.19.6.8.1\" style=\"color:#713968;\">31.82</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.9\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.19.6.9.1\" style=\"color:#713968;\">35.11</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.10\">35.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.11\">29.79</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.12\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.19.6.12.1\" 
style=\"color:#713968;\">35.05</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.13\">34.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T2.14.12.19.6.14\">30.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.20.7\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.14.12.20.7.1\">Chain-of-Images</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.2\">33.81</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.3\">23.71</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.20.7.4.1\" style=\"color:#713968;\">42.47</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.20.7.5.1\" style=\"color:#713968;\">35.33</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.6\">30.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.7\">34.50</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.20.7.8.1\" style=\"color:#713968;\">35.11</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.9\">36.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.10\">28.01</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.11\">34.76</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.12\">30.25</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.20.7.13\">34.99</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.21.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"S5.T2.14.12.21.8.1\">No Infilling</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.2\">34.64</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.3\">23.92</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.4\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.21.8.4.1\" style=\"color:#713968;\">41.44</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.21.8.5.1\" style=\"color:#713968;\">37.32</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.6\">27.56</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.7\">35.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.21.8.8.1\" style=\"color:#713968;\">31.21</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.9\">43.62</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.10\">25.18</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.21.8.11.1\" style=\"color:#713968;\">34.39</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.12\">34.21</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T2.14.12.21.8.13\">31.40</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.14.12.22.9\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb\" id=\"S5.T2.14.12.22.9.1\">Reference Step</th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.2\">28.73</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.3\">40.62</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.4\"><span 
class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.22.9.4.1\" style=\"color:#713968;\">30.65</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.5\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.22.9.5.1\" style=\"color:#713968;\">37.04</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.6\">30.45</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.7\">32.51</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.8\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.22.9.8.1\" style=\"color:#713968;\">36.52</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.9\">38.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.10\">25.18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.11\"><span class=\"ltx_text ltx_framed_underline\" id=\"S5.T2.14.12.22.9.11.1\" style=\"color:#713968;\">34.10</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.12\">36.46</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T2.14.12.22.9.13\">29.44</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n</figure>",
105
+ "capture": "Table 2: The downstream wins-tie-loss percentages for VCoT against four baselines for downstream summarization and storytelling tasks. In addition to novelty and consistency, we measure descriptivity and coherence for WikiHow and Vist, respectively. We emphasize in purple if VCoT wins or loses by greater than , and we average the scores on the left. VCOT wins or ties in almost every averaged score as well as in most general categories."
106
+ }
107
+ },
108
+ "image_paths": {
109
+ "1": {
110
+ "figure_path": "2305.02317v3_figure_1.png",
111
+ "caption": "Figure 1: Sequences often contain logical gaps between elements that can limit reasoning tasks; our proposed Visual Chain-of-Thought method bridges these gaps with multimodal infillings to downstreaming reasoning.",
112
+ "url": "http://arxiv.org/html/2305.02317v3/x1.png"
113
+ },
114
+ "2": {
115
+ "figure_path": "2305.02317v3_figure_2.png",
116
+ "caption": "Figure 2: Overview of our novel Visual Chain-of-Thought method. The preparation stage unifies an arbitrary input sequence as a sequence of visual-text pairs (section 4.1), constructs associated captions, and a global focus (section 4.2). Next, VCoT recursively generates multimodal infillings by first producing novel candidates (section 4.3) and then selecting the most consistent option (section 4.4). VCoT\u2019s multimodal infillings provide interpretability into the reasoning process and synthetic data for downstream tasks.",
117
+ "url": "http://arxiv.org/html/2305.02317v3/x2.png"
118
+ },
119
+ "3": {
120
+ "figure_path": "2305.02317v3_figure_3.png",
121
+ "caption": "Figure 3: Visualization of our multipoint foveation method, which extracts a global focus from a sequence of text-visual pairs to guide new infilling generations.",
122
+ "url": "http://arxiv.org/html/2305.02317v3/x3.png"
123
+ },
124
+ "4": {
125
+ "figure_path": "2305.02317v3_figure_4.png",
126
+ "caption": "Figure 4: Infilling examples. Given three inputs: image, text pairs: (vi\u22121,ti\u22121)subscript\ud835\udc63\ud835\udc561subscript\ud835\udc61\ud835\udc561(v_{i-1},t_{i-1})( italic_v start_POSTSUBSCRIPT italic_i - 1 end_POSTSUBSCRIPT , italic_t start_POSTSUBSCRIPT italic_i - 1 end_POSTSUBSCRIPT ), (vi+1,ti+1)subscript\ud835\udc63\ud835\udc561subscript\ud835\udc61\ud835\udc561(v_{i+1},t_{i+1})( italic_v start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT , italic_t start_POSTSUBSCRIPT italic_i + 1 end_POSTSUBSCRIPT ), and d\u2062e\u2062p\u2062t\u2062h=2\ud835\udc51\ud835\udc52\ud835\udc5d\ud835\udc61\u210e2depth=2italic_d italic_e italic_p italic_t italic_h = 2, VCoT recursively generates (vi,ti)subscript\ud835\udc63\ud835\udc56subscript\ud835\udc61\ud835\udc56(v_{i},t_{i})( italic_v start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_t start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ), (vi\u2032,ti\u2032)subscriptsuperscript\ud835\udc63\u2032\ud835\udc56subscriptsuperscript\ud835\udc61\u2032\ud835\udc56(v^{{}^{\\prime}}_{i},t^{{}^{\\prime}}_{i})( italic_v start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT \u2032 end_FLOATSUPERSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_t start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT \u2032 end_FLOATSUPERSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ), and (vi\u2032\u2032,ti\u2032\u2032)subscriptsuperscript\ud835\udc63\u2032\u2032\ud835\udc56subscriptsuperscript\ud835\udc61\u2032\u2032\ud835\udc56(v^{{}^{\\prime\\prime}}_{i},t^{{}^{\\prime\\prime}}_{i})( italic_v start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT \u2032 \u2032 end_FLOATSUPERSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT , italic_t start_POSTSUPERSCRIPT start_FLOATSUPERSCRIPT \u2032 \u2032 end_FLOATSUPERSCRIPT end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) that fill in the logical gaps.",
127
+ "url": "http://arxiv.org/html/2305.02317v3/x4.png"
128
+ },
129
+ "5": {
130
+ "figure_path": "2305.02317v3_figure_5.png",
131
+ "caption": "Figure 6: Consistency and novelty rating distributions of VCoT text-visual infillings compared to the Chain-of-Thought (CoT), Chain-of-Images CoI, and random baselines. Here multimodal infillings are selected as \u201cgood\u201d, \u201cneutral\u201d, or \u201cpoor\u201d, and VCoT again surpasses all other baselines.",
132
+ "url": "http://arxiv.org/html/2305.02317v3/extracted/5362893/images/ratings.png"
133
+ },
134
+ "6": {
135
+ "figure_path": "2305.02317v3_figure_6.png",
136
+ "caption": "Figure 7: Compared to the Chain-of-Thought (CoT) and Chain-of-Images (CoI) baselines, VCoT infills more consistent and relevant novel information to the input steps on \"How to Become a VFX Artist.\" CoT and CoI generate their respective infilling unimodally whereas VCoT infills using context from both modes.",
137
+ "url": "http://arxiv.org/html/2305.02317v3/x5.png"
138
+ },
139
+ "7": {
140
+ "figure_path": "2305.02317v3_figure_7.png",
141
+ "caption": "Figure 8: Comparison of a WikiHow summary produced by Chain-of-Thought versus Visual Chain-of-Thought. The purple text highlights how VCoT can improve on the summary quality compared to text-only CoT.",
142
+ "url": "http://arxiv.org/html/2305.02317v3/x6.png"
143
+ },
144
+ "8": {
145
+ "figure_path": "2305.02317v3_figure_8.png",
146
+ "caption": "Figure 9: Comparison of a Vist story produced by CoT versus Visual Chain-of-Thought. The purple text highlights how VCoT can improve on the storytelling quality compared to text-only CoT.",
147
+ "url": "http://arxiv.org/html/2305.02317v3/x7.png"
148
+ },
149
+ "9": {
150
+ "figure_path": "2305.02317v3_figure_9.png",
151
+ "caption": "Figure 10: Amazon Mechanical Turk Platform Interface",
152
+ "url": "http://arxiv.org/html/2305.02317v3/x8.png"
153
+ },
154
+ "10": {
155
+ "figure_path": "2305.02317v3_figure_10.png",
156
+ "caption": "Figure 11: Example VCoT multimodal infillings (middle text-visual pair) generated with our visual chain-of-thought method.",
157
+ "url": "http://arxiv.org/html/2305.02317v3/x9.png"
158
+ },
159
+ "11": {
160
+ "figure_path": "2305.02317v3_figure_11.png",
161
+ "caption": "Figure 12: Example VCoT multimodal infillings (middle text-visual pair) generated with our visual chain-of-thought method.",
162
+ "url": "http://arxiv.org/html/2305.02317v3/x10.png"
163
+ },
164
+ "12": {
165
+ "figure_path": "2305.02317v3_figure_12.png",
166
+ "caption": "Figure 13: Comparison of generating multimodal infillings for two surrounding steps using visual chain-of-thought vs text-only chain-of-thought plus image-only chain-of-images performed in parallel.",
167
+ "url": "http://arxiv.org/html/2305.02317v3/x11.png"
168
+ },
169
+ "13": {
170
+ "figure_path": "2305.02317v3_figure_13.png",
171
+ "caption": "Figure 14: Comparison of infillings generated with and without Multipoint Foveation (MPF) in the VIST dataset. In this example, the overall story is about people exploring different parts of a city, including a tattoo parlor, music show, city streeets, etc. The infilling generated with MPF is more consistent with the global context of the story. By contrast, the infilling generated without MPF overfits to the local information of the man with the tattoo and creates an unrealistic tattooed singing man inconsistent with the actual story.",
172
+ "url": "http://arxiv.org/html/2305.02317v3/x12.png"
173
+ }
174
+ },
175
+ "validation": true,
176
+ "references": [
177
+ {
178
+ "1": {
179
+ "title": "Vision-and-language navigation: Interpreting visually-grounded\nnavigation instructions in real environments.",
180
+ "author": "Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko\nS\u00fcnderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel.",
181
+ "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition, pp. 3674\u20133683, 2018.",
182
+ "url": null
183
+ }
184
+ },
185
+ {
186
+ "2": {
187
+ "title": "Language models are few-shot learners, 2020.",
188
+ "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan,\nPrafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda\nAskell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan,\nRewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter,\nChristopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray,\nBenjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford,\nIlya Sutskever, and Dario Amodei.",
189
+ "venue": "URL https://arxiv.org/abs/2005.14165.",
190
+ "url": null
191
+ }
192
+ },
193
+ {
194
+ "3": {
195
+ "title": "Bert: Pre-training of deep bidirectional transformers for language\nunderstanding, 2018.",
196
+ "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
197
+ "venue": "URL https://arxiv.org/abs/1810.04805.",
198
+ "url": null
199
+ }
200
+ },
201
+ {
202
+ "4": {
203
+ "title": "Fine-tuning pretrained language models: Weight initializations, data\norders, and early stopping, 2020.",
204
+ "author": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi,\nand Noah Smith.",
205
+ "venue": "URL https://arxiv.org/abs/2002.06305.",
206
+ "url": null
207
+ }
208
+ },
209
+ {
210
+ "5": {
211
+ "title": "Temporal reasoning in sequence graphs.",
212
+ "author": "J\u00fcrgen Dorn.",
213
+ "venue": "In AAAI, pp. 735\u2013740, 1992.",
214
+ "url": null
215
+ }
216
+ },
217
+ {
218
+ "6": {
219
+ "title": "Making pre-trained language models better few-shot learners, 2020.",
220
+ "author": "Tianyu Gao, Adam Fisch, and Danqi Chen.",
221
+ "venue": "URL https://arxiv.org/abs/2012.15723.",
222
+ "url": null
223
+ }
224
+ },
225
+ {
226
+ "7": {
227
+ "title": "A knowledge-grounded neural conversation model.",
228
+ "author": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao,\nWen-tau Yih, and Michel Galley.",
229
+ "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,\n32(1), Apr. 2018.",
230
+ "url": null
231
+ }
232
+ },
233
+ {
234
+ "8": {
235
+ "title": "Dialogbert: Discourse-aware response generation via learning to\nrecover and rank utterances.",
236
+ "author": "Xiaodong Gu, Kang Min Yoo, and Jung-Woo Ha.",
237
+ "venue": "In Proceedings of the AAAI Conference on Artificial\nIntelligence, volume 35, pp. 12911\u201312919, 2021.",
238
+ "url": null
239
+ }
240
+ },
241
+ {
242
+ "9": {
243
+ "title": "Widening the pipeline in human-guided reinforcement learning with\nexplanation and context-aware data augmentation, 2020.",
244
+ "author": "Lin Guan, Mudit Verma, Sihang Guo, Ruohan Zhang, and Subbarao Kambhampati.",
245
+ "venue": "URL https://arxiv.org/abs/2006.14804.",
246
+ "url": null
247
+ }
248
+ },
249
+ {
250
+ "10": {
251
+ "title": "Visual transformation telling, 2023.",
252
+ "author": "Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, and Xueqi Cheng.",
253
+ "venue": "URL https://openreview.net/forum?id=NqaGPQXblk.",
254
+ "url": null
255
+ }
256
+ },
257
+ {
258
+ "11": {
259
+ "title": "Universal language model fine-tuning for text classification, 2018.",
260
+ "author": "Jeremy Howard and Sebastian Ruder.",
261
+ "venue": "URL https://arxiv.org/abs/1801.06146.",
262
+ "url": null
263
+ }
264
+ },
265
+ {
266
+ "12": {
267
+ "title": "Visual storytelling.",
268
+ "author": "Ting-Hao K. Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Jacob\nDevlin, Aishwarya Agrawal, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv\nBatra, et al.",
269
+ "venue": "In 15th Annual Conference of the North American Chapter of the\nAssociation for Computational Linguistics (NAACL 2016), 2016.",
270
+ "url": null
271
+ }
272
+ },
273
+ {
274
+ "13": {
275
+ "title": "Language models as zero-shot planners: Extracting actionable\nknowledge for embodied agents, 2022.",
276
+ "author": "Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch.",
277
+ "venue": "URL https://arxiv.org/abs/2201.07207.",
278
+ "url": null
279
+ }
280
+ },
281
+ {
282
+ "14": {
283
+ "title": "Large language models are zero-shot reasoners, 2022.",
284
+ "author": "Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke\nIwasawa.",
285
+ "venue": "URL https://arxiv.org/abs/2205.11916.",
286
+ "url": null
287
+ }
288
+ },
289
+ {
290
+ "15": {
291
+ "title": "Knowledge base inference using bridging entities.",
292
+ "author": "Bhushan Kotnis, Pradeep Bansal, and Partha Talukdar.",
293
+ "venue": "In Proceedings of the 2015 Conference on Empirical Methods in\nNatural Language Processing, pp. 2038\u20132043, 2015.",
294
+ "url": null
295
+ }
296
+ },
297
+ {
298
+ "16": {
299
+ "title": "Wikihow: A large scale text summarization dataset, 2018.",
300
+ "author": "Mahnaz Koupaee and William Yang Wang.",
301
+ "venue": "URL https://arxiv.org/abs/1810.09305.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "17": {
307
+ "title": "Can language models learn from explanations in context?",
308
+ "author": "Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson,\nMichael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and\nFelix Hill.",
309
+ "venue": "arXiv preprint arXiv:2204.02329, 2022.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "18": {
315
+ "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks.",
316
+ "author": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir\nKarpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim\nRockt\u00e4schel, Sebastian Riedel, and Douwe Kiela.",
317
+ "venue": "In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin\n(eds.), Advances in Neural Information Processing Systems, volume 33,\npp. 9459\u20139474. Curran Associates, Inc., 2020.",
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "19": {
323
+ "title": "Imagination-augmented natural language understanding.",
324
+ "author": "Yujie Lu, Wanrong Zhu, Xin Eric Wang, Miguel Eckstein, and William Yang Wang.",
325
+ "venue": "In Proceedings of 2022 Conference of the North American\nChapter of the Association for Computational Linguistics: Human Language\nTechnologies, pp. 4392\u20134402, Dublin, Ireland, 2022.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "20": {
331
+ "title": "Memprompt: Memory-assisted prompt editing with user feedback, 2022.",
332
+ "author": "Aman Madaan, Niket Tandon, Peter Clark, and Yiming Yang.",
333
+ "venue": "URL https://arxiv.org/abs/2201.06009.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "21": {
339
+ "title": "Foveate, attribute, and rationalize: Towards safe and trustworthy ai,\n2022.",
340
+ "author": "Alex Mei, Sharon Levy, and William Yang Wang.",
341
+ "venue": "URL https://arxiv.org/abs/2212.09667.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "22": {
347
+ "title": "Socaog: Incremental graph parsing for social relation inference in\ndialogues.",
348
+ "author": "Liang Qiu, Yuan Liang, Yizhou Zhao, Pan Lu, Baolin Peng, Zhou Yu, Ying Nian Wu,\nand Song-Chun Zhu.",
349
+ "venue": "In The 59th Annual Meeting of the Association for Computational\nLinguistics (ACL), 2021.",
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "23": {
355
+ "title": "High-resolution image synthesis with latent diffusion models.",
356
+ "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00c3\u00b6rn\nOmmer.",
357
+ "venue": "In Proceedings of the IEEE Conference on Computer Vision and\nPattern Recognition (CVPR), 2022.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "24": {
363
+ "title": "Improving language understanding via contextualized,\nvisually-grounded supervision.",
364
+ "author": "Haochen Tan and Mohit Bansal.",
365
+ "venue": "In Proceedings of 38th International Conference on Machine\nLearning, pp. 8748\u20138763, San Francisco, California, 2021.",
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "25": {
371
+ "title": "Temporal reasoning in natural language inference.",
372
+ "author": "Siddharth Vashishtha, Adam Poliak, Yash Kumar Lal, Benjamin Van Durme, and\nAaron Steven White.",
373
+ "venue": "In Findings of the Association for Computational Linguistics:\nEMNLP 2020, pp. 4070\u20134078, 2020.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "26": {
379
+ "title": "Rationale-augmented ensembles in language models.",
380
+ "author": "Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou.",
381
+ "venue": "arXiv preprint arXiv:2207.00747, 2022a.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "27": {
387
+ "title": "Language models with image descriptors are strong few-shot\nvideo-language learners.",
388
+ "author": "Zhenhailong Wang, Manling Li, Ruochen Xu, Luowei Zhou, Jie Lei, Xudong Lin,\nShuohang Wang, Ziyi Yang, Chenguang Zhu, Derek Hoiem, et al.",
389
+ "venue": "arXiv preprint arXiv:2205.10747, 2022b.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "28": {
395
+ "title": "Chain of thought prompting elicits reasoning in large language\nmodels.",
396
+ "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and\nDenny Zhou.",
397
+ "venue": "arXiv preprint arXiv:2201.11903, 2022.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "29": {
403
+ "title": "A hybrid retrieval-generation neural conversation model.",
404
+ "author": "Liu Yang, Junjie Hu, Minghui Qiu, Chen Qu, Jianfeng Gao, W Bruce Croft,\nXiaodong Liu, Yelong Shen, and Jingjing Liu.",
405
+ "venue": "In Proceedings of the 28th ACM international conference on\ninformation and knowledge management, pp. 1341\u20131350, 2019.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "30": {
411
+ "title": "Bartscore: Evaluating generated text as text generation.",
412
+ "author": "Weizhe Yuan, Graham Neubig, and Pengfei Liu.",
413
+ "venue": "Advances in Neural Information Processing Systems,\n34:27263\u201327277, 2021.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "31": {
419
+ "title": "Temporal reasoning with medical data\u2014a review with emphasis on\nmedical natural language processing.",
420
+ "author": "Li Zhou and George Hripcsak.",
421
+ "venue": "Journal of biomedical informatics, 40(2):183\u2013202, 2007.",
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "32": {
427
+ "title": "Visualize before you write: Imagination-guided open-ended text\ngeneration.",
428
+ "author": "Wanrong Zhu, An Yan, Yujie Lu, Wenda Xu, Xin Eric Wang, Miguel Eckstein, and\nWilliam Yang Wang.",
429
+ "venue": "arXiv preprint arXiv:2210.03765, 2022.",
430
+ "url": null
431
+ }
432
+ }
433
+ ],
434
+ "url": "http://arxiv.org/html/2305.02317v3"
435
+ }
20240123/2305.07730v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.11321v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.13208v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.13998v5.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.14800v6.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.18417v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2305.19004v3.json ADDED
@@ -0,0 +1,102 @@
1
+ {
2
+ "title": "Policy Gradient Algorithms for Robust MDPs with Non-Rectangular Uncertainty Sets",
3
+ "abstract": "We propose policy gradient algorithms for robust infinite-horizon Markov decision processes (MDPs) with non-rectangular uncertainty sets, thereby addressing an open challenge in the robust MDP literature. Indeed, uncertainty sets that display statistical optimality properties and make optimal use of limited data often fail to be rectangular. Unfortunately, the corresponding robust MDPs cannot be solved with dynamic programming techniques and are in fact provably intractable. We first present a randomized projected Langevin dynamics algorithm that solves the robust policy evaluation problem to global optimality but is inefficient. We also propose a deterministic policy gradient method that is efficient but solves the robust policy evaluation problem only approximately, and we prove that the approximation error scales with a new measure of non-rectangularity of the uncertainty set.\nFinally, we describe an actor-critic algorithm that finds an -optimal solution for the robust policy improvement problem in iterations. We thus present the first complete solution scheme for robust MDPs with non-rectangular uncertainty sets offering global optimality guarantees. Numerical experiments show that our algorithms compare favorably against state-of-the-art methods.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Markov decision processes (MDPs) form the backbone of reinforcement learning and dynamic decision-making [7 ###reference_7###, 40 ###reference_40###, 47 ###reference_47###, 38 ###reference_38###]. Classical MDPs operate in a time-invariant stochastic environment represented by a known constant transition kernel.\nIn most applications, however, the transition kernel is only indirectly observable through a state-action trajectory generated under a fixed policy. In addition, it may even change over time. Uncertain and non-stationary transition kernels are routinely encountered, for example, in finance, healthcare or robotics etc. [19 ###reference_19###, 45 ###reference_45###, 53 ###reference_53###]. In these applications it is thus expedient to work with robust MDPs [39 ###reference_39###, 57 ###reference_57###, 58 ###reference_58###],\nwhich assume that the unknown true transition kernel falls within a known uncertainty set and aim to identify a policy that exhibits the best performance under the worst-case transition kernel in this uncertainty set. Optimal policies of robust MDPs display a favorable out-of-sample performance when the transition kernel must be estimated from scarce data or changes over time [37 ###reference_37###, 54 ###reference_54###].\nRobust MDPs are also popular in machine learning\u2014particularly in inverse reinforcement learning with expert demonstrations or in offline reinforcement learning with time-varying environments [13 ###reference_13###, 52 ###reference_52###, 51 ###reference_51###, 12 ###reference_12###].\nThe literature on robust MDPs distinguishes rectangular and non-rectangular uncertainty sets. An uncertainty set is called -rectangular (or -rectangular) if it is representable as a Cartesian product of separate uncertainty sets for the transition probabilities associated with the different current states (or current state-action pairs ). Otherwise, the uncertainty set is called non-rectangular. Rectangularity is intimately related to computational tractability. Indeed, robust MDPs with rectangular polyhedral uncertainty sets can be solved in polynomial time, whereas robust MDPs with non-rectangular polyhedral uncertainty sets are NP-hard [58 ###reference_58###]. Most existing papers on robust MDPs focus on rectangular uncertainty sets. However, statistically optimal uncertainty sets often fail to be rectangular. Indeed, classical Cram\u00e9r-Rao bounds imply that non-rectangular ellipsoidal uncertainty sets around the maximum likelihood estimator of the transition kernel constitute\u2014in an asymptotic sense\u2014the smallest possible confidence sets for the ground truth transition kernel (see [58 ###reference_58###, \u00a7 5] and Appendix A ###reference_###). Results from large deviations theory further imply that non-rectangular conditional relative entropy uncertainty sets lead to polices that display an optimal trade-off between in-sample performance and out-of-sample disappointment [46 ###reference_46###, 30 ###reference_30###].\nRobust MDPs with rectangular uncertainty sets are usually addressed with value iteration, policy iteration, convex reformulation, or policy gradient methods. 
Value iteration constructs a sequence of increasingly accurate estimates for the value function of the optimal policy by iterating the robust Bellman operator [25 ###reference_25###, 39 ###reference_39###, 58 ###reference_58###], whereas\npolicy iteration computes a sequence of increasingly optimal policies by iteratively computing the value function of the current policy and updating it greedily [25 ###reference_25###, 58 ###reference_58###]. The convex reformulation method is reminiscent of the linear programming approach for non-robust MDPs [23 ###reference_23###]. It uses an exponential change of variables to construct a convex optimization problem whose solution coincides with the fixed point of an entropy-regularized robust Bellman operator [22 ###reference_22###]. Policy gradient methods, finally, construct a sequence of increasingly optimal policies by locally updating the current policy along the policy gradient of the value function [54 ###reference_54###].\nValue iteration methods enjoy linear convergence and are thus theoretically faster than most known policy gradient methods, which are only guaranteed to display sublinear convergence. However, evaluating the robust Bellman operator can be costly, and value iteration methods can be\nslower than policy gradient methods for large state and action spaces [54 ###reference_54###]. This observation has spurred significant\ninterest in gradient-based methods. A policy gradient method tailored to robust MDPs with specially structured -rectangular uncertainty sets is described in [56 ###reference_56###], while a policy mirror descent algorithm that can handle general -rectangular uncertainty sets is developed in [32 ###reference_32###]. In addition, there exists a projected policy gradient method for robust MDPs with -rectangular uncertainty sets [54 ###reference_54###].\nWhile this paper was under review, it has been discovered that policy gradient methods for the robust policy evaluation problem can in fact achieve linear convergence [31 ###reference_31###]. We emphasize that the convergence guarantees of all reviewed solution methods for robust MDPs critically exploit a robust version of Bellman\u2019s optimality principle, which ceases to hold for non-rectangular uncertainty sets [21 ###reference_21###].\nTo make things worse, the solution methods described above become inefficient or converge to strictly suboptimal solutions of the robust MDP if the uncertainty set fails to be rectangular.\nFor example, value iteration outputs the optimal value function corresponding to the -rectangular hull of the uncertainty set. This function provides only an upper bound on the sought value function if the uncertainty set is non-rectangular [58 ###reference_58###, Proposition 3.6]. The corresponding optimal policy is therefore over-conservative and may perform poorly in out-of-sample tests [58 ###reference_58###, \u00a7 6]. Policy iteration, on the other hand, is computationally excruciating because the robust policy evaluation subroutine is already NP-hard [58 ###reference_58###, Theorem 1]. However, there exists an efficient approximate policy iteration scheme based on ideas from robust optimization [58 ###reference_58###]. This scheme characterizes the value function of any given policy as the solution of an adjustable robust optimization problem, which can be solved approximately but efficiently in linear decision rules. However, the decision rule approximation is accurate only for small uncertainty sets. 
A Frank-Wolfe policy gradient method for robust policy evaluation with a non-rectangular conditional relative entropy uncertainty set is described in [30 ###reference_30###]. However, this method is only guaranteed to find a stationary point. A projected policy gradient method for robust MDPs with generic convex uncertainty sets is proposed in [54 ###reference_54###]. However, its convergence proof relies on the assumption that the set of worst-case kernels is finite, which is difficult to check in practice. The proof also assumes access to a robust policy evaluation oracle, but no such oracle is provided.\nThe main contributions of our paper can be summarized as follows.\nWe show that robust policy evaluation problems with non-rectangular uncertainty sets can be solved to global optimality with a projected Langevin dynamics algorithm. Numerical results suggest that if the uncertainty set happens to be rectangular, then this randomized algorithm is competitive with state-of-the-art deterministic first-order methods in terms of runtime.\nWe present a conservative policy iteration algorithm that solves robust policy evaluation problems approximately. The approximation error is shown to scale with a new measure of non-rectangularity of the uncertainty set. We prove that the same method solves robust policy evaluation problems with rectangular uncertainty sets to any accuracy in iterations, where denotes the number of states. In contrast, the iteration complexity of the state-of-the-art policy gradient method for this problem class developed in [54 ###reference_54###] includes an extra factor , where denotes the number of actions.\nWe present an actor-critic method that solves robust policy improvement problems with non-rectangular uncertainty sets to any accuracy in iterations. This is the first complete solution scheme for robust MDPs with non-rectangular uncertainty sets offering global optimality guarantees. A similar projected gradient descent algorithm with access to an abstract approximate robust policy evaluation oracle is described in [54 ###reference_54###]. However, the policy evaluation oracle is not made explicit for general non-rectangular uncertainty sets. In addition, the convergence proof in [54 ###reference_54###] relies on the implicit assumption that the set of worst-case transition kernels for any given policy is finite, which would be difficult to certify in practice.\nOur theoretical contributions critically rely on celebrated results in approximate dynamic programming and multi-agent reinforcement learning. Specifically, we adapt a policy iteration algorithm for non-robust MDPs described in [26 ###reference_26###] to solve robust policy evaluation problems. In addition, the convergence analysis of our actor-critic algorithm for robust policy improvement exploits a gradient dominance result originally developed for multi-agent reinforcement learning problems with a fixed transition kernel and adapts it to single-agent MDPs with an uncertain transition kernel.\nWe remark that if the uncertainty set of the transition kernel is non-rectangular, then the corresponding robust MDP fails to be time consistent [36 ###reference_36###, 43 ###reference_43###, 44 ###reference_44###]. Thus, it satisfies no Bellman-type equation and cannot be addressed with dynamic programming. 
Even though alternative optimality criteria are discussed in [11 ###reference_11###, 29 ###reference_29###, 59 ###reference_59###], robust MDPs with general non-rectangular ambiguity sets remain unsolved to date."
+ },
+ {
+ "section_id": "2",
+ "parent_section_id": null,
+ "section_name": "Rectangular and Non-rectangular Uncertainty Sets",
+ "text": "Consider an MDP\ngiven by a five-tuple comprising a finite state space , a finite action space , a transition kernel , a cost-per-stage function ,\nand an initial distribution .\nNote that describes a controlled discrete-time stochastic system, where the state at time and the action applied at time are denoted as random variables and , respectively.\nIf the system is in state at time and action is applied, then an immediate cost is incurred, and the system moves to state at time with probability .\nActions are chosen according to a policy that prescribes a random action at time depending on the state history up to time and the action history up to time . Throughout the rest of the paper, we restrict attention to stationary policies, which are described by a stochastic kernel , that is, denotes the probability of choosing action if the current state is . Unless otherwise stated, we assume without loss of generality that for all and\nGiven a stationary policy , there exists a unique probability measure defined on the canonical sample space equipped with its power set -algebra \nsuch that for every , while\nfor all , , and .\nWe denote the expectation operator with respect to by . One readily verifies that the stochastic process represents a time-homogeneous Markov chain under with transition probabilities .\nThroughout this paper, we assess the desirability of a policy by its expected net present cost with respect to a prescribed discount factor .\nThe value function corresponding to a transition kernel and a stationary policy is defined through\n\nOne can show that constitutes a continuous, rational function of and [40 ###reference_40###, Appendix A].\nThe policy evaluation problem consists in evaluating the value function for a fixed policy and initial state , whereas the policy improvement problem seeks a policy that solves .\nIn this paper, we are interested in robust MDPs. We thus assume that the transition kernel is only known to belong to an uncertainty set , and we assess the desirability of a policy by its worst-case expected net present cost.\nThe worst-case value function associated with a given policy and an uncertainty set is defined through\n\nThe robust policy evaluation problem then consists in evaluating the worst-case value function for a fixed policy and initial state , and the robust policy improvement problem aims to solve\nThe structure of the uncertainty set largely determines the difficulty of solving the robust policy evaluation and improvement problems. These problems become relatively easy if the uncertainty set is rectangular.\nA set of transition matrices is called\n-rectangular [25 ###reference_25###]\nif for some , ;\n-rectangular [28 ###reference_28###] if for some .\n\nThere is also an alternative notion of rectangularity, known as -rectangularity [19 ###reference_19###], which models the transition kernel as a linear function of an uncertain factor matrix. We will not study -rectangular uncertainty sets in the remainder. 
From now on, we call an uncertainty set non-rectangular if it is neither -rectangular nor -rectangular (nor -rectangular).\nAs the probability simplex and thus also have an empty interior, we employ a reparametrization to represent as the image of a solid parameter set .\nSpecifically, we assume that there exists an affine function that maps a solid parameter set to such that \nThis reparametrization may lead to a dimensionality reduction as it allows us to account for structural knowledge about the uncertainty set (e.g., it may be known that certain transitions are impossible or that some transitions have the same probabilities).\nThis reparametrization will also help us to establish algorithmic guarantees in Section 3 ###reference_###.\nIf is rectangular, then the robust policy improvement problem (3 ###reference_###) can be solved in polynomial time using robust value iteration.\nIf the parameter set is representable through linear and convex quadratic constraints, and if induces an -rectangular uncertainty set , then an -optimal solution to the robust policy improvement problem (3 ###reference_###) can be computed in polynomial time .\nIf the uncertainty set fails to be rectangular, on the other hand, then the robust policy evaluation problem is strongly NP-hard even if is a convex polyhedron.\nDeciding whether the worst-case value function (2 ###reference_###) over a non-rectangular polyhedral uncertainty set exceeds a given value is strongly NP-hard for any stationary policy .\nTheorem 2.5 ###reference_heorem5### implies that, unless P=NP, there exists no algorithm for computing an -optimal solution of the robust policy evaluation problem (2 ###reference_###) with a non-rectangular uncertainty set in time polynomial in the input size and . Thus, the best we can realistically hope for is to develop methods that have a runtime polynomial in ."
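To ground the objects defined above, the following minimal Python sketch evaluates a fixed stationary policy under a fixed kernel by solving the linear Bellman system noted later in Section 3 (the value function solves v = c_pi + gamma * P_pi v). The array shapes and names (P, pi, c, gamma) are illustrative conventions introduced here, not the paper's notation.

```python
import numpy as np

def policy_value(P, pi, c, gamma):
    """Value function v, with v(s) the expected discounted cost from state s,
    obtained by solving the Bellman system v = c_pi + gamma * P_pi v.

    Illustrative shapes: P is (S, A, S) with P[s, a, t] the probability of
    moving from s to t under action a; pi is (S, A); c is (S, A)."""
    S = P.shape[0]
    P_pi = np.einsum("sa,sat->st", pi, P)  # state-to-state kernel under pi
    c_pi = np.einsum("sa,sa->s", pi, c)    # expected stage cost under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
```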
+ },
+ {
+ "section_id": "3",
+ "parent_section_id": null,
+ "section_name": "Robust Policy Evaluation",
+ "text": "Throughout this section we fix a policy and a convex and compact parameter set that induces an uncertainty set . Our aim is to solve the robust policy evaluation problem (2 ###reference_###) to global optimality.\nThe following definitions are needed throughout the paper.\nThe action-value function corresponding to a transition kernel and a stationary policy is defined through\n\nThe action-next-state value function corresponding to a transition kernel and a stationary policy is defined through\n\nThe discounted state visitation distribution corresponding to a transition kernel , a stationary policy , and an initial state is defined through\nThe discounted state-action visitation distribution\n corresponding to a transition kernel and an initial state-action pair is defined through\n\nLemma B.1 ###reference_heorem1### in the appendix shows that the value functions and are related through several linear equations, which imply the Bellman equation for One can use these equations to express , and as explicit rational functions of and These functions are well-defined on dense subsets of and and, in particular, on open neighborhoods of the physically meaningful domains and . In the following, we can thus assume that the functions , and extend to open sets containing and . This implies in particular that the gradients of these functions with respect to and are well-defined.\nOne can show that is the -th entry of the matrix , where If we set then by\n[40 ###reference_40###, Theorem 6.1.1].\nA robust MDP can be viewed as a zero-sum game between the decision maker, who selects the policy and an adversary, who chooses the transition kernel . In this view, the parameter encodes the adversary\u2019s policy.\nAdopting a similar reasoning as in [48 ###reference_48###, Theorem 1], we can thus derive an explicit formula for the gradient of the value function with respect to the adversary\u2019s policy parameter .\nFor any and , we have\n\nBy Lemma B.1 ###reference_heorem1###(i) ###reference_1### and the chain rule we have\nThus, it remains to find an explicit formula for the derivative of the action-value function with respect to the transition kernel . A direct calculation reveals that\nwhere the first and third equalities use Lemmas B.1 ###reference_heorem1###(ii) ###reference_2### and B.1 ###reference_heorem1###(iii) ###reference_3###, respectively. The last equality follows from the defining properties (1a ###reference_###) and (1b ###reference_###) of .\nRepeating the above reasoning for the state-action pair instead of yields\nSubstituting the above expression for into (5 ###reference_###) and recalling that constitutes a Markov chain under yields\nIteratively reformulating for , we finally obtain\nwhere the last equality exploits the definition of the discounted state visitation distribution.\nThe claim then follows by substituting the above expression into (4 ###reference_###).\nLemma 3.5 ###reference_heorem5### is a key ingredient for two complementary algorithms for solving the robust policy evaluation problem (2 ###reference_###) with a non-rectangular uncertainty set. Section 3.1 ###reference_### first develops a Markov chain Monte Carlo method for solving (2 ###reference_###) exactly. Next, Section 2 ###reference_### develops a more efficient conservative policy iteration method for solving (2 ###reference_###) approximately. Throughout the two sections we fix an initial state ."
+ },
+ {
+ "section_id": "3.1",
+ "parent_section_id": "3",
+ "section_name": "Projected Langevin Dynamics",
+ "text": "We now develop a Markov Chain Monte Carlo method to solve the robust policy evaluation problem (2 ###reference_###) to global optimality and derive its convergence rate in expectation.\nTo this end, we assume throughout this section that is a compact convex body, and we consider the problem of sampling from the Gibbs distribution\nwhere represents the inverse temperature.\nNote that the denominator is finite because is compact and is continuous in . Indeed, is continuous in [40 ###reference_40###, Appendix A], and is affine in \nSampling from the Gibbs distribution is of interest because the robust policy evaluation problem (2 ###reference_###) is equivalent to\nand because converges weakly to the uniform distribution on the set of global maximizers of (6 ###reference_###)\nas tends to infinity [24 ###reference_24###, Section 2]. We use the discrete-time counterpart of the Langevin diffusion [41 ###reference_41###] to generate samples that are (approximately) governed by the Gibbs distribution see Algorithm 1 ###reference_###. In each iteration , Algorithm 1 ###reference_### first uses Lemma 3.5 ###reference_heorem5### to compute the adversary\u2019s policy gradient at the current iterate perturbs it by adding Gaussian noise, and then applies a projected gradient step to find the next iterate . After iterations, Algorithm 1 ###reference_### outputs a random iterate whose distribution approximates in the -Wasserstein distance [27 ###reference_27###, Theorem 1].\nIf , , and then there exist universal constants , , and such that for and\n, the distribution of the output of Algorithm 1 ###reference_### satisfies\nBy [54 ###reference_54###, Lemma 4], there exists a constant such that the objective function of problem (6 ###reference_###) is -smooth in .\nIn addition, is a convex body. The claim thus follows from [27 ###reference_27###, Proposition 3].\nTheorem 3.7 ###reference_heorem7### shows that the number of iterations needed by Algorithm 1 ###reference_### to compute an -optimal solution for the robust policy evaluation problem (2 ###reference_###) scales exponentially with the dimension of the uncertain parameter and with the number of desired accuracy digits \nThis is consistent with the hardness result of Theorem 2.5 ###reference_heorem5###.\nNonetheless, Algorithm 1 ###reference_### solves the robust policy evaluation problem via a simple gradient-based approach and enjoys global optimality guarantees even if the uncertainty set fails to be rectangular.\nThe following modifications can improve the scalability of Algorithm 1 ###reference_### in practice. First, Algorithm 1 ###reference_### computes an exact policy gradient in every iteration, which can be costly when the state and action spaces are large. Stochastic or approximate policy gradients may be cheaper to evaluate. Fortunately, Theorem 3.7 ###reference_heorem7### continues to hold when stochastic instead of exact policy gradients are used provided that they are affected by sub-Gaussian noise [27 ###reference_27###, Proposition 3]. In addition, the projection onto the parameter space is computed in every iteration, which can be costly. As is convex, however, the Euclidean projection subroutine solves a convex program and is thus amenable to efficient general-purpose solvers that scale to high dimensions [50 ###reference_50###]. 
For specific non-rectangular polyhedral uncertainty sets, Euclidean balls, or -balls, projections are available in closed form or can be computed highly efficiently with specialized methods [17 ###reference_17###, 35 ###reference_35###].\nThe concentration behavior of the discrete-time counterpart of the Langevin diffusion is generally open despite some recent results for convex objective functions [2 ###reference_2###]. We leave the study of strong concentration bounds complementing Theorem 3.7 ###reference_heorem7### for future research. However,\nby applying Markov\u2019s inequality, we directly obtain the following probabilistic guarantee.\nUnder the assumptions of Theorem 3.7 ###reference_heorem7### we have\n for all"
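A minimal sketch of the projected Langevin ascent step of Algorithm 1, assuming callables `grad` (the adversary's policy gradient of Lemma 3.5, e.g. assembled as in the sketch above) and `project` (a Euclidean projection onto the convex body). The noise scale sqrt(2 * eta / beta) is the standard discretization of the Langevin diffusion targeting the Gibbs distribution and stands in for the paper's exact schedule, which is elided in this extraction.

```python
import numpy as np

def projected_langevin_dynamics(grad, project, xi0, beta, eta, num_iters, rng=None):
    """Sketch of Algorithm 1: randomized global maximization of the
    worst-case objective over the parameter set via Langevin dynamics.

    beta : inverse temperature of the target Gibbs distribution
    eta  : stepsize of the discretized diffusion"""
    rng = np.random.default_rng() if rng is None else rng
    xi = xi0.copy()
    for _ in range(num_iters):
        noise = np.sqrt(2.0 * eta / beta) * rng.standard_normal(xi.shape)
        # gradient ascent step perturbed by Gaussian exploration noise
        xi = project(xi + eta * grad(xi) + noise)
    return xi
```

In practice one would additionally track the best iterate found during execution, as done in the experiments of Section 5.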
+ },
+ {
+ "section_id": "3.2",
+ "parent_section_id": "3",
+ "section_name": "Conservative Policy Iteration",
+ "text": "\u200b\u200bThe robust policy evaluation problem \u200b(2 ###reference_###)\u200b is challenging because the objective function is non-concave in Accordingly, it is not surprising that the runtime of the Markov Chain Monte Carlo method developed in Section 3.1 ###reference_### scales exponentially with the dimension of . In this section we show that a stationary point of (2 ###reference_###) can be found in time polynomial in . We will also show that the suboptimality of this stationary point vis-\u00e0-vis the global maximum of (2 ###reference_###) admits a tight computable estimate that depends on the degree of non-rectangularity of the uncertainty set \nTo this end, we first note that problem (2 ###reference_###) is susceptible to a Frank-Wolfe algorithm [18 ###reference_18###], see Algorithm 2 ###reference_###.\nA similar Frank-Wolfe method has been proposed to solve the policy improvement problem associated with non-robust MDPs [8 ###reference_8###]. This method is often referred to as conservative policy iteration (CPI) [26 ###reference_26###].\nAlgorithm 2 ###reference_### can thus be viewed as a CPI method for robust policy evaluation problems with non-rectangular uncertainty sets.\nIn each iteration , Algorithm 2 ###reference_### computes an -optimal solution of the direction-finding subproblem\nwhich linearizes the objective function of problem (2 ###reference_###) around the current iterate . The next iterate is constructed as a point on the line segment connecting and \nThe algorithm terminates as soon as the (approximate) Frank-Wolfe gap drops below the prescribed tolerance .\nA more explicit reformulation of the direction-finding subproblem (7 ###reference_###) suitable for implementation is provided in Proposition 3.11 ###reference_heorem11### below. For notational convenience, we define the adversary\u2019s advantage function as\nIt quantifies the extent to which the adversary prefers to set the next state to instead of sampling it from , assuming that the future dynamics of the states and actions are determined by and\nIt can be shown that the direction-finding subproblem (7 ###reference_###) can be equivalently expressed in terms of the adversary\u2019s advantage function.\nProblem (7 ###reference_###) is equivalent to\n\nUnder the trivial embedding for , nature\u2019s policy gradient constitutes a tensor containing a in position and zeros elsewhere. Thus, Lemma 3.5 ###reference_heorem5### implies that\nwhere the second equality follows from (2 ###reference_###) and the law of total probability, which implies that . The last equality follows from Lemma B.3 ###reference_heorem3### in the appendix. Thus, (7 ###reference_###) and (8 ###reference_###) are equivalent.\nThe following assumption is instrumental for the main results of this section.\nThe Markov chain induced by the given policy is irreducible for every , where denotes the smallest -rectangular uncertainty set that contains .\nAssumption 1 ###reference_mption1### ensures that, for every transition kernel , every state can be reached from any other state within a finite number of transitions. 
Similar assumptions are frequently adopted in the literature on robust and non-robust MDPs [58 ###reference_58###, 20 ###reference_20###].\nIn the following, we define the distribution mismatch coefficient associated with two transition kernels as , and we set the universal distribution mismatch coefficient to .\nIn addition, we define the degree of non-rectangularity of the uncertainty set with respect to an anchor point as\nwhere denotes again the smallest -rectangular uncertainty set that contains , and we set the absolute degree of non-rectangularity of to .\nNote that if is -rectangular, then , and thus vanishes for every anchor point implying that If is non-rectangular, however, then and thus is non-negative for every . Hence, is non-negative, too.\nAssumption 1 ###reference_mption1### ensures that for all As and are respectively continuous in and for all , and as is compact, it is clear that is finite and strictly positive. Similarly, as is continuous in [54 ###reference_54###, Lemma 4] while and are compact, Berge\u2019s maximum theorem [5 ###reference_5###, pp. 115-116] ensures that is continuous in Thus, is finite and non-negative. Note that both and depend only on , and\nThe following theorem uses Assumption 1 ###reference_mption1### to show that the CPI algorithm offers a global performance guarantee.\nSuppose that Assumption \u200b1 ###reference_mption1### holds and that For every , define the approximate Frank-Wolfe gap\nwhere denotes the -optimal solution of problem (7 ###reference_###) computed in the -th iteration of Algorithm 2 ###reference_###, and let . Then, Algorithm 2 ###reference_### terminates within iterations, and its output satisfies\n\nTheorem 3.14 ###reference_heorem14### implies that if is -rectangular, in which case , then Algorithm 2 ###reference_### solves the robust policy evaluation problem (2 ###reference_###) to global optimality. This insight is formalized in the next corollary.\nSuppose that all assumptions of Theorem 3.14 ###reference_heorem14### hold and that is -rectangular. Then,\nthe output of Algorithm 2 ###reference_### satisfies\nA policy gradient method that solves robust policy evaluation problems with -rectangular uncertainty sets to global optimality is proposed in [54 ###reference_54###]. While displaying the same dependence on , one can show that the iteration complexity of this alternative method exceeds that of our algorithm by a factor ; see also the more detailed discussion in Section 5.2 ###reference_###.\nIn addition, the method in [54 ###reference_54###] requires an exact projection oracle onto the uncertainty set, while our Frank-Wolfe algorithm only requires approximate solutions of the direction-finding subproblem (7 ###reference_###). Our projection-free Frank-Wolfe algorithm is thus preferable for non-elementary uncertainty sets. Numerical experiments suggest that the policy gradient method developed in [54 ###reference_54###] converges faster than dynamic programming methods despite its suboptimal theoretical convergence rate.\nThe proof of Theorem 3.14 ###reference_heorem14### relies on a few preparatory results. 
First, we need the following variant of the celebrated performance difference lemma for non-robust MDPs [26 ###reference_26###, Lemma 6.1], which compares the performance of different transition kernels under a fixed policy .\nFor any , and , we have\n\nFor any we have\nwhere the first equality follows from Lemma B.1 ###reference_heorem1###(ii) ###reference_2###, the second equality follows from Lemmas B.1 ###reference_heorem1###(i) ###reference_1### and B.1 ###reference_heorem1###(iii) ###reference_3###, and the last equality holds because of (2 ###reference_###). Substituting the above equation for into the above equation for yields\nBy iteratively expanding for all and recalling that , we then find\nwhere the second equality follows from the construction of in Definition 3.3 ###reference_heorem3###. By Lemma B.1 ###reference_heorem1###(i) ###reference_1###, we finally obtain\nwhere the last equality follows from\u200b (9 ###reference_###) and the identity . Thus, the claim follows.\nStep 4 ###reference_### of Algorithm 2 ###reference_### readily implies that\nThus, the difference between any two consecutive iterates of Algorithm 2 ###reference_### is bounded by twice the stepsize.\nThe next lemma, which is inspired by [26 ###reference_26###, Theorem 4.1], translates this bound to one for the difference between the discounted state visitation frequencies corresponding to two consecutive iterates.\nThe iterates of Algorithm 2 ###reference_### satisfy\n\nWe use to denote the probability mass function of under conditional on .\nIts dependence on and is suppressed to avoid clutter. Note first that for any we have\nTaking absolute values on both sides, using the triangle inequality and summing over then yields\nwhere the second inequality follows from (10 ###reference_###).\nBy unfolding this recursive bound for all time points from to and noting that we then obtain\nNext, from the definition of it is clear that\nBy (11 ###reference_###), we therefore find\nwhere the second inequality holds because .\nThe next lemma shows that, under the adaptive stepsize schedule of Theorem 3.14 ###reference_heorem14###, the objective function values of the transition kernels generated by Algorithm 2 ###reference_### are non-decreasing. It is inspired by [26 ###reference_26###, Corollary 4.2].\nUnder the stepsize schedule of Theorem 3.14 ###reference_heorem14###, we have for all .\nThroughout the proof we use to denote the -optimal solution of problem (7 ###reference_###) that is computed in the -th iteration of Algorithm 2 ###reference_###. By Lemma 3.17 ###reference_heorem17###, we then have\nwhere the second equality follows from the construction of in Algorithm 2 ###reference_###, and the last equality follows from Lemma B.3 ###reference_heorem3###.\nAdding and subtracting on the right hand side of (12 ###reference_###) and using\na similar reasoning as in the proof of Proposition 3.11 ###reference_heorem11### to express the approximate Frank-Wolfe gap in terms of the advantage function , we obtain\nThe first inequality in the above expression follows from H\u00f6lder\u2019s inequality. Recall next that for all , which implies that for all . By the definition of the advantage function and by Lemmas B.1 ###reference_heorem1###(ii) ###reference_2### and B.1 ###reference_heorem1###(iii) ###reference_3###, we then have for all and . This justifies the second inequality. The last inequality follows from Lemma 3.19 ###reference_heorem19###. The stepsize was chosen to maximize the last expression. 
Replacing by this formula yields the desired bound.\nWe can now show that CPI terminates within iterations with a Frank-Wolfe gap of at most .\nUnder the stepsize schedule of Theorem 3.14 ###reference_heorem14###, Algorithm 2 ###reference_### terminates within iterations, and its output satisfies\nTheorem 5 in [30 ###reference_30###] also shows that Algorithm 2 ###reference_### converges to an approximate stationary point but does not provide an explicit expression for the iteration complexity. While [30 ###reference_30###] focuses on a specific non-rectangular uncertainty set constructed from the conditional relative entropy and uses exact line search to determine the stepsize sequence, which is computationally expensive, Lemma 3.23 ###reference_heorem23### applies to general non-rectangular uncertainty sets and leverages an easily computable stepsize schedule. In addition, [30 ###reference_30###] assumes to have access to an exact optimizer of the direction-finding subproblem, while Lemma 3.23 ###reference_heorem23### only requires access to an -optimal solution.\nNote that if Algorithm 2 ###reference_### does not terminate in iteration , then , and hence by Lemma 3.21 ###reference_heorem21###.\nAs , we have for every . The above per-iteration improvement can thus only persist for at most iterations. If Algorithm 2 ###reference_### terminates in iteration , however, then and thus we have\nHence, the claim follows.\nWe are now ready to establish the convergence behavior of Algorithm 2 ###reference_###.\nLet be any maximizer of the robust policy evaluation problem (2 ###reference_###) when is replaced with the smallest -rectangular uncertainty set that contains .\nAs we have\nThis in turn implies that\nwhere the first two equalities exploit Lemma 3.17 ###reference_heorem17### and Lemma B.3 ###reference_heorem3###, respectively.\nThe third inequality follows from the definition of the distribution mismatch coefficient and from H\u00f6lder\u2019s inequality, \u200bwhich applies because and hence for all . The third equality exploits the -rectangularity of , and the last equality follows from a variant of Proposition 3.11 ###reference_heorem11### where is replaced by .\nHence, we find\nwhere the second inequality holds thanks to the definition of and Lemma 3.23 ###reference_heorem23###. The claim finally follows because and are trivially bounded above by and respectively, which are independent of the output of Algorithm 2 ###reference_###."
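The following sketch mirrors the structure of Algorithm 2, assuming a linear oracle `lin_oracle` for the direction-finding subproblem (7) (an LP when the parameter set is polyhedral) and a gradient oracle `grad` as in the earlier sketches. The stepsize rule below is a placeholder proportional to the Frank-Wolfe gap; the adaptive schedule of Theorem 3.14 is elided in this extraction and should be used in its place.

```python
import numpy as np

def cpi_robust_evaluation(grad, lin_oracle, xi0, delta, max_iters):
    """Sketch of Algorithm 2 (conservative policy iteration / Frank-Wolfe)
    for approximate robust policy evaluation.

    lin_oracle : g -> (near-)maximizer of <g, xi> over the parameter set
    delta      : termination tolerance on the approximate Frank-Wolfe gap"""
    xi = xi0.copy()
    for _ in range(max_iters):
        g = grad(xi)
        xi_fw = lin_oracle(g)                      # direction-finding step
        fw_gap = float(np.sum(g * (xi_fw - xi)))   # approximate FW gap
        if fw_gap <= delta:
            break                                  # approximate stationarity
        # Placeholder stepsize proportional to the gap (illustrative only;
        # the paper's adaptive schedule from Theorem 3.14 applies here).
        alpha = min(1.0, 0.5 * fw_gap)
        xi = xi + alpha * (xi_fw - xi)             # move along the segment
    return xi
```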
+ },
+ {
+ "section_id": "4",
+ "parent_section_id": null,
+ "section_name": "Robust Policy Improvement",
+ "text": "We now develop an actor-critic algorithm to solve the robust policy improvement problem (3 ###reference_###) for a fixed initial state to global optimality; see Algorithm 3 ###reference_###.\nIn each iteration , Algorithm 3 ###reference_### first computes an -optimal solution of the robust policy evaluation problem (2 ###reference_###) associated with the current policy (critic) and\nthen applies a projected gradient step to find a new policy that locally improves the value function associated with the current transition kernel (actor). The critic\u2019s subproblem could be addressed with Algorithm 1 ###reference_###, for example, which outputs an -optimal solution of the robust policy evaluation problem with high probability. The actor\u2019s subproblem consists in projecting a vector onto the probability simplex which can be done efficiently [55 ###reference_55###].\nThe following assumption is essential for the main results of this section.\nThe Markov chain is irreducible for any and .\nA similar reasoning as in Remark 3.13 ###reference_heorem13### shows that the distribution mismatch coefficient is finite and strictly positive under Assumption 2 ###reference_mption2###.\nRecall now form Remark 3.4 ###reference_heorem4### and the surrounding discussion that constitutes a rational function of that is defined on a neighborhood of . The following lemma establishes several desirable properties this value function. In the remainder of this section, we frequently use the constants and .\nSuppose that Assumption 2 ###reference_mption2### holds. Then, for every there exists an open neighborhood of such that any point in has a (Frobenius) distance of at most from some point in , and the value function satisfies the following conditions for every .\nis -Lipschitz continuous and -smooth in on .\nfor all\n\nBy [54 ###reference_54###, Lemma 3], is -Lipschitz continuous and -smooth on . In addition, for all thanks to [1 ###reference_1###, Lemma 4.1]. As is continuous and rational in and on a neighborhood of and as is compact, Berge\u2019s maximum theorem [5 ###reference_5###, pp. 115-116] implies that is continuous in on a neighborhood of .\nThe claim then follows because both and are compact.\nThroughout the rest of this section we use as a shorthand for the worst-case value function , which is defined for all . This helps us to avoid clutter.\nWe henceforth refer to as the primal function.\nIn addition, we let be an optimal solution of the policy improvement problem (3 ###reference_###).\nThe primal function generically fails to be differentiable. It is thus useful to approximate by its Moreau envelope parametrized by , which is defined through \nThe following lemma establishes useful properties of the primal function and its Moreau envelope .\nThe following hold.\nis -weakly convex and -Lipschitz continuous on .\nIf , then is convex and differentiable. If additionally for some , then there exists such that and\n\nAs for Assertion (i) ###reference_1###, note first that is -Lipschitz continuous on thanks to [34 ###reference_34###, Lemma 4.3], which applies because of the -Lipschitz continuity of established in Lemma 4.1 ###reference_heorem1###(i) ###reference_1###.\nSimilarly, [49 ###reference_49###, Lemma 3.3] implies that inherits -weak convexity from the -smoothness of established in Lemma 4.1 ###reference_heorem1###(i) ###reference_1###. Assertion (ii) ###reference_2### then holds because of [15 ###reference_15###, Section 2.2]. 
We include a short proof to keep this paper self-contained. For ease of notation we set . Note first that is strongly convex in because is -weakly convex and because . Danskin\u2019s theorem [6 ###reference_6###, Proposition B.25] thus implies that, for any , is convex and differentiable with , where is the unique minimizer of across all . This implies that if , then .\nOn the other hand, the optimality of implies that , which is equivalent to . Hence, it follows that .\nLemma 4.3 ###reference_heorem3###(ii) ###reference_2### asserts that\nif , then\nthe -neighborhood of any approximate stationary point of the Moreau envelope contains an approximate stationary point of . Thus, approximate stationary points of can be found by searching for approximate stationary points of\nIf and , then the iterates of Algorithm 3 ###reference_### satisfy .\nLemma 4.5 ###reference_heorem5### guarantees that the iterates generated by Algorithm 3 ###reference_### satisfy\nThe proof of [54 ###reference_54###, Theorem 3.3] reveals that\nThe claim then follows from our choice of and .\nThe following lemma, which is inspired by [14 ###reference_14###, Lemma 12],\nestablishes a fundamental inequality that can be used to convert an approximate stationary point of the Moreau envelope to an approximate minimizer of\nIf Assumption 2 ###reference_mption2### holds, then we have for all .\nChoose any . By Lemma 4.3 ###reference_heorem3###(i) ###reference_1###, is -weakly convex on . Theorem 25.5 in [42 ###reference_42###] then implies that the set of points at which is differentiable is dense in and hence in . We\nfirst prove that the claimed inequality holds approximately for any point at which is differentiable.\nIn this case the subdifferential is a singleton, and a generalization of Danskin\u2019s theorem (Theorem B.7 ###reference_heorem7###) implies that for any . As Lemma 4.1 ###reference_heorem1###(ii) ###reference_2### holds in particular for , we have\nwhere the second inequality holds because\n \nThe Cauchy-Schwarz inequality then allows us to conclude that\nThe equality in the above expression holds because there exist with and and because thanks to Lemma B.5 ###reference_heorem5###. This implies that . The second inequality follows from (13 ###reference_###).\nNext, set . By Lemma 4.3 ###reference_heorem3###(ii) ###reference_2###, there is such that and Theorem B.7 ###reference_heorem7### thus implies that there exists with . We then find\nwhere the second inequality follows from the Cauchy-Schwarz inequality and our earlier insight that for all , the third inequality follows from Lemma 4.1 ###reference_heorem1###(ii) ###reference_2###, and the fourth inequality holds because and\n\nThe above reasoning implies that\nwhere the first inequality follows from (14 ###reference_###) and Lemma 4.3 ###reference_heorem3###(i) ###reference_1###,\nand the second inequality holds because . As , we thus have\nHence, if is small, the claimed gradient dominance condition holds approximately at any point where the primal function is differentiable.\nConsider now an arbitrary irrespective of whether or not is differentiable at . Let be a sequence in an open neighborhood of converging to such that is differentiable at and with for every . 
From the inequality (15 ###reference_###) established in the first part of the proof we know that\nThe claim then follows because converges to and converges to , while as well as the gradient of its Moreau envelope are continuous at .\nWith all these preparatory results at hand, we are now ready to characterize the convergence behavior of Algorithm 3 ###reference_###.\nIf Assumption 2 ###reference_mption2### holds, and , then the iterates of Algorithm 3 ###reference_### satisfy\n\nWe have\nwhere the equality exploits the definition of the primal function , while the three inequalities follow from Lemma 4.7 ###reference_heorem7###, Jensen\u2019s inequality and Lemma 4.5 ###reference_heorem5###, respectively.\nTheorem 4.9 ###reference_heorem9### implies that an -optimal solution of the robust policy improvement problem (3 ###reference_###) can be computed in iterations.\nA similar global convergence result for a projected gradient descent algorithm with access to an approximate robust policy evaluation oracle was established in [54 ###reference_54###]. However, no robust policy evaluation oracle for general non-rectangular uncertainty sets is described, and its accuracy is required to increase geometrically with the number of iterations of the algorithm. The convergence proof in [54 ###reference_54###] also relies on the implicit assumption that the set of worst-case transition kernels for any given policy is finite, which would be difficult to check in practice. In contrast, Theorem 4.9 ###reference_heorem9### does not rely on such an assumption."
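A minimal sketch of Algorithm 3, assuming a critic `robust_eval` that returns a (near-)worst-case kernel for the current policy (e.g. via the projected Langevin dynamics sketch above) and an actor gradient oracle `policy_grad` for a fixed kernel. The sort-based simplex projection is the standard routine referenced in the text; the stepsize `eta` is left abstract.

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto the probability simplex (sort-based)."""
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, len(y) + 1)
    k = ks[u - (css - 1.0) / ks > 0][-1]   # largest feasible support size
    tau = (css[k - 1] - 1.0) / k
    return np.maximum(y - tau, 0.0)

def actor_critic(policy_grad, robust_eval, pi0, eta, num_iters):
    """Sketch of Algorithm 3: alternate a robust-evaluation critic with a
    projected policy-gradient actor (costs are minimized, hence descent)."""
    pi = pi0.copy()
    for _ in range(num_iters):
        P_worst = robust_eval(pi)                         # critic
        g = policy_grad(pi, P_worst)                      # actor gradient
        pi = np.apply_along_axis(project_simplex, 1, pi - eta * g)
    return pi
```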
+ },
+ {
+ "section_id": "5",
+ "parent_section_id": null,
+ "section_name": "Numerical Experiments",
+ "text": "We assess the performance of the proposed algorithms on standard test problems: A stochastic GridWorld problem [47 ###reference_47###], randomly generated Garnet MDPs [3 ###reference_3###], and a machine replacement problem [16 ###reference_16###].\nSections 5.1 ###reference_### and 5.2 ###reference_### focus on robust policy evaluation. Section 5.1 ###reference_### first compares the solution qualities of the projected Langevin dynamics algorithm (PLD, Algorithm 1 ###reference_###) and the conservative policy iteration algorithm (CPI, Algorithm 2 ###reference_###) in the context of a GridWorld problem. Section 5.2 ###reference_### uses Garnet MDPs to assess the runtime performance of CPI against that of the state-of-the-art projected gradient descent algorithm for robust policy evaluation described in [54 ###reference_54###]. Section 5.3 ###reference_###, finally, focuses on a machine replacement problem and compares the actor-critic algorithm (ACA, Algorithm 3 ###reference_###) against the only existing method for robust policy improvement with non-rectangular uncertainty sets described in [58 ###reference_58###].\nAll experiments are implemented in Python, and are run on an Intel i7-10700 CPU (2.9GHz) computer with 16\nGB RAM."
+ },
+ {
+ "section_id": "5.1",
+ "parent_section_id": "5",
+ "section_name": "Stochastic GridWorld: Rectangular and Non-Rectangular Uncertainty Sets",
+ "text": "The purpose of the first experiment is to show that PLD outputs the same policy value as CPI when the uncertainty set is rectangular but may output a higher policy value than CPI otherwise. Our experiment is based on a stylized GridWorld problem, which is widely studied in reinforcement learning [47 ###reference_47###]. Specifically, the state space comprises the cells of a grid, and the action space comprises the directions \u201cup,\u201d \u201cdown,\u201d \u201cleft,\u201d and \u201cright.\u201d An agent moves across the grid with the aim to reach the Goal State in cell (in the top left corner) while avoiding the Bad State in cell (in the bottom right corner). If the agent resides in cell and selects action , then she moves to cell with probability . The agent incurs a cost of in the Bad State, a cost of in the Goal State, and a cost of in any other state. The initial state is assumed to follow the uniform distribution on , which we denote as , and the discount factor is set to . We also assume that the agent\u2019s knowledge is captured by an uncertainty set , which is defined as some neighborhood of a reference transition kernel . In the following we define as the set of all cells adjacent to . We set if is the cell adjacent to in direction , if is any other cell adjacent to , if , and otherwise. If there is no cell adjacent to in direction , then we set if is any cell adjacent to , if , and otherwise.\nOur goal is to compute the worst-case net present cost of the policy that selects actions randomly from the uniform distribution on irrespective of the current state.\nGradient-based methods such as PLD or CPI can be used to compute even if the initial state is random. In this case, however, nature\u2019s policy gradients of the form must be replaced with . Throughout the first experiment we employ PLD with Gibbs parameter , stepsize , initial iterate corresponding to the nominal transition kernel and iterations. In addition, we use CPI with tolerance , stepsizes chosen as in Theorem 3.14 ###reference_heorem14### and initial iterate . We also work with variants of the PLD and CPI algorithms that output the best iterates found during execution. We first assume that constitutes an -rectangular uncertainty set of the form\nwith size parameter . Note that if and , then can be strictly positive even if is not adjacent to . Figure 0(a) ###reference_f1### shows the worst-case policy values output by PLD averaged over independent simulation runs and compares them against the deterministic values output by CPI. We highlight that the standard deviations of the values output by PLD range from to and are therefore practically negligible. As expected from Theorems 3.7 ###reference_heorem7### and 3.14 ###reference_heorem14###, we observe that the two algorithms are consistent. That is, if the uncertainty set is rectangular, then both PLD and CPI succeed in solving the robust policy evaluation problem to global optimality. Figure 0(b) ###reference_f2### visualizes the policy values associated with the iterates of a single simulation run of PLD, illustrating the exploratory nature of the algorithm. Specifically, we see that\nfor large the policy values oscillate around a constant level.\nsize parameter\n\niteration counter\nNext, we assume that constitutes a non-rectangular ambiguity set of the form\nwith size parameter and Hessian matrix . 
As shown in Appendix A ###reference_###, ellipsoidal uncertainty sets of this type naturally emerge when maximum likelihood estimation is used to construct statistically optimal confidence regions for . Figure 1(a) ###reference_f1### shows the worst-case policy values output by PLD (averaged over 20 independent simulation runs) and CPI. The standard deviations of the values output by PLD range from to and are thus again negligible. We observe that for PLD reports higher worst-case policy values than CPI. This suggests that the deterministic CPI method may get trapped in local maxima, while the randomized PLD method manages to escape local maxima. For the outputs of PLD and CPI match. This is to be expected from Theorem 3.14 ###reference_heorem14### because the uncertainty set converges to the -rectangular product simplex \u2014and thus becomes increasingly rectangular\u2014as grows. Figure 1(b) ###reference_f2### visualizes the policy values associated with the iterates of a single simulation run of PLD.\nWe remark that PLD can outperform CPI by up to 80% on GridWorld problems (not shown).\nTable 1 ###reference_### shows the runtimes of PLD and CPI for non-rectangular uncertainty sets of different sizes. Despite the suboptimal theoretical convergence rate, PLD is empirically faster than CPI while producing more accurate solutions for robust policy evaluation problems with non-rectangular uncertainty sets."
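For reproducibility, the following sketch builds a GridWorld reference kernel with the adjacency structure described above. The paper's exact transition probabilities are elided in this extraction, so `p_dir` (the mass on the intended direction) is an illustrative placeholder; the remaining mass is split uniformly over the other adjacent cells and the current cell.

```python
import numpy as np

def gridworld_reference_kernel(n=5, p_dir=0.7):
    """Illustrative GridWorld reference kernel on an n x n grid.
    States are s = i * n + j; actions 0..3 = up, down, left, right."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    P = np.zeros((n * n, 4, n * n))
    for i in range(n):
        for j in range(n):
            s = i * n + j
            adj = [(i + di) * n + (j + dj) for di, dj in moves
                   if 0 <= i + di < n and 0 <= j + dj < n]
            for a, (di, dj) in enumerate(moves):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    t = ii * n + jj
                    rest = [x for x in adj if x != t] + [s]
                    P[s, a, t] += p_dir
                    for x in rest:
                        P[s, a, x] += (1.0 - p_dir) / len(rest)
                else:  # wall in direction a: spread mass over neighbors and s
                    for x in adj + [s]:
                        P[s, a, x] += 1.0 / (len(adj) + 1)
    return P
```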
+ },
+ {
+ "section_id": "5.2",
+ "parent_section_id": "5",
+ "section_name": "Garnet MDPs: Rectangular Uncertainty Sets",
+ "text": "The purpose of the second experiment is to show that CPI may solve robust policy evaluation problems with -rectangular uncertainty sets faster than the state-of-the-art method for this problem class developed in [54 ###reference_54###]. We use the Generalized Average Reward Non-stationary Environment Test-bench (Garnet) [3 ###reference_3###, 9 ###reference_9###] to generate random reference transition kernels with a prescribed number of states and actions and with a prescribed branching parameter . By definition, determines the proportion of states that are reachable from any given state-action pair in one single transition. We set the branching parameter to and the discount factor to , and we generate the cost corresponding to any and randomly from the uniform distribution on . The initial state follows the uniform distribution over . In addition, we fix a policy defined through , where is sampled uniformly from for every and .\nFinally, we assume that constitutes an -rectangular uncertainty set of the form\nWe solve the resulting instances of the robust policy evaluation problem (2 ###reference_###) with CPI and with [54 ###reference_54###, Algorithm 2], a state-of-the-art projected gradient descent method.\nThe theoretical analysis in Section 3.2 ###reference_### implies that CPI requires iterations to find a -optimal solution for problem (2 ###reference_###). Indeed, if we set , then the output of Algorithm 2 ###reference_### satisfies by Corollary 3.15 ###reference_heorem15###, and the algorithm terminates within iterations by Lemma 3.23 ###reference_heorem23###. Similarly, one can show that [54 ###reference_54###, Algorithm 2] requires iterations to find a -optimal solution for problem (2 ###reference_###); see [54 ###reference_54###, Theorem 4.4]. Thus, the iteration complexity of [54 ###reference_54###, Algorithm 2] includes an extra factor , which grows polynomially with the numbers of states and actions, but lacks a dimensionless factor .\nNote that the iteration complexities of both methods scale with the squared distribution mismatch coefficient . As follows the uniform distribution on , the discounted state visitation distribution must be averaged over . Hence, one can use the trivial bounds and for all to show that . The iteration complexities of CPI and [54 ###reference_54###, Algorithm 2] can thus be expressed as explicit functions of the fundamental parameters , , and .\nIn the second experiment we seek -optimal solutions of (2 ###reference_###) for . To this end, we use CPI with tolerance , stepsizes chosen as in Theorem 3.14 ###reference_heorem14### and initial iterate . In addition, we use [54 ###reference_54###, Algorithm 2] with initial iterate and stepsize as suggested by [54 ###reference_54###, Theorem 4.4]. The above estimates of the iteration complexity and the distribution mismatch coefficient imply that we would have to run [54 ###reference_54###, Algorithm 2] over iterations in order to guarantee that it outputs a -optimal solution. Unfortunately, this is impractical. For example, already our smallest test problem with only states would require more than iterations. We thus use the inequality\nas a heuristic termination criterion. 
Even though it has\nno theoretical justification, this criterion ensures that [54 ###reference_54###, Algorithm 2] terminates within a reasonable amount of time and outputs value estimates similar to those of CPI, with a maximum difference of .\nThe direction-finding subproblems of CPI as well as the projection subproblems of [54 ###reference_54###, Algorithm 2] are solved with GUROBI. To faithfully assess algorithmic efficiency, we record the solver times for these most time-consuming subroutines. For all other processes we record the wall-clock time. Table 2 ###reference_### reports the overall runtimes of CPI and [54 ###reference_54###, Algorithm 2] (based on the authors\u2019 code available from GitHub at https://github.com/JerrisonWang/ICML-DRPG ###reference_###) averaged over 20 random instances with actions and increasing numbers of states.\nAs expected from the analysis of the iteration complexities, CPI is significantly faster than [54 ###reference_54###, Algorithm 2] on instances with large state spaces. The value estimates of both algorithms differ by at most , with CPI outputting a more accurate solution."
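A sketch of Garnet kernel generation as described above: for each state-action pair, a fixed proportion of states (the branching parameter) is reachable in one transition. Drawing the probabilities from a uniform Dirichlet over the sampled support is one common convention and is an assumption here, since the paper's exact recipe is elided in this extraction.

```python
import numpy as np

def garnet_kernel(S, A, branching=0.3, rng=None):
    """Random Garnet reference kernel with branching parameter `branching`,
    interpreted as the proportion of states reachable from each (s, a)."""
    rng = np.random.default_rng() if rng is None else rng
    n_next = max(1, int(round(branching * S)))
    P = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            support = rng.choice(S, size=n_next, replace=False)
            P[s, a, support] = rng.dirichlet(np.ones(n_next))
    return P
```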
+ },
+ {
+ "section_id": "5.3",
+ "parent_section_id": "5",
+ "section_name": "Machine Replacement: Non-Rectangular Uncertainty Sets",
+ "text": "The purpose of the third experiment is to assess the out-of-sample performance of different data-driven policies for MDPs with unknown transition kernels. Our experiment is based on a now standard machine replacement problem described in [16 ###reference_16###, 58 ###reference_58###]. The goal is to find a repair strategy for a machine whose condition is described by eight \u201coperative\u201d states and two \u201crepair\u201d states R1 and R2. The available actions are \u201cdo nothing\u201d or \u201crepair.\u201d The states 8, R1 and R2 incur a cost of 20, 2 and 10 per time period, respectively, whereas no cost is incurred in the other states. The discount factor is set to , and the initial state follows the uniform distribution on . In addition, we define the transition kernel as in [58 ###reference_58###, Section 6]. The optimal value of the resulting (non-robust) policy improvement problem then amounts to .\nIn the following we assume that is unknown but falls within a known structural uncertainty set . We specifically assume that some of the transition probabilities are known to vanish such that , where is an affine function, and is a hypercube of dimension . The components of represent different entries of the transition kernel that are neither known to vanish nor determined by the normalization conditions for all and . Sometimes we will additionally assume that certain transition probabilities are known to be equal, in which case reduces to a hypercube of dimension . Full details about these structural assumptions are provided in [58 ###reference_58###, Section 6].\nIn addition to structural information, there is statistical information about , that is, is indirectly observable through a history of states and actions generated under a known policy . We assume that chooses the actions \u201cdo nothing\u201d and \u201crepair\u201d in each operative state with probabilities and , respectively. In the states and R2, always chooses the action \u201crepair\u201d, and in state R1, always chooses the action \u201cdo nothing.\u201d In the following we use to denote the maximum likelihood estimator for the parameter that generates the unknown true transition kernel . Following [58 ###reference_58###, Section 5], one can use the observation history of length to construct an ellipsoidal confidence region centered at that contains with probability at least for any prescribed . It is then natural to construct an uncertainty set that amalgamates all structural and statistical information about and is guaranteed to contain the data-generating kernel with probability . A related but simpler recipe for constructing uncertainty sets using maximum likelihood estimation is sketched in Appendix A ###reference_### for illustrative purposes. Full details are provided in [58 ###reference_58###, Section 5].\nThe uncertainty set is non-rectangular, and thus the corresponding robust policy improvement problem is hard. A sequential convex optimization procedure that solves a decision rule approximation of the robust policy improvement problem is described in [58 ###reference_58###, Algorithm 4.1]. To our best knowledge, this is the only existing method for addressing robust MDPs with non-rectangular uncertainty sets. Replacing with its -rectangular or even its -rectangular hull leads to a simpler robust policy improvement problem that can be solved exactly and efficiently via dynamic programming. 
However, the resulting optimal policy is dominated by the policy output by [58 ###reference_58###, Algorithm 4.1] in that it generates up to or even higher out-of-sample net present costs, respectively, see [58 ###reference_58###, Table 3].\nUnlike [58 ###reference_58###, Algorithm 4.1], ACA (Algorithm 3 ###reference_###) uses no decision rule approximation and computes near-optimal solutions to the robust policy improvement problem of any prescribed accuracy (see Theorem 4.9 ###reference_heorem9###). We will now show numerically that the near-optimal policies found by ACA dominate the approximately optimal policies found by [58 ###reference_58###, Algorithm 4.1] in terms of out-of-sample net present cost under . Throughout the experiment we employ ACA with iteration number and stepsize . The critic\u2019s subproblem computes near-optimal solutions to the robust policy evaluation problem by using PLD (Algorithm 1 ###reference_###) with initial iterate , Gibbs parameter , stepsize and iteration number . We work with a variant of PLD that outputs the best iterate found during execution.\nTables 3 ###reference_### and 4 ###reference_### compare the out-of-sample costs of the policies found by ACA and [58 ###reference_58###, Algorithm 4.1] under the assumption of full () and partial () structural information, respectively, as a function of the length of the observation history and the coverage probability of the uncertainty set. The out-of-sample costs corresponding to [58 ###reference_58###, Algorithm 4.1] in Table 3 ###reference_### are directly borrowed from [58 ###reference_58###, Table 3]. Conversely, the out-of-sample costs corresponding to [58 ###reference_58###, Algorithm 4.1] in Table 4 ###reference_### are computed using the authors\u2019 source code in C++ (private communication).\nTable 3 ###reference_### shows that when the transition kernel has only degrees of freedom, both policies generate an out-of-sample cost close to the optimal value of the classical policy improvement problem under the unknown true transition kernel . Moreover, the out-of-sample costs of the two policies differ at most by . These observations are not surprising because kernels with only degrees of freedom are easy to learn and because the uncertainty set is small already for small sample sizes . In this case, the decision rule approximation underlying [58 ###reference_58###, Algorithm 4.1] is highly accurate. Algorithm 3 ###reference_###, which is designed for uncertainty sets of arbitrary size and solves the critic\u2019s subproblem with a randomized PLD scheme, slightly outperforms the benchmark method only for the smallest sample sizes considered.\nTable 4 ###reference_### shows that when the transition kernel has degrees of freedom, then Algorithm 3 ###reference_### outperforms [58 ###reference_58###, Algorithm 4.1] uniformly across all values of and . The advantage is most significant when the uncertainty set is large (i.e., for ).\nWe also highlight that the average wall-clock time for solving all problem instances with Algorithm 3 ###reference_### amounts to seconds. The average solver time consumed by [58 ###reference_58###, Algorithm 4.1], on the other hand, amounts to seconds."
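For completeness, the out-of-sample metric reported in Tables 3 and 4 can be sketched as the expected discounted cost of a learned policy under the (in practice unknown) true kernel; the following self-contained version uses the same illustrative array conventions as the earlier sketches.

```python
import numpy as np

def out_of_sample_cost(P_true, pi, c, gamma, rho):
    """Out-of-sample net present cost E_rho[v(s)] of policy pi under the
    true kernel P_true, with rho the initial state distribution."""
    S = P_true.shape[0]
    P_pi = np.einsum("sa,sat->st", pi, P_true)
    c_pi = np.einsum("sa,sa->s", pi, c)
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, c_pi)
    return float(rho @ v)
```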
64
+ }
65
+ ],
66
+ "appendix": [
67
+ {
68
+ "section_id": "Appendix 1",
69
+ "parent_section_id": null,
70
+ "section_name": "Appendix A Construction of Uncertainty Sets via Maximum Likelihood Estimation",
71
+ "text": "We now review a standard procedure for constructing an uncertainty set for the transition kernel of an MDP as described in [58 ###reference_58###, Section 5]. This uncertainty set is statistically optimal in a precise sense but fails to be rectangular.\nAssume for ease of exposition that it is possible to move from any state of the MDP to any other state in one single transition, that is, all entries of the unknown transition kernel are strictly positive. The uncertainty set can thus be expressed as the image of a solid parameter set of dimension under an affine function . Specifically, there exists a bijection , and any such bijection can be used to construct a valid function defined through for all , and , and for all and . The largest imaginably uncertainty set of all possible transition kernels can then be expressed as the image of the parameter set\nunder . In the following we assume that the decision maker has access to a state-action observation history of the MDP generated under some known policy and the unknown true transition\nkernel encoded by .\nThe log-likelihood of observing this history under any is given by\nis an irrelevant constant independent of , and represents the initial state distribution. One can show that the maximum likelihood estimator that maximizes over corresponds to the kernel of empirical transition probabilities [58 ###reference_58###, Remark 6]. This means that coincides with number of observed transitions from to , normalized by the length of the observation history. One can use the maximum likelihood estimator as well as the log-likelihood function to construct a confidence set for \nIndeed, this set contains with probability asymptotically for large if is set to one half of the -quantile of the chi-squared distribution with degrees of freedom [58 ###reference_58###, Theorem 5]. This statistical guarantee persists if we approximate the log-likelihood function by its second-order Taylor expansion\nOne can show that, as grows, the scaled Hessian matrix converges in probability to the Fisher information matrix, which we denote as [10 ###reference_10###, Section 2]. In addition, the scaled estimation error converges in distribution to the normal distribution with mean and covariance matrix [10 ###reference_10###, Theorem 2.2].\nA generalization of the classical Cram\u00e9r-Rao inequality ensures that the covariance matrix of any unbiased estimator for is bounded below by in Loewner order asymptotically for large [33 ###reference_33###, Remark 7.9]. In conjunction, these findings suggest that constitutes the smallest possible -confidence set for asymptotically for large . The uncertainty set therefore enjoys a statistical efficiency property. However, it fails to be rectangular [58 ###reference_58###, pp. 173]."
72
+ },
73
+ {
74
+ "section_id": "Appendix 2",
75
+ "parent_section_id": null,
76
+ "section_name": "Appendix B Auxiliary Lemmas",
77
+ "text": "The following elementary results will be used throughout the main text.\nFor any and we have\nfor all ,\nfor all and ,\nfor all and\n\nAs for Assertion (i) ###reference_1###, we have\nwhere the second equality follows from the law of total expectation and (1b ###reference_###), and the last equality follows from the definition of .\nNext, we prove Assertion (iii) ###reference_3###, which will help us to prove Assertion (ii) ###reference_2###. By the definition of we have\nwhere the second equality holds because is a Markov chain and because is independent of this Markov chain conditional on under . The third equality follows from law of total expectation and (1b ###reference_###) together with an index shift , the fourth equality follows from the definition of , and the last equality follows from Assertion (i) ###reference_1###.\nAs for Assertion (ii) ###reference_2###, finally, we have\nwhere the second equality follows from the law of total expectation and (1a ###reference_###), the third equality follows from the definition of , and the fourth equality holds thanks to Assertion (iii) ###reference_3###.\nFor any and we have\n\nWe have\nwhere the third equality follows from Lemma B.1 ###reference_heorem1###(ii) ###reference_2### and the fact that , and the last equality follows from the definition of .\nWe have for any\nFor every we have\nwhere the inequality holds because , which implies that \u200b, and .\nBy the definition of the Frobenius norm we then have Thus, the claim follows.\nInspired by [49 ###reference_49###, Lemma 3], we now prove a generalization of Danskin\u2019s theorem for optimization problems with a smooth but not necessarily convex objective functions.\nLet be an open convex set, \nan arbitrary compact set and a continuous function such that is -smooth in for each and some . In addition, suppose that is continuous in for each . Then, the optimal value function is -weakly convex, and its subdifferential is given by\n\nLet ,\nand observe that\nwhere the equality uses the definition of , and the two inequalities follow from the Cauchy-Schwarz inequality and the -smoothness of respectively. Thus, is convex in [4 ###reference_4###, Proposition 17.10], which in turn implies that is -weakly convex in .\nBy Danskin\u2019s classical theorem for convex objective functions [6 ###reference_6###, Proposition B.25], the value function is convex, and its subdifferential is given by\nAs and , we then find\nThus, the claim follows."
78
+ }
79
+ ],
80
+ "tables": {
81
+ "1": {
82
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T1.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T1.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.5\">\u200b<span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.4.4.5.1\">Runtime [s]</span>\n</th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_right ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T1.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T1.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.8.8.5\">PLD</th>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S5.T1.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S5.T1.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S5.T1.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_r ltx_border_t\" id=\"S5.T1.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.12.12.5\">CPI</th>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T1.12.12.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T1.16.2.1\" style=\"font-size:90%;\">Table 1</span>: </span><span class=\"ltx_text\" id=\"S5.T1.14.1\" style=\"font-size:90%;\">Runtimes of PLD and CPI for non-rectangular uncertainty sets. For PLD we report both means and standard deviations (in parenthesis) over\u00a0 simulation runs.</span></figcaption>\n</figure>",
83
+ "capture": "Table 1: Runtimes of PLD and CPI for non-rectangular uncertainty sets. For PLD we report both means and standard deviations (in parenthesis) over\u00a0 simulation runs."
84
+ },
85
+ "2": {
86
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T2.12\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T2.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.4.4.5.1\">Runtime [s]</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T2.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T2.8.8\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.5\"><cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib54\" title=\"\">54</a>, Algorithm 2]</cite></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.7.7.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.8.8.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.12.12\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.12.12.5\">CPI</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.10.10.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.11.11.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T2.12.12.4\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T2.18.3.1\" style=\"font-size:90%;\">Table 2</span>: </span><span class=\"ltx_text\" id=\"S5.T2.16.2\" style=\"font-size:90%;\">Runtimes of the projected gradient descent algorithm developed in\u00a0<cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib54\" title=\"\">54</a>]</cite> and CPI on Garnet MDP instances with -rectangular uncertainty sets with .</span></figcaption>\n</figure>",
87
+ "capture": "Table 2: Runtimes of the projected gradient descent algorithm developed in\u00a0[54] and CPI on Garnet MDP instances with -rectangular uncertainty sets with ."
88
+ },
89
+ "3": {
90
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T3.25\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5\">\n<th class=\"ltx_td ltx_nopad ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.1.1.1\"><svg height=\"24.91\" overflow=\"visible\" version=\"1.1\" width=\"53.06\"><g transform=\"translate(0,24.91) scale(1,-1)\"><path d=\"M 0,24.91 53.06,0\" stroke=\"black\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,5.96) scale(1, -1)\"><foreignobject height=\"5.96\" overflow=\"visible\" width=\"22.14\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.1.1.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S5.T3.1.1.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.pic1.1.1.1.1\">\u2003</span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(26.53,15.99)\"><g transform=\"translate(0,8.92) scale(1, -1)\"><foreignobject height=\"8.92\" overflow=\"visible\" width=\"26.53\">\n<span class=\"ltx_inline-block\" id=\"S5.T3.1.1.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S5.T3.1.1.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T3.1.1.1.pic1.2.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g></g></svg></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.5\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T3.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.9.9.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.10.10.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.15.15.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.19.19.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.20.20.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.25.25\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.22.22.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.25.25.5\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T3.29.2.1\" style=\"font-size:90%;\">Table 3</span>: </span><span class=\"ltx_text\" id=\"S5.T3.27.1\" style=\"font-size:90%;\">\nOut-of-sample costs of the policies found by ACA and <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib58\" title=\"\">58</a>, Algorithm\u00a04.1]</cite> (in parenthesis) under full structural information (kernel with degrees of freedom).\n</span></figcaption>\n</figure>",
91
+ "capture": "Table 3: \nOut-of-sample costs of the policies found by ACA and [58, Algorithm\u00a04.1] (in parenthesis) under full structural information (kernel with degrees of freedom).\n"
92
+ },
93
+ "4": {
94
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S5.T4.25\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S5.T4.5.5\">\n<th class=\"ltx_td ltx_nopad ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.1\"><svg height=\"24.91\" overflow=\"visible\" version=\"1.1\" width=\"53.06\"><g transform=\"translate(0,24.91) scale(1,-1)\"><path d=\"M 0,24.91 53.06,0\" stroke=\"black\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,5.96) scale(1, -1)\"><foreignobject height=\"5.96\" overflow=\"visible\" width=\"22.14\">\n<span class=\"ltx_inline-block\" id=\"S5.T4.1.1.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S5.T4.1.1.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.1.1.pic1.1.1.1.1\">\u2003</span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(26.53,15.99)\"><g transform=\"translate(0,8.92) scale(1, -1)\"><foreignobject height=\"8.92\" overflow=\"visible\" width=\"26.53\">\n<span class=\"ltx_inline-block\" id=\"S5.T4.1.1.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S5.T4.1.1.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S5.T4.1.1.1.pic1.2.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g></g></svg></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.4.4.4\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S5.T4.5.5.5\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S5.T4.10.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.6.6.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.8.8.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.9.9.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.10.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.15.15\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.13.13.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.14.14.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.15.15.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.20.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.16.16.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.18.18.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.19.19.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.20.20.5\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.25.25\">\n<td class=\"ltx_td ltx_align_center 
ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.22.22.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.23.23.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.24.24.4\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.25.25.5\"></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\"><span class=\"ltx_text\" id=\"S5.T4.29.2.1\" style=\"font-size:90%;\">Table 4</span>: </span><span class=\"ltx_text\" id=\"S5.T4.27.1\" style=\"font-size:90%;\">Out-of-sample costs of the policies found by ACA and <cite class=\"ltx_cite ltx_citemacro_cite\">[<a class=\"ltx_ref\" href=\"#bib.bib58\" title=\"\">58</a>, Algorithm\u00a04.1]</cite> (in parenthesis) under partial structural information (kernel with degrees of freedom).\n</span></figcaption>\n</figure>",
95
+ "capture": "Table 4: Out-of-sample costs of the policies found by ACA and [58, Algorithm\u00a04.1] (in parenthesis) under partial structural information (kernel with degrees of freedom).\n"
96
+ }
97
+ },
98
+ "image_paths": {},
99
+ "validation": true,
100
+ "references": [],
101
+ "url": "http://arxiv.org/html/2305.19004v3"
102
+ }
20240123/2306.02869v3.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2306.05739v4.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2306.08877v3.json ADDED
@@ -0,0 +1,491 @@
1
+ {
2
+ "title": "Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment",
3
+ "abstract": "Text-conditioned image generation models often generate incorrect associations between entities and their visual attributes. This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image. As one example, a query like \u201ca pink sunflower and a yellow flamingo\u201d may incorrectly produce an image of a yellow sunflower and a pink flamingo. To remedy this issue, we propose SynGen, an approach which first syntactically analyses the prompt to identify entities and their modifiers, and then uses a novel loss function that encourages the cross-attention maps to agree with the linguistic binding reflected by the syntax. Specifically, we encourage large overlap between attention maps of entities and their modifiers, and small overlap with other entities and modifier words. The loss is optimized during inference, without retraining or fine-tuning the model. Human evaluation on three datasets, including one new and challenging set, demonstrate significant improvements of SynGen compared with current state of the art methods. This work highlights how making use of sentence structure during inference can efficiently and substantially improve the faithfulness of text-to-image generation.111We make our code publicly available https://github.com/RoyiRa/Syntax-Guided-Generation",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Diffusion models for text-conditioned image generation produce impressive realistic images [1 ###reference_1###, 2 ###reference_2###, 3 ###reference_3###, 4 ###reference_4###]. Users control the generated content through natural-language text prompts that can be rich and complex. Unfortunately, in many cases the generated images are not faithful to the text prompt [5 ###reference_5###, 6 ###reference_6###]. Specifically, one very common failure mode results from improper binding, where modifier words fail to influence the visual attributes of the entity-nouns to which they are grammatically related.\nAs an illustration, consider the prompt \u201ca pink sunflower and a yellow flamingo\u201d.\nGiven this prompt, current models often confuse the modifiers of the two entity-nouns, and generate an image of a yellow sunflower and a pink flamingo (Fig. 1 ###reference_###, bottom left, semantic leak in prompt). In other cases, the attribute may semantically leak to areas in the image that are not even mentioned in the prompt (Fig. 1 ###reference_###, bottom center, semantic leak outside prompt) or the attribute may be completely neglected and missed from the generated image (Fig. 1 ###reference_###, bottom right, attribute neglect). Such mismatch can be addressed by providing non-textual control like visual examples [7 ###reference_7###, 8 ###reference_8###], but the problem of correctly controlling generated images using text remains open.\nA possible reason for these failures is that diffusion models use text encoders like CLIP [9 ###reference_9###], which are known to fail to encode linguistic structures [10 ###reference_10###]. This makes the diffusion process \u201cblind\" to the linguistic bindings, and as a result, generate objects that do not match their attributes.\nBuilding on this intuition, we propose to make the generation process aware of the linguistic structure of the prompt.\nSpecifically, we suggest to intervene with the generation process by steering the cross-attention maps of the diffusion model. These cross-attention map serve as a link between prompt terms and the set of image pixels that correspond to these terms.\nOur linguistics-based approach therefore aims to generate an image where the visual binding between objects and their visual attributes adheres to the syntactic binding between entity-nouns and their modifiers in the prompt.\nSeveral previous work devised solutions to improve the relations between prompt terms and visual components, with some success [11 ###reference_11###, 12 ###reference_12###, 13 ###reference_13###]. They did not focus on the problem of modifier-entity binding. Our approach specifically addresses this issue, by constructing a novel loss function that quantifies the distance between the attention patterns of grammatically-related (modifier, entity-noun) pairs, and the distance between pairs of unrelated words in the prompt. We then optimize the latent denoised image in the direction that separates the attention map of a given modifier from unrelated tokens and bring it closer to its grammatically-related noun. We show that by intervening in the latent code, we markedly improve the pairing between attributes and objects in the generated image while at the same time not compromising the quality of the generated image.\nWe evaluate our method on three datasets. 
(1) For a natural-language setting, we use the natural compositional prompts in the ABC-6K benchmark [13 ###reference_13###]; (2) To provide a direct comparison with the previous state of the art [11 ###reference_11###], we replicate prompts from their setting; (3) Finally, to evaluate binding in a challenging setting, we design a set of prompts that includes a variety of modifiers and entity-nouns. On all datasets, we find that SynGen shows significant improvement in performance based on human evaluation, sometimes doubling the accuracy. Overall, our work highlights the effectiveness of incorporating linguistic information into text-conditioned image generation models and demonstrates a promising direction for future research in this area.\nThe main contributions of this paper are as follows: (1) A novel method to enrich the diffusion process with syntactic information, using inference-time optimization with a loss over cross-attention maps; (2) A new challenge set of prompts containing a rich number and variety of modifiers and entities."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Syntax-Guided Generation",
15
+ "text": "###figure_1### Our approach, which we call SynGen, builds on two key ideas. First, it is easy to analyze the syntactic structure of natural language prompts to identify bindings of entity-nouns and their modifiers. Second, one can steer the generation of images to adhere to these bindings by designing an appropriate loss over the cross-attention maps of the diffusion model.\nWe describe the two steps of our approach: extracting syntactic bindings and then using them to control generation."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Identifying entity-nouns and their modifiers",
21
+ "text": "To identify entity-nouns and their corresponding modifiers, we traverse the syntactic dependency graph, which defines the syntactic relation between words in the sentence.\nConcretely, we parse the prompt using spaCy\u2019s transformer-based dependency parser [14 ###reference_14###] and identify all entity-nouns (either proper-nouns or common-nouns) that are not serving as direct modifiers of other nouns.\nThese are the nouns that correspond to objects in the generated image. We then recursively collect all modifiers222We consider modifiers from the set {amod, nmod, compound, npadvmod, acomp, conj}. We exclude conj when determining the top-level nouns. of the noun into its modifier set.\nThe set of modifier-labels includes a range of syntactic relations between nouns and their modifiers, such adjectivial modification (amod; \u201cthe regal dog\u201d), compounds (compound; \u201cthe treasure map\u201d), nominal modification through an intervening marker, adverbial modifiers (npadvmod; \u201cA watermelon-styled chair\u201d), adjectivial complement (acomp; \u201cThe apple is blue\u201d), and coordination between modifiers (conj; \u201cA black and white dog\u201d)."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Controlling generation with language-driven cross-attention losses",
27
+ "text": "###figure_2### Consider a pair of a noun and its modifier. We expect the cross-attention map of the modifier to largely overlap with the cross-attention map of the noun, while remaining largely disjoint with the maps corresponding to other nouns and modifiers. To encourage the denoising process to obey these spatial relations between the attention maps, we design a loss that operates on all cross-attention maps. We then use this loss with\na pretrained diffusion model during inference. Specifically, we optimize the noised latents by taking a gradient step to reduce that loss. See illustration in Fig. 2 ###reference_###. Fig. 3 ###reference_### illustrates the effect of the loss over the cross-attention maps.\nConsider a text prompt with tokens, for which our analysis extracted noun-modifier sets .\nLet represent all pairs of tokens between the noun root and its modifier descendants in the -th set . For illustration, the set of \u201cA black striped dog\u201d contains two pairs (\u201cblack\u201d, \u201cdog\u201d) and (\u201cstriped\u201d, \u201cdog\u201d).\nNext, denote by the attention maps of all tokens in the prompt, and denote by\n a measure of distance (lack of overlap) between attention maps and .\nOur first loss aims to minimize that distance (maximize the overlap) over all pairs of modifiers and their corresponding entity-nouns ,\nWe also construct a loss that compares pairs of modifiers and entity-nouns with the remaining words in the prompt, which are grammatically unrelated to these pairs. In other words, this loss is defined between words within the (modifiers, entity-nouns) set and words outside of it. Formally, let represent the set of unmatched words obtained by excluding the words in from the full set of words and is the corresponding attention map for a given unrelated word . The following loss encourages moving apart grammatically-unrealted pairs of words:\nOur final loss combines the two loss terms:\nFor a measure of distance between attention maps we use a symmetric Kullback-Leibler divergence\n, where , are attention maps normalized to a sum of 1, and are generic indices, and .\nOur test-time optimization approach resembles the one of [11 ###reference_11###], which defined a loss over the cross-attention maps to update the latents at generation time. However, their loss aims to maximize the presence of the smallest attention map at a given timestep to guarantee a set of selected tokens is included in the generated image, and our loss depends on pairwise relations of linguistically-related words and aims to align the diffusion process to the linguistic-structure of the prompt."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "The workflow",
33
+ "text": "We use the loss of Eqs 1-3 to intervene in the first 25 out of 50 denoising steps.\nEmpirically, using a smaller number of steps did not correct well improper binding, and using a larger number generated blurred images, as detailed in Appendix B ###reference_###. In each of the first 25 steps, a pretrained denoiser (U-Net) was first used to denoise the latent variable . Then, we obtained the cross-attention maps as in [15 ###reference_15###]. Next, we used the loss to update the latent representation with a gradient step .\nFinally, the U-Net architecture denoises the updated latent variable for the next timestep."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Experiments",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Compared baseline methods",
45
+ "text": "We compare SynGen with three baseline methods. (1) Stable Diffusion 1.4 (SD) [1 ###reference_1###]; (2) Structured Diffusion [13 ###reference_13###], extracts noun-phrases from the prompt and embeds them separately, to improve the mapping of the semantics in the cross-attention maps; and (3) Attend-and-Excite (A&E) [11 ###reference_11###], a method that given a predetermined set of tokens, updates the latent a certain number of timesteps, to eventually incorporate these tokens in the generated image. To automate token selection in A&E, we follow the recommendation by the authors to select the nouns using a part-of-speech tagger."
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Datasets",
51
+ "text": "We evaluate our approach using two existing benchmark datasets, and one new dataset that we designed to challenge methods in this area.\nThis benchmark consists of 3.2K natural compositional prompts from MSCOCO [16 ###reference_16###], which were manually written by humans, using natural language and contain at least two color words modifying different noun-entities. In addition, the dataset contains 3.2K counterparts, where the position of modifiers in the original prompts are swapped. (e.g., \u201ca white bench in front of a green bush\u201d and \u201ca green bench in front of a white bush\u201d). We randomly sample 600 prompts.\nOriginally introduced to evaluate the A&E method which focuses on entity-neglect, this dataset also showed that A&E improved over previous work in terms of improper binding.\nPrompts in this dataset belong to three categories: (1) \u201ca {color} {in-animate object} and a {color} {in-animate object}\u201d; (2) \u201ca {color} {in-animate object} and an {animal}\u201d; (3) \u201can {animal} and an {animal}\u201d. Following the split in A&E, we sample 33 prompts from type (1) and 144 prompts from type (2), but exclude type (3), as it does not contain modifiers. This is a very simple dataset, which we use to facilitate direct comparison with previous work.\nThe above two datasets are limited in terms of number and types of modifiers, and the number of entity-nouns per prompt. To challenge our model, we design a dataset consisting of coordination sentences, in similar fashion to the dataset from A&E, but with strong emphasis on the number and types of modifiers per prompt. Specifically, we aim to compare the models with prompts that contain numerous and uncommon modifiers, creating sentences that would not usually be found in natural language or training data, such as \u201ca pink spotted panda\u201d. DVMP was designed with two key aspects in mind:\nExpanding the set of modifiers: We have extended the number of modifiers referring to an entity-noun from one to up to three. For instance, \u201ca blue furry spotted bird\u201d. We also added types of modifiers besides colors, including material patterns (\u201ca metal chair\u201d), design patterns (\u201ca checkered shoe\u201d), and even nouns modifying other noun-entities (\u201ca baby zebra\u201d).\nVisually verifiable and semantically coherent: The modifiers selected for DVMP are visually verifiable, with a deliberate avoidance of nuanced modifiers. For instance, \u201cbig\u201d is a relative modifier dependent on its spatial context, and emotional states, such as in the prompt \u201can excited dog\u201d, are largely excluded due to their subjective visual interpretation. Simultaneously, DVMP maintains semantic coherence by appropriately matching modifiers to noun-entities, thereby preventing the creation of nonsensical prompts like \u201ca sliced bowl\u201d or \u201ca curved zebra\u201d.\nIn total, we have generated 600 prompts through random sampling. For a comprehensive description of the dataset\u2019s creation, see Appendix F ###reference_###."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Human Evaluation",
57
+ "text": "We evaluate image quality using Amazon Mechanical Turk (AMT). Raters were provided with a multiple-choice task, consisting of a single text prompt and four images, each generated by the baselines and SynGen. Raters could also indicate that all images are \u201cequally good\u201d or \u201cequally bad\u201d. We provided each prompt and its corresponding generations to three raters, and report the majority decision. In cases where there is no majority model winner, we count it toward \u201cno majority winner\u201d.\nWe evaluate generated images in two main aspects: (1) concept separation (sometimes known as editability [17 ###reference_17###]) and (2) visual appeal. Concept separation refers to the ability of the model to distinctly depict different concepts or objects in the generated image. The effectiveness of concept separation is assessed by asking raters, \u201cWhich image best matches the given description?\u201d. To asses visual quality, raters were asked \u201cWhich image is more visually appealing?\u201d. To maintain fairness and reduce biases, the order of images was randomized in each task. Full rater instructions and further details are provided in Section G.1 ###reference_### of the supplemental materials.\nWe also experimented automatic evaluation, but find its quality subpar. For standardized evaluation purposes, it is detailed in Section G.2 ###reference_###.\nIn addition to a multiple-choice task, we evaluate concept separation using the following key metrics: (1) Proper Binding, quantifying how well the model associates attributes with their corresponding objects; (2) Improper Binding, measuring the instances where attributes are incorrectly linked to unrelated objects; and (3) Entity Neglect, capturing the frequency with which the model omits entities specified in the prompt.\nTo this end, we randomly select 200 prompts each from the DVMP and ABC-6K datasets, while using all 177 prompts available in the A&E dataset. Human evaluators were asked to mark if instances have correct or incorrect attribute-object mapping. Importantly, incorrect mappings are counted on a per-attribute basis\u2014multiple incorrect mappings of a single attribute are considered one violation. For example, in the prompt \u201cthe white dog chased the cat up the tree\u201d, if the modifier \u201cwhite\u201d is incorrectly mapped to both \u201ccat\u201d and \u201ctree\u201d, it is counted as one instance of violation. Evaluators also identify the number of entities mentioned in the prompt that are subsequently depicted in the generated image.\nBased on these counts, we define the metric of Proper Binding as the ratio of correctly mapped attributes to the total number of attributes. Similarly, Improper Binding is defined as the ratio of incorrectly mapped attributes to the total number of attributes, while Entity Neglect is the complement of the ratio of mentioned entities that are depicted in the generated image to the total number of entities in the prompt. Rater instructions are provided in Section G.1 ###reference_###."
58
+ },
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Results",
63
+ "text": ""
64
+ },
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Quantitative Results",
69
+ "text": "Table 1 ###reference_### provides results of the comparative experiment. SynGen is consistently ranked first in all three datasets, and by a large margin, sometimes double the approval rate of the second ranked method, A&E. These results are observed for concept separation, which measures directly the semantic leak, and for visual appeal.\nThe high number of \u201cno winner\u201d cases reflects the large difficulty of some of the prompts, for which no method provides good enough generated images.\nPopulation results before majority aggregation are given in Section G.1 ###reference_### of the supplemental material. Comparisons with StableDiffusion are given in Fig. 19 ###reference_### of the supplemental.\nTable 2 ###reference_### provides results of the individual experiment. We find that SynGen outperforms all models by a landslide in both proper and improper binding and is on par with state-of-the-art on entity neglect [11 ###reference_11###], despite not directly tackling this problem."
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Qualitative Analysis",
75
+ "text": "Figures 4\u20136 provide qualitative examples from the three datasets, comparing SynGen with the two strongest baselines.\n###figure_3### ###figure_4### ###figure_5### The qualitative examples illustrate several failure modes of our baselines. First, semantic leak in prompt, occurs when a modifier of an entity-noun \u201cleaks\u201d onto a different entity-noun in the prompt, as shown in Fig. 4 ###reference_###, for the prompt \u201ca pink clock and a brown chair\u201d, in columns 3 and 4. In this case, all baselines incorrectly apply pink hues to the chair, despite the prompt explicitly defining it as brown. A more nuanced variant of this issue is semantic leak out of prompt, when a modifier is assigned to an entity-noun that is not mentioned in the prompt. For instance, the \u201cspiky\u201d attribute in \u201ca spiky bowl and a green cat\u201d leaks to a plant, which is not in the prompt, or the green coloration in the background of the images generated by the baselines, as seen in columns 5 and 6 in Fig. 5 ###reference_###.\nAttribute neglect occurs when a modifier from the prompt is absent from the generated image. As exhibited in Fig. 4 ###reference_###, for \u201ca frog and a brown apple\u201d, both baselines do not include a brown color at all.\nEntity casting is another failure type where a modifier is treated as a standalone entity, a phenomenon commonly observed with noun modifiers. For example, the prompt \u201ca wooden crown and a furry baby rabbit\u201d (column 1 in Fig. 5 ###reference_###) has all methods, apart from ours, generate human infants. Presumably, this occurs because \u201cbaby\u201d is interpreted as a noun rather than as a modifier, leading other methods to treat it as a separate object due to the lack of syntactic context. Conversely, SynGen correctly interprets \u201cbaby\u201d as a modifier and accurately binds it to the rabbit. Similarly, in the prompt \u201ca white fire hydrant sitting in a field next to a red building\u201d (column 6 in Fig. 6 ###reference_###), \u201cfire\u201d is wrongly interpreted as an entity-noun, which leads to the unwarranted inclusion of a fire in the scene.\nAll methods, barring SynGen, grapple with entity entanglement [18 ###reference_18###, 19 ###reference_19###, 20 ###reference_20###, 21 ###reference_21###, 22 ###reference_22###], where some objects tend to strongly associate with their most common attribute (e.g., tomatoes are typically red). This is evident in columns 3 and 4 in Fig. 6 ###reference_###, where other methods fail to visually associate the blue attribute with the dog in \u201ca blue and white dog sleeps in front of a black door\u201d. Instead, they resort to typical attributes of the objects, generating a black and white dog.\nFurther qualitative analysis is provided in Section D.1 ###reference_###."
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Ablation study",
81
+ "text": "We evaluated the relative importance of the two terms in our loss Eq. 3 ###reference_###.\nThe positive term , which encourages alignment of the attention map of an object and its modifiers, and the negative loss term, , which discourages alignment with other modifiers and objects.\nWe sampled 100 prompts from the DVMP dataset and generated images with and without each of the two loss terms. See example in Fig. 7 ###reference_###.\nThen, raters were asked to select the best of four variants. Fig. 7 ###reference_###\nshows that raters preferred the variant that combined both the positive and the negative terms.\nMore examples are given in the supplemental\nAppendix B ###reference_###.\n###figure_6###"
82
+ },
83
+ {
84
+ "section_id": "5",
85
+ "parent_section_id": null,
86
+ "section_name": "Related Work",
87
+ "text": "Semantic leakage.\n[2 ###reference_2###] pointed out cases of semantic leakage in diffusion models, where properties of one entity-noun influence the depiction of another. [23 ###reference_23###] attributed this issue to a lack of understanding of syntax, specifically noting failures when processing texts requiring subtle syntactic binding comprehension. [6 ###reference_6###] identified semantic leakage issues in DALL-E, where properties of one entity-noun influence how other entity nouns are depicted. In this work, we pinpoint semantic leakage as a consequence of improper mapping between syntactic and visual binding.\nAttention-based interventions.\n[15 ###reference_15###] demonstrated that the cross-attention mechanism determines the spatial layout of entities in generated images.\nThis result suggested that cross-attention is causally involved in the aforementioned issues. A&E [11 ###reference_11###] addresses the problem of entity omission, where certain entities mentioned in the prompt do not appear in the generated image. They propose a loss function that encourages each noun token in the image to significantly attend to a corresponding image patch, thereby preventing its omission. Our approach is similar to [11 ###reference_11###] in that it updates the latent representation through a loss function over attention maps, during image generation.\nSyntax-based generation was also explored in [13 ###reference_13###], proposing the Structured Diffusion method. It aims to address the problem of missing entities and semantic leakage of attributes. This is achieved by parsing the prompt, extracting phrases corresponding to nouns and modifiers, and encoding them separately. They also intervene in the attention patterns, ensuring that each individual phrase influences the attention patterns. Our experiments show that it is better to implicitly influence the attention patterns through our loss which we dynamically optimize. In contrast, their intervention remains fixed.\nConcurrent to this work, [24 ###reference_24###] proposed an alternative approach to combine syntactic control and attention-based optimization. They extract nouns from prompts and train a layout predictor to identify the corresponding pixels for each noun. Then, they optimize the latents by encouraging the pixels corresponding to the objects to attend to CLIP representations of phrases containing those objects. While similar in spirit, the current paper demonstrates intervention in the generation process solely based on syntax, without explicitly learning the correspondence between image entities and tokens."
88
+ },
89
+ {
90
+ "section_id": "6",
91
+ "parent_section_id": null,
92
+ "section_name": "Limitations",
93
+ "text": "Like previous methods, the performance of SynGen degrades with the number of attributes to be depicted (see supplemental Fig. 12 ###reference_###). However, its decline is remarkably less pronounced compared to other methods. This decay in performance can be attributed to two primary factors: (1) an image begins to lose its visual appeal when the negative loss term becomes excessively large; (2) an overly cluttered image poses challenges in crafting a cohesive \u201cnarrative\u201d for all the concepts. We expect that some of these issues can be addressed with more hyper-parameter tuning.\nNaturally, the effectiveness of our method is intrinsically tied to the quality of the parser. When the parser fails to extract the stipulated syntactic relations, our method essentially operates akin to SD.\nFinally, SynGen takes longer to generate images with modifiers in the prompt than SD and slightly slower than than A&E (see Appendix A ###reference_###)."
94
+ },
95
+ {
96
+ "section_id": "7",
97
+ "parent_section_id": null,
98
+ "section_name": "Conclusions",
99
+ "text": "In this work, we target the improper binding problem, a common failure mode of text-conditioned diffusion models, where objects and their attributes incorrectly correspond to the entity-nouns and their modifiers in the prompt. To address it, we propose SynGen, an inference-time intervention method, with a loss function that encourages syntax-related modifiers and entity-nouns to have overlapping cross-attention maps, and discourages an overlap from cross-attention maps of other words in the prompt. We challenge our method with three datasets, including DVMP \u2013 a new dataset that is specially-designed to draw out hard cases of improper-binding problem. Our method demonstrates improvement of over 100% across all three datasets over the previous state-of-the-art. Finally, our work highlights the importance of linguistic structure during denoising for attaining faithful text-to-image generation, suggesting promising avenues for future research."
100
+ }
101
+ ],
102
+ "appendix": [
103
+ {
104
+ "section_id": "Appendix 1",
105
+ "parent_section_id": null,
106
+ "section_name": "Appendix A Implementation Details",
107
+ "text": ""
108
+ },
109
+ {
110
+ "section_id": "Appendix 2",
111
+ "parent_section_id": null,
112
+ "section_name": "Appendix B Additional Ablation Experiments",
113
+ "text": "###figure_7### In Section 4.3 ###reference_###, we investigate the importance of the positive and negative loss function terms using a human rater. Here, we accompany the rating with a qualitative analysis, to examine the effect of each term. To this end, we generate images for 15 randomly selected prompts, five from each dataset. Fig. 8 ###reference_### depicts a sample of the generated prompts.\nWe find that proper binding necessitates both the positive and negative terms: excluding the negative term from the loss function results in two noteworthy observations. First, the number of missing objects increase, evident by the missing crown, cat, metal chair, and tomato, in columns 1, 2, 4, and 5 in Fig. 8 ###reference_###. One consequence of missing objects is the apparent improper binding, indicated by the red backpack and black shirt in columns 1 and 3.\nOn the other hand, excluding the positive term results in fuzzier separation between objects. For instance, the cat is not completely formed, and is \u201cmerged\u201d with the pillow; and while it appears that there is some green residue on the dog, it is not colored green. Moreover, the grass is green, which indicates a semantic leakage.\nPutting these insights together, we observe that to some extent, the effect the loss terms is complementary. In addition to the increase of objects and proper binding, the images are more coherent (less cases of objects mixed into each other, such as the cat in the only-negative loss or the elephant in the only-positive loss).\n###figure_8### Recall that our method intervenes in latent denoising generation. In this appendix, we study the effect of the hyperparameters determining the number of steps in which we intervene.\nTo identify an ideal number of timesteps to intervene, we experiment with 100 randomly selected prompts from the DVMP dataset, a fixed random seed, and a number of update steps from 5 to 50, in increments of 5. Examples of this experiment are shown in Fig. 9 ###reference_###.\nWe observe that when intervening in a small number of timesteps, our method failed to adequately mitigate semantic leakage or that images are not completely formed. For instance, the apple in column 1 in the 15-steps example is cartoon-ish, while the dog is not. Conversely, intervening for the full 50 timesteps resulted in an increase rate of blurred images (potentially due to the significant modification of the latent, which shifts it away from the learned distribution). We conclude that the optimal number of timesteps for intervention is 25, as this allows for effective mitigation of improper binding, while still generating visually appealing images.\nThe scale factor affects the update step size.\nRecall the update step stated in Section 2 ###reference_### . Here, is the scale-factor.\nTo determine a good selection for the scale-factor, we generate 100 randomly sampled prompts from the DVMP dataset, with a scale-factor value from 1 to 40, in increments of 10.\nAs can be seen in Fig. 10 ###reference_###, we observe that merely updating the latent using a scale-factor of 1 yields relatively good results in terms of improper binding, which confirms the utility of our loss function. However, such a low scale-factor also consistently leads to missing objects.\nInterestingly, for greater scale-factor values, the generations become alike in their overall look, but are nonetheless very different. 
As an example, for both values, 10 and 30, the sliced banana is missing from the image in column 2, but the 30-value does result in a spotted teal skateboard. In column 3, values below 20 lead to images that contain two pandas (none of which are spotted), which indicates the proper binding process, and that the latent was not updated enough. On the other hand, a value greater than 20 leads to an image of a striped rabbit, instead of a spotted rabbit.\nOne interesting conclusion from this experiment is that the greater the scale-factor, the stronger the concept separation. However, this is only true to a point. For a great enough value, generations become too blurred or simply lose their visual appeal.\n###figure_9###"
114
+ },
115
+ {
116
+ "section_id": "Appendix 3",
117
+ "parent_section_id": null,
118
+ "section_name": "Appendix C Additional Quantitative Analyses",
119
+ "text": "To study the efficacy of SynGen relative to the baselines in improper binding setting, we analyze the results under three perspectives. (1) as a function of repeating entities and modifiers; (2) as a function of the number of modifiers; and (3) degree of entanglement.\nSamples of generations are shown in Fig. 14 ###reference_###."
120
+ },
121
+ {
122
+ "section_id": "Appendix 4",
123
+ "parent_section_id": null,
124
+ "section_name": "Appendix D Additional Qualitative Results",
125
+ "text": "###figure_10### In Fig. 15 ###reference_###, examples from the DVMP challenge set include 2 to 6 modifiers.\nWhile errors of all types are prevalent regardless of the number of modifiers, their frequency tends to rise as more modifiers are added.\nAs for SynGen, although it does not display semantic leakages at an increased rate compared to the baselines (as quantitatively demonstrated in Fig. 12 ###reference_###), it does show a tendency to generate more than the specified number of entities as the modifier count increases. This behavior is observable in rows 8 and 10 for SynGen, and in rows 7 through 10 for the baselines.\n###figure_11### As described in Section 5 ###reference_###, concurrent to this work, [24 ###reference_24###] developed a method to optimize the latents. While they primarily attend spatial and temporal relations, they too report on improper binding, namely, attribute mismatch. Thus, we extend the tables from Section 4 ###reference_###, to include Spatial-Temporal Diffusion, see Fig. 16 ###reference_###, Fig. 17 ###reference_###, Fig. 18 ###reference_###.\nBased on these 18 images, we observe that Spatial-Temporal Diffusion consistently misses at least one entity from the prompt. As an example, see Fig. 16 ###reference_###. The images in columns 1 and 2 miss a crown (but include \u201cwooden\u201d objects), and columns 3 and 4 miss a lion and exhibit semantic leakage.\nIn other cases, we note many cases of semantic leakage in and out of the prompt. For instance, in Fig. 18 ###reference_###, in column 2 the clock is brown and the wall is pink, and in column 3, the chair is pink.\n###figure_12### ###figure_13### ###figure_14### A comparison between Stable Diffusion and Structured Diffusion is depicted in Fig. 19 ###reference_###. The findings from the study by [11 ###reference_11###] suggest that the generated images from Structured Diffusion are often similar to those generated by Stable Diffusion, with limited improvements in addressing semantic flaws and enhancing image quality. This is further supported by the comparable results presented in our findings Table 1 ###reference_###. Therefore, while we include all baselines in our evaluations, our qualitative analysis only showcases images produced by the slightly superior Structured Diffusion.\n###figure_15###"
126
+ },
127
+ {
128
+ "section_id": "Appendix 5",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix E SynGen Failures",
131
+ "text": "We observe three recurring types of failure SynGen displays Fig. 20 ###reference_###. First, when there are many modifiers and entities in the prompt, despite the results in Fig. 12 ###reference_###, we note that sometimes the negative loss component becomes exceedingly large, and thus, pushes the latent out of the distribution the decoder was trained on. Consequently, images become blurred, or contain concepts which are successfully separated, but are incoherent. This is likely because our method over-fixates on incorporating all elements described in the prompt.\nSecond, while SynGen typically successfully addresses the possible error cases described in Section 4.2 ###reference_###, at times it can neglect generating all objects, unify separate entities, or neglect generating attributes. We conjecture that it is because the cross-attention maps of the modifier and its corresponding entity do not overlap enough. We note it usually occurs when there are many modifiers that refer to the same entity.\nFinally, as common with many diffusion models, we report a recurring issue with faithfulness to the number of units specified in the prompt, for a certain entity. For instance, upon receiving prompts containing \u201ca strawberry\u201d, SynGen generates images with multiple strawberries, instead of just one. One explanation to this problem is that the representation of a certain entity begins \u201cscattered\u201d, and is never quite formed into a single cluster. Interestingly, the opposite problem, where multiple units are \u201cmerged\u201d into one, occurs far less in the generations of SynGen. Possibly, because of the inherent objective function of our loss, which \u201cpushes away\u201d foreign concepts from one another.\n###figure_16###"
132
+ },
133
+ {
134
+ "section_id": "Appendix 6",
135
+ "parent_section_id": null,
136
+ "section_name": "Appendix F The Diverse Multiple Modifiers Prompts (DVMP) dataset",
137
+ "text": "In Section 3.2 ###reference_SSS0.Px3### we describe DVMP, a new dataset containing rich and challenging combinations, for the purpose of evaluating improper binding.\nIn total, DVMP has 18 types of objects, 16 types of animals, and 4 types of fruit. There are four animal modifiers, 7 object modifiers, two fruit modifiers, and 13 colors. A comprehensive account of the entities and their possible modifiers is shown in Table 4 ###reference_###.\nbackpack, crown, suitcase, chair, balloon, bow, car, bowl, bench, clock, camera, umbrella, guitar, shoe, hat, surfboard, skateboard, bicycle\nmodern, spotted, wooden, metal, curved, spiky, checkered\napple, tomato, banana, strawberry\nsliced, skewered\ncat, dog, bird, bear, lion, horse, elephant, monkey, frog, turtle, rabbit, mouse, panda, zebra, gorilla, penguin\nfurry, baby, spotted, sleepy\nred, orange, yellow, green, blue, purple, pink, brown, gray, black, white, beige, teal"
138
+ },
139
+ {
140
+ "section_id": "Appendix 7",
141
+ "parent_section_id": null,
142
+ "section_name": "Appendix G Extended Evaluation",
143
+ "text": "In the manual evaluation procedure detailed in Section 3.3 ###reference_### the evaluator is tasked with comparing various image generations and selecting the optimal image based on multiple criteria. The guidelines and examples given to the evaluators are presented in Fig. 21 ###reference_### and Fig. 22 ###reference_###. Fig. 23 ###reference_### provides a screenshot of the rating interface.\nThe full results of the human evaluation are given in Table 5 ###reference_###\nA common approach to automatically assess text-based image generation is by computing the cosine similarity between an image and prompt, using a vision-language model like CLIP [9 ###reference_9###]. However, the very challenge we tackle here is rooted in CLIP\u2019s failure in establishing correct mapping between syntactic bindings and visual bindings, functioning like a bag-of-words model [10 ###reference_10###]. As an example, suppose CLIP is prompted with \u201ca blue room with a yellow window\u201d. If we present CLIP with an image of a yellow room with a blue window, it may yield a similar score to an image that accurately depicts a blue room with a yellow window.\nIn an attempt to address this flaw, we segment prompts to phrases containing entity-nouns and their corresponding modifiers (e.g., \u201ca blue room\u201d and \u201ca yellow window\u201d), and compute the similarity between these segmented phrases and the image. We then aggregate the result to a single score by computing the mean. With this approach, we expect CLIP to properly associate the modifiers (e.g., \u201cblue\u201d and \u201cyellow\u201d) with the correct entity-noun (i.e., \u201croom\u201d and \u201cwindow\u201d) as there is only one entity-noun in each segment. Unfortunately, this metric achieves relatively low agreement with the majority selection of human evaluation, only 43.5% of the time, where 25% is random selection. Despite the low agreement, we note the overall trend of selections of this automatic metric is very similar to the human majority selection. Table 6 ###reference_### shows the results of our automatic evaluation.\n###table_1###"
144
+ }
145
+ ],
146
+ "tables": {
147
+ "1": {
148
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T1.30\" style=\"width:235.1pt;height:320.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-1.2pt,1.6pt) scale(0.99,0.99) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.30.30\">\n<tr class=\"ltx_tr\" id=\"S4.T1.30.30.31\">\n<td class=\"ltx_td ltx_border_tt\" colspan=\"2\" id=\"S4.T1.30.30.31.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T1.30.30.31.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Concept</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T1.30.30.31.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Visual</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.30.32\">\n<td class=\"ltx_td\" colspan=\"2\" id=\"S4.T1.30.30.32.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.30.30.32.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Separation</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.30.30.32.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Appeal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.30.33\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.30.30.33.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Dataset</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.30.30.33.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Model</td>\n<td class=\"ltx_td\" id=\"S4.T1.30.30.33.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T1.30.30.33.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.2.2.3\" rowspan=\"5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.2.2.2.3.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S4.T1.2.2.2.3.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T1.2.2.2.3.1.1.1\">A&amp;E</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.2.2.2.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.1.1.1.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T1.1.1.1.1.1\">38.42</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.2.2.2.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T1.2.2.2.2.1\">37.85</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.4.4.4.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.3.3.3.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.4.4.4.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.6.6.6.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.5.5.5.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.6.6.6.2\" 
style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.8.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.8.8.8.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.7.7.7.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.8.8.8.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.10.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.10.10.10.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">No majority winner</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.9.9.9.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.10.10.10.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.12.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.12.12.12.3\" rowspan=\"5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.12.12.12.3.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S4.T1.12.12.12.3.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T1.12.12.12.3.1.1.1\">DVMP</span>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T1.12.12.12.3.1.1.2\">(challenge set)</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.12.12.12.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.11.11.11.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T1.11.11.11.1.1\">24.84</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.12.12.12.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T1.12.12.12.2.1\">16.00</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.14.14.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.14.14.14.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.13.13.13.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.14.14.14.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.16.16.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.16.16.16.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.15.15.15.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.16.16.16.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.18.18.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.18.18.18.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.17.17.17.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.18.18.18.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.20.20.20\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.20.20.20.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">No majority winner</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.19.19.19.1\" 
style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.20.20.20.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.22.22.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S4.T1.22.22.22.3\" rowspan=\"5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S4.T1.22.22.22.3.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S4.T1.22.22.22.3.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T1.22.22.22.3.1.1.1\">ABC-6K</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T1.22.22.22.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.21.21.21.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T1.21.21.21.1.1\">28.00</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T1.22.22.22.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T1.22.22.22.2.1\">18.34</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.24.24.24\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.24.24.24.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.23.23.23.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.24.24.24.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.26.26.26\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.26.26.26.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.25.25.25.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.26.26.26.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.28.28.28\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T1.28.28.28.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.27.27.27.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T1.28.28.28.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.30.30.30\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T1.30.30.30.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">No majority winner</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.29.29.29.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T1.30.30.30.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Human evaluation of all methods on the three datasets.\nThe table reports scores for concept separation (how well the image matches the prompt) and visual appeal.\nValues are the fraction of majority vote of three raters, normalized to sum to 100.\n</figcaption>\n</figure>",
149
+ "capture": "Table 1: Human evaluation of all methods on the three datasets.\nThe table reports scores for concept separation (how well the image matches the prompt) and visual appeal.\nValues are the fraction of majority vote of three raters, normalized to sum to 100.\n"
150
+ },
151
+ "2": {
152
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>\nResults of the fine-grained concept separation experiment. Proper Binding should be maximized to 100, while Improper Binding and Entity Neglect should be minimized to 0.\n</figcaption>\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.39\" style=\"width:386.7pt;height:249.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-2.0pt,1.3pt) scale(0.99,0.99) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.39.39\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3\">\n<td class=\"ltx_td ltx_border_tt\" colspan=\"2\" id=\"S4.T2.3.3.3.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T2.1.1.1.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Proper Binding \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T2.2.2.2.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Improper Binding \n</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"S4.T2.3.3.3.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Entity Neglect \n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.39.39.40\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.39.39.40.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Dataset</td>\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.39.39.40.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Model</td>\n<td class=\"ltx_td\" id=\"S4.T2.39.39.40.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.39.39.40.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td\" id=\"S4.T2.39.39.40.5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.6.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.6.6.6.4\" rowspan=\"4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.6.6.6.4.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S4.T2.6.6.6.4.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T2.6.6.6.4.1.1.1\">A&amp;E</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.6.6.6.5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.4.4.4.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.4.4.4.1.1\">94.76</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.5.5.5.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.5.5.5.2.1\">23.81</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.6.6.6.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.9.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.9.9.9.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.7.7.7.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.8.8.8.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.9.9.9.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath 
ltx_font_bold\" id=\"S4.T2.9.9.9.3.1\">01.41</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.12.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.12.12.12.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.10.10.10.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.11.11.11.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.12.12.12.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.15.15.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.15.15.15.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.13.13.13.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.14.14.14.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.15.15.15.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.18.18.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.18.18.18.4\" rowspan=\"4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.18.18.18.4.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S4.T2.18.18.18.4.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T2.18.18.18.4.1.1.1\">DVMP</span>\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T2.18.18.18.4.1.1.2\">(challenge set)</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.18.18.18.5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.16.16.16.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.16.16.16.1.1\">74.90</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.17.17.17.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.17.17.17.2.1\">19.49</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.18.18.18.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.21.21.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.21.21.21.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.19.19.19.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.20.20.20.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.21.21.21.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.21.21.21.3.1\">10.77</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.24.24.24\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.24.24.24.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.22.22.22.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.23.23.23.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.24.24.24.3\" 
style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.27.27.27\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.27.27.27.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.25.25.25.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.26.26.26.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.27.27.27.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.30.30.30\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"S4.T2.30.30.30.4\" rowspan=\"4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text\" id=\"S4.T2.30.30.30.4.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"S4.T2.30.30.30.4.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"S4.T2.30.30.30.4.1.1.1\">ABC-6K</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T2.30.30.30.5\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.28.28.28.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.28.28.28.1.1\">63.68</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.29.29.29.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.29.29.29.2.1\">14.37</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"S4.T2.30.30.30.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.33.33.33\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.33.33.33.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.31.31.31.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.32.32.32.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.33.33.33.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_bold\" id=\"S4.T2.33.33.33.3.1\">33.18</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.36.36.36\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T2.36.36.36.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.34.34.34.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.35.35.35.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"S4.T2.36.36.36.3\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.39.39.39\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T2.39.39.39.4\" style=\"padding-left:1.0pt;padding-right:1.0pt;\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.37.37.37.1\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.38.38.38.2\" style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"S4.T2.39.39.39.3\" 
style=\"padding-left:1.0pt;padding-right:1.0pt;\"></td>\n</tr>\n</table>\n</span></div>\n</figure>",
153
+ "capture": "Table 2: \nResults of the fine-grained concept separation experiment. Proper Binding should be maximized to 100, while Improper Binding and Entity Neglect should be minimized to 0.\n"
154
+ },
155
+ "3": {
156
+ "table_html": "<figure class=\"ltx_table\" id=\"A6.T4\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>List of entities and their modifiers in the DVMP dataset. Colors are not restricted to categories.</figcaption><div class=\"ltx_flex_figure ltx_flex_table\">\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_flex_size_1 ltx_align_middle\" id=\"A6.T4.1\">\n<tr class=\"ltx_tr\" id=\"A6.T4.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A6.T4.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"A6.T4.1.1.1.1\">Category</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"A6.T4.1.1.2\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"A6.T4.1.1.2.1\">Entities</span></td>\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"A6.T4.1.1.3\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"A6.T4.1.1.3.1\">Modifiers</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T4.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A6.T4.1.2.1\">General</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A6.T4.1.2.2\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.1.2.2.1\">backpack, crown, suitcase, chair, balloon, bow, car, bowl, bench, clock, camera, umbrella, guitar, shoe, hat, surfboard, skateboard, bicycle</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A6.T4.1.2.3\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.1.2.3.1\">modern, spotted, wooden, metal, curved, spiky, checkered</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T4.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A6.T4.1.3.1\">Fruit</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A6.T4.1.3.2\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.1.3.2.1\">apple, tomato, banana, strawberry</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_t\" id=\"A6.T4.1.3.3\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.1.3.3.1\">sliced, skewered</p>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T4.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"A6.T4.1.4.1\">Animals</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"A6.T4.1.4.2\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.1.4.2.1\">cat, dog, bird, bear, lion, horse, elephant, monkey, frog, turtle, rabbit, mouse, panda, zebra, gorilla, penguin</p>\n</td>\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"A6.T4.1.4.3\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.1.4.3.1\">furry, baby, spotted, sleepy</p>\n</td>\n</tr>\n</table>\n</div>\n<div class=\"ltx_flex_break\"></div>\n<div class=\"ltx_flex_cell ltx_flex_size_1\">\n<table class=\"ltx_tabular ltx_flex_size_1 ltx_align_middle\" id=\"A6.T4.2\">\n<tr class=\"ltx_tr\" id=\"A6.T4.2.1\">\n<td class=\"ltx_td ltx_align_justify ltx_border_tt\" id=\"A6.T4.2.1.1\"><span class=\"ltx_text ltx_font_bold ltx_align_top\" id=\"A6.T4.2.1.1.1\">Color Modifiers</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T4.2.2\">\n<td class=\"ltx_td ltx_align_justify ltx_border_bb ltx_border_t\" id=\"A6.T4.2.2.1\">\n<p class=\"ltx_p ltx_align_center ltx_align_top\" id=\"A6.T4.2.2.1.1\">red, orange, yellow, green, blue, purple, pink, brown, gray, black, white, beige, teal</p>\n</td>\n</tr>\n</table>\n</div>\n</div>\n</figure>",
157
+ "capture": "Table 4: List of entities and their modifiers in the DVMP dataset. Colors are not restricted to categories."
158
+ },
159
+ "4": {
160
+ "table_html": "<figure class=\"ltx_table\" id=\"A7.T5\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>The population vote of three raters was normalized to sum to 100 and the standard error mean was added. The table reports the scores for concept separation (how well the image matches the prompt) and visual appeal for different models on each dataset.</figcaption>\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A7.T5.30\">\n<tr class=\"ltx_tr\" id=\"A7.T5.30.31\">\n<td class=\"ltx_td ltx_border_tt\" colspan=\"2\" id=\"A7.T5.30.31.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A7.T5.30.31.2\">Concept Separation</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A7.T5.30.31.3\">Visual Appeal</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.30.32\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.30.32.1\">Dataset</td>\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.30.32.2\">Model</td>\n<td class=\"ltx_td\" id=\"A7.T5.30.32.3\"></td>\n<td class=\"ltx_td\" id=\"A7.T5.30.32.4\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A7.T5.2.2.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"A7.T5.2.2.3.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"A7.T5.2.2.3.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"A7.T5.2.2.3.1.1.1\">A&amp;E</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A7.T5.2.2.4\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T5.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T5.2.2.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.4.4.3\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.3.3.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.4.4.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.6.6.3\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.5.5.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.6.6.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.8.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.8.8.3\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.8.8.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.10.10.3\">No majority winner</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.9.9.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.10.10.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A7.T5.12.12.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"A7.T5.12.12.3.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"A7.T5.12.12.3.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"A7.T5.12.12.3.1.1.1\">DVMP</span>\n<span class=\"ltx_p ltx_align_center\" id=\"A7.T5.12.12.3.1.1.2\">(challenge set)</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A7.T5.12.12.4\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T5.11.11.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T5.12.12.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.14.14\">\n<td class=\"ltx_td ltx_align_left\" 
id=\"A7.T5.14.14.3\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.13.13.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.14.14.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.16.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.16.16.3\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.15.15.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.16.16.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.18.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.18.18.3\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.17.17.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.18.18.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.20.20\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.20.20.3\">No majority winner</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.19.19.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.20.20.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.22.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_t\" id=\"A7.T5.22.22.3\" rowspan=\"5\"><span class=\"ltx_text\" id=\"A7.T5.22.22.3.1\">\n<span class=\"ltx_inline-block ltx_parbox ltx_align_middle\" id=\"A7.T5.22.22.3.1.1\" style=\"width:56.9pt;\">\n<span class=\"ltx_p ltx_align_center\" id=\"A7.T5.22.22.3.1.1.1\">ABC-6K</span>\n</span></span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A7.T5.22.22.4\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T5.21.21.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T5.22.22.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.24.24\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.24.24.3\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.23.23.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.24.24.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.26.26\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.26.26.3\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.25.25.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.26.26.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.28.28\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T5.28.28.3\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.27.27.1\"></td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T5.28.28.2\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T5.30.30\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A7.T5.30.30.3\">No majority winner</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A7.T5.29.29.1\"></td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A7.T5.30.30.2\"></td>\n</tr>\n</table>\n</figure>",
161
+ "capture": "Table 5: The population vote of three raters was normalized to sum to 100 and the standard error mean was added. The table reports the scores for concept separation (how well the image matches the prompt) and visual appeal for different models on each dataset."
162
+ },
163
+ "5": {
164
+ "table_html": "<figure class=\"ltx_table\" id=\"A7.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>\nAutomatic evaluation of all methods on the three datasets. The table reports scores for concept separation (how well the image matches the prompt) and visual appeal. Values are the fraction of majority vote of three raters, normalized to sum to 100.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"A7.T6.1\">\n<tr class=\"ltx_tr\" id=\"A7.T6.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A7.T6.1.1.1\">Method</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A7.T6.1.1.2\">DVMP (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A7.T6.1.1.3\">ABC-6K</td>\n<td class=\"ltx_td ltx_align_right ltx_border_tt\" id=\"A7.T6.1.1.4\">A&amp;E</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T6.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A7.T6.1.2.1\">SynGen (ours)</td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T6.1.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"A7.T6.1.2.2.1\">47.33</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T6.1.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"A7.T6.1.2.3.1\">41.33</span></td>\n<td class=\"ltx_td ltx_align_right ltx_border_t\" id=\"A7.T6.1.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"A7.T6.1.2.4.1\">44.63</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T6.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T6.1.3.1\">A&amp;E</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T6.1.3.2\">27.66</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T6.1.3.3\">24.33</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T6.1.3.4\">27.11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T6.1.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A7.T6.1.4.1\">Structured Diffusion</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T6.1.4.2\">12.84</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T6.1.4.3\">17.84</td>\n<td class=\"ltx_td ltx_align_right\" id=\"A7.T6.1.4.4\">11.87</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A7.T6.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A7.T6.1.5.1\">Stable Diffusion</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A7.T6.1.5.2\">12.17</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A7.T6.1.5.3\">16.50</td>\n<td class=\"ltx_td ltx_align_right ltx_border_bb\" id=\"A7.T6.1.5.4\">16.39</td>\n</tr>\n</table>\n</figure>",
165
+ "capture": "Table 6: \nAutomatic evaluation of all methods on the three datasets. The table reports scores for concept separation (how well the image matches the prompt) and visual appeal. Values are the fraction of majority vote of three raters, normalized to sum to 100."
166
+ }
167
+ },
168
+ "image_paths": {
169
+ "1": {
170
+ "figure_path": "2306.08877v3_figure_1.png",
171
+ "caption": "Figure 1: Visual bindings of objects and their attributes may fail to match the linguistic bindings between entities and their modifiers. Our approach, SynGen, corrects these errors by matching the cross-attention maps of entities and their modifiers.",
172
+ "url": "http://arxiv.org/html/2306.08877v3/x1.png"
173
+ },
174
+ "2": {
175
+ "figure_path": "2306.08877v3_figure_2.png",
176
+ "caption": "Figure 2: The SynGen workflow and architecture. (a) The text prompt is analyzed to extract entity-nouns and their modifiers. (b) SynGen adds intermediates steps to the diffusion denoising process. In that step, we update the latent representation to minimize a loss over the cross attention maps of entity-nouns and their modifiers (Eq 3).",
177
+ "url": "http://arxiv.org/html/2306.08877v3/x2.png"
178
+ },
179
+ "3": {
180
+ "figure_path": "2306.08877v3_figure_3.png",
181
+ "caption": "Figure 3: Evolution of cross-attention maps and latent representation along denoising steps, for the prompt \u201ca red crown and a golden strawberry\u201d.\nAt first, the attention maps of all modifiers and entity-nouns are intertwined, regardless of the expected binding. During denoising, attention maps gradually becomes separated, adhering the syntactic bindings. The vertical line indicates that after 25 steps intervention stops, but the attention maps remain separated.",
182
+ "url": "http://arxiv.org/html/2306.08877v3/x3.png"
183
+ },
184
+ "4": {
185
+ "figure_path": "2306.08877v3_figure_4.png",
186
+ "caption": "Figure 4: Qualitative comparison for prompts from the Attend-and-Excite dataset.\nFor every prompt, the same three seeds are used for all methods.",
187
+ "url": "http://arxiv.org/html/2306.08877v3/x4.png"
188
+ },
189
+ "5": {
190
+ "figure_path": "2306.08877v3_figure_5.png",
191
+ "caption": "Figure 5: Qualitative comparison for prompts from the DVMP dataset.\nFor every prompt, the same three seeds are used for all methods.",
192
+ "url": "http://arxiv.org/html/2306.08877v3/x5.png"
193
+ },
194
+ "6": {
195
+ "figure_path": "2306.08877v3_figure_6.png",
196
+ "caption": "Figure 6: Qualitative examples for ABC-6K prompts.\nFor every prompt, all methods use the same three seeds.",
197
+ "url": "http://arxiv.org/html/2306.08877v3/x6.png"
198
+ },
199
+ "7": {
200
+ "figure_path": "2306.08877v3_figure_7.png",
201
+ "caption": "Table 3: Ablation of loss components. Values are percent preferred by human raters.",
202
+ "url": "http://arxiv.org/html/2306.08877v3/x7.png"
203
+ },
204
+ "8": {
205
+ "figure_path": "2306.08877v3_figure_8.png",
206
+ "caption": "Figure 8: We examine the effect of employing only one of the two losses instead of both. All images were generated using the same random seed.",
207
+ "url": "http://arxiv.org/html/2306.08877v3/x8.png"
208
+ },
209
+ "9": {
210
+ "figure_path": "2306.08877v3_figure_9.png",
211
+ "caption": "Figure 9: We experiment with varying number of diffusion steps and examine the effect of changing the number of diffusion steps for which we intervene with the cross attention maps. All images were generated using the same random seed.",
212
+ "url": "http://arxiv.org/html/2306.08877v3/x9.png"
213
+ },
214
+ "10": {
215
+ "figure_path": "2306.08877v3_figure_10.png",
216
+ "caption": "Figure 10: Qualitative comparison between scale factor values for SynGen.\nFor every prompt, the same seeds are applied. We anecdotally show our scale-factor value (we use the value 20) provides superior results.",
217
+ "url": "http://arxiv.org/html/2306.08877v3/x10.png"
218
+ },
219
+ "11(a)": {
220
+ "figure_path": "2306.08877v3_figure_11(a).png",
221
+ "caption": "(a)\nFigure 11: The performance of SynGen and the baselines in concept separation on prompts containing (a) repeating modifiers; and (b) repeating entities in the DVMP dataset.",
222
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Analysis_files/max_repeating_modifiers__concept_majority__winners_only.png"
223
+ },
224
+ "11(b)": {
225
+ "figure_path": "2306.08877v3_figure_11(b).png",
226
+ "caption": "(b)\nFigure 11: The performance of SynGen and the baselines in concept separation on prompts containing (a) repeating modifiers; and (b) repeating entities in the DVMP dataset.",
227
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Analysis_files/max_repeating_entities__concept_majority__winners_only.png"
228
+ },
229
+ "12": {
230
+ "figure_path": "2306.08877v3_figure_12.png",
231
+ "caption": "Figure 12: Concept Separation as a function of number of modifiers in a prompt in the DVMP dataset, introduced in Section 3.2. Only the top-competing method (Attend-and-Excite) is plotted for readability.",
232
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Analysis_files/num_modifiers__concept_majority__winners_only.png"
233
+ },
234
+ "13": {
235
+ "figure_path": "2306.08877v3_figure_13.png",
236
+ "caption": "Figure 13: The performance of SynGen and the baselines in concept separation when grouping the prompts with respect to entangled modifiers in the DVMP dataset.",
237
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Analysis_files/entangled_groups__concept_majority__winners_only.png"
238
+ },
239
+ "14": {
240
+ "figure_path": "2306.08877v3_figure_14.png",
241
+ "caption": "Figure 14: Samples from the analyses in Appendix C. (a) a case of recurring entity (strawberry); (b) a recurring modifier (black) and entity (apple); (c) and (d) contain entangled entities (a blue bear and a purple strawberry); (e), (f), (g) are examples of prompts with more than two modifiers.",
242
+ "url": "http://arxiv.org/html/2306.08877v3/x11.png"
243
+ },
244
+ "15": {
245
+ "figure_path": "2306.08877v3_figure_15.png",
246
+ "caption": "Figure 15: \nExtended qualitative comparison for prompts from the DVMP challenge set.",
247
+ "url": "http://arxiv.org/html/2306.08877v3/x12.png"
248
+ },
249
+ "16": {
250
+ "figure_path": "2306.08877v3_figure_16.png",
251
+ "caption": "Figure 16: Extended qualitative comparison for prompts from the DVMP dataset. SynGen and Spatial-Temporal Diffusion [24].",
252
+ "url": "http://arxiv.org/html/2306.08877v3/x13.png"
253
+ },
254
+ "17": {
255
+ "figure_path": "2306.08877v3_figure_17.png",
256
+ "caption": "Figure 17: Extended qualitative comparison for prompts from the ABC6K dataset. SynGen and Spatial-Temporal Diffusion [24].",
257
+ "url": "http://arxiv.org/html/2306.08877v3/x14.png"
258
+ },
259
+ "18": {
260
+ "figure_path": "2306.08877v3_figure_18.png",
261
+ "caption": "Figure 18: Extended qualitative comparison for prompts from the Attend-and-Excite dataset. SynGen and Spatial-Temporal Diffusion [24].",
262
+ "url": "http://arxiv.org/html/2306.08877v3/x15.png"
263
+ },
264
+ "19": {
265
+ "figure_path": "2306.08877v3_figure_19.png",
266
+ "caption": "Figure 19: Side-by-side generations of StableDiffusion and StructureDiffusion.",
267
+ "url": "http://arxiv.org/html/2306.08877v3/x16.png"
268
+ },
269
+ "20": {
270
+ "figure_path": "2306.08877v3_figure_20.png",
271
+ "caption": "Figure 20: Frequent failure modes in SynGen. (a) depicts a case of blurred image, (b) incoherent image which maintains concept separation. Both are a result of excessive updates to the latent, resulting from a large negative loss term. In example (c), the zebra and lion are merged into a single entity and (d) omits the sleepy lion. We conjecture (c) and (d) are a result of too little updates. (e) and (f) exhibit the well-known issue of flawed mapping between the number of units an entity is mentioned in the prompt to the generated image.",
272
+ "url": "http://arxiv.org/html/2306.08877v3/x17.png"
273
+ },
274
+ "21": {
275
+ "figure_path": "2306.08877v3_figure_21.png",
276
+ "caption": "Figure 21: The instructions that were given to the raters.",
277
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Figures/amt_instructions_1.png"
278
+ },
279
+ "22": {
280
+ "figure_path": "2306.08877v3_figure_22.png",
281
+ "caption": "Figure 22: Examples given to raters in their instructions. Each example consists of a prompt and two images: A good match (top) and a bad match (bottom) for the concept separation criterion. These examples were accompanied by text explaining why the images are considered a good (or bad) match to the prompt.",
282
+ "url": "http://arxiv.org/html/2306.08877v3/x18.png"
283
+ },
284
+ "23": {
285
+ "figure_path": "2306.08877v3_figure_23.png",
286
+ "caption": "Figure 23: A screenshot of the AMT task. The order of images was randomized per HIT. \u201cequally good\u201d and \u201cequally bad\u201d were merged during post-processing into \"no winner\", to simplify presentation of results.",
287
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Figures/amt_instructions_2.png"
288
+ },
289
+ "24": {
290
+ "figure_path": "2306.08877v3_figure_24.png",
291
+ "caption": "Figure 24: A screenshot of the fine-grained AMT task.",
292
+ "url": "http://arxiv.org/html/2306.08877v3/extracted/5358687/Figures/quantifying_qa_instructions.png"
293
+ }
294
+ },
295
+ "validation": true,
296
+ "references": [
297
+ {
298
+ "1": {
299
+ "title": "High-resolution image synthesis with latent diffusion models.",
300
+ "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.",
301
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684\u201310695, 2022.",
302
+ "url": null
303
+ }
304
+ },
305
+ {
306
+ "2": {
307
+ "title": "Hierarchical text-conditional image generation with clip latents.",
308
+ "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.",
309
+ "venue": "arXiv preprint arXiv:2204.06125, 2022.",
310
+ "url": null
311
+ }
312
+ },
313
+ {
314
+ "3": {
315
+ "title": "Photorealistic text-to-image diffusion models with deep language understanding, 2022.",
316
+ "author": "Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi.",
317
+ "venue": null,
318
+ "url": null
319
+ }
320
+ },
321
+ {
322
+ "4": {
323
+ "title": "ediffi: Text-to-image diffusion models with an ensemble of expert denoisers.",
324
+ "author": "Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al.",
325
+ "venue": "arXiv preprint arXiv:2211.01324, 2022.",
326
+ "url": null
327
+ }
328
+ },
329
+ {
330
+ "5": {
331
+ "title": "Testing relational understanding in text-guided image generation.",
332
+ "author": "Colin Conwell and Tomer Ullman.",
333
+ "venue": "arXiv preprint arXiv:2208.00005, 2022.",
334
+ "url": null
335
+ }
336
+ },
337
+ {
338
+ "6": {
339
+ "title": "DALLE-2 is seeing double: Flaws in word-to-concept mapping in Text2Image models.",
340
+ "author": "Royi Rassin, Shauli Ravfogel, and Yoav Goldberg.",
341
+ "venue": "In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 335\u2013345, Abu Dhabi, United Arab Emirates (Hybrid), December 2022. Association for Computational Linguistics.",
342
+ "url": null
343
+ }
344
+ },
345
+ {
346
+ "7": {
347
+ "title": "Adding conditional control to text-to-image diffusion models, 2023.",
348
+ "author": "Lvmin Zhang and Maneesh Agrawala.",
349
+ "venue": null,
350
+ "url": null
351
+ }
352
+ },
353
+ {
354
+ "8": {
355
+ "title": "Key-locked rank one editing for text-to-image personalization.",
356
+ "author": "Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon.",
357
+ "venue": "In ACM SIGGRAPH 2023 Conference Proceedings, SIGGRAPH \u201923, 2023.",
358
+ "url": null
359
+ }
360
+ },
361
+ {
362
+ "9": {
363
+ "title": "Learning transferable visual models from natural language supervision, 2021.",
364
+ "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.",
365
+ "venue": null,
366
+ "url": null
367
+ }
368
+ },
369
+ {
370
+ "10": {
371
+ "title": "When and why vision-language models behave like bags-of-words, and what to do about it?",
372
+ "author": "Mert Yuksekgonul, Federico Bianchi, Pratyusha Kalluri, Dan Jurafsky, and James Zou.",
373
+ "venue": "In The Eleventh International Conference on Learning Representations, 2023.",
374
+ "url": null
375
+ }
376
+ },
377
+ {
378
+ "11": {
379
+ "title": "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models.",
380
+ "author": "Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or.",
381
+ "venue": "arXiv preprint arXiv:2301.13826, 2023.",
382
+ "url": null
383
+ }
384
+ },
385
+ {
386
+ "12": {
387
+ "title": "Compositional visual generation with composable diffusion models.",
388
+ "author": "Nan Liu, Shuang Li, Yilun Du, Antonio Torralba, and Joshua B Tenenbaum.",
389
+ "venue": "In Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XVII, pages 423\u2013439. Springer, 2022.",
390
+ "url": null
391
+ }
392
+ },
393
+ {
394
+ "13": {
395
+ "title": "Training-free structured diffusion guidance for compositional text-to-image synthesis.",
396
+ "author": "Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang.",
397
+ "venue": "arXiv preprint arXiv:2212.05032, 2022.",
398
+ "url": null
399
+ }
400
+ },
401
+ {
402
+ "14": {
403
+ "title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing.",
404
+ "author": "Matthew Honnibal and Ines Montani.",
405
+ "venue": "To appear, 2017.",
406
+ "url": null
407
+ }
408
+ },
409
+ {
410
+ "15": {
411
+ "title": "Prompt-to-prompt image editing with cross attention control.",
412
+ "author": "Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or.",
413
+ "venue": "arXiv preprint arXiv:2208.01626, 2022.",
414
+ "url": null
415
+ }
416
+ },
417
+ {
418
+ "16": {
419
+ "title": "Microsoft coco: Common objects in context, 2015.",
420
+ "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Doll\u00e1r.",
421
+ "venue": null,
422
+ "url": null
423
+ }
424
+ },
425
+ {
426
+ "17": {
427
+ "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion, 2022.",
428
+ "author": "Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, and Daniel Cohen-Or.",
429
+ "venue": null,
430
+ "url": null
431
+ }
432
+ },
433
+ {
434
+ "18": {
435
+ "title": "A causal view of compositional zero-shot recognition.",
436
+ "author": "Yuval Atzmon, Felix Kreuk, Uri Shalit, and Gal Chechik.",
437
+ "venue": "Advances in Neural Information Processing Systems, 33:1462\u20131473, 2020.",
438
+ "url": null
439
+ }
440
+ },
441
+ {
442
+ "19": {
443
+ "title": "Open world compositional zero-shot learning.",
444
+ "author": "Massimiliano Mancini, Muhammad Ferjad Naeem, Yongqin Xian, and Zeynep Akata.",
445
+ "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5222\u20135230, 2021.",
446
+ "url": null
447
+ }
448
+ },
449
+ {
450
+ "20": {
451
+ "title": "Learning to generalize to new compositions in image understanding.",
452
+ "author": "Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson, and Gal Chechik.",
453
+ "venue": "arXiv preprint arXiv:1608.07639, 2016.",
454
+ "url": null
455
+ }
456
+ },
457
+ {
458
+ "21": {
459
+ "title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning.",
460
+ "author": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick.",
461
+ "venue": "CoRR, abs/1612.06890, 2016.",
462
+ "url": null
463
+ }
464
+ },
465
+ {
466
+ "22": {
467
+ "title": "From red wine to red tomato: Composition with context.",
468
+ "author": "Ishan Misra, Abhinav Gupta, and Martial Hebert.",
469
+ "venue": "In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1160\u20131169, 2017.",
470
+ "url": null
471
+ }
472
+ },
473
+ {
474
+ "23": {
475
+ "title": "Dall-e 2 fails to reliably capture common syntactic processes.",
476
+ "author": "Evelina Leivada, Elliot Murphy, and Gary Marcus.",
477
+ "venue": "arXiv preprint arXiv:2210.12889, 2022.",
478
+ "url": null
479
+ }
480
+ },
481
+ {
482
+ "24": {
483
+ "title": "Harnessing the spatial-temporal attention of diffusion models for high-fidelity text-to-image synthesis.",
484
+ "author": "Qiucheng Wu, Yujian Liu, Handong Zhao, Trung Bui, Zhe Lin, Yang Zhang, and Shiyu Chang.",
485
+ "venue": "arXiv preprint arXiv:2304.03869, 2023.",
486
+ "url": null
487
+ }
488
+ }
489
+ ],
490
+ "url": "http://arxiv.org/html/2306.08877v3"
491
+ }
20240123/2306.14451v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2306.14624v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2306.17396v2.json ADDED
@@ -0,0 +1,609 @@
1
+ {
2
+ "title": "Koopman operator learning using invertible neural networks1footnote 11footnote 1This work is partially supported by the National Natural Science Foundation of China (NSFC) under grant number 12101407, the Chongqing Entrepreneurship and Innovation Program for Returned Overseas Scholars under grant number CX2023068, and the Fundamental Research Funds for the Central Universities under grant number 2023CDJXY-042.",
3
+ "abstract": "In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, particularly when little or no information is available about the underlying systems. Furthermore, current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results. To address these challenges, we propose the so-called FlowDMD, aka Flow-based Dynamic Mode Decomposition, that utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsically invertible characteristics of the CF-INN to learn the invariant subspaces of the Koopman operator and accurately reconstruct state variables. Numerical experiments demonstrate the superior performance of our algorithm compared to state-of-the-art methodologies.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Nonlinear dynamic systems are widely prevalent in both theory and engineering applications. Since the governing equations are generally unknown in many situations, it can be challenging to study the systems directly based on the first principles. Fortunately, the data about the systems of interest could be available by experiments or observations. Instead, one could seek to understand the behavior of the nonlinear system through the data-driven approaches Brunton et al. (2016 ###reference_1###); Long et al. (2018 ###reference_2###); Raissi (2018 ###reference_3###); Fuentes et al. (2021 ###reference_4###); Kim et al. (2021 ###reference_5###).\nThe Koopman operator Koopman (1931 ###reference_6###), which embeds the nonlinear system of interest into an infinite dimensional linear space by observable functions has attracted lots of attention. The Koopman operator acts on the infinite dimensional Hilbert space and aims to capture the full representations of the nonlinear systems. Dynamic mode decomposition (DMD) calculates the spectral decomposition of the Koopman operator numerically by extracting dynamic information from the collected data. Concretely, DMD devises a procedure to extract the spectral information directly from a data sequence without an explicit formulation of the Koopman operator, which is efficient for handling high dimensional data Schmid (2022 ###reference_7###). Variants of DMD are proposed to address challenges in different scenarios Tu et al. (2014 ###reference_8###); Jovanovi\u0107 et al. (2014 ###reference_9###); Takeishi et al. (2017 ###reference_10###); Arbabi and Mezic (2017 ###reference_11###); Le Clainche and Vega (2017 ###reference_12###); Erichson et al. (2019 ###reference_13###); Zhang et al. (2019 ###reference_14###); Colbrook et al. (2023 ###reference_15###).\nThe selection of observable functions plays an essential role in the DMD algorithm. Exact DMD Tu et al. (2014 ###reference_8###) exploits the identity mapping as the observables. This implies that one uses a linear system to approximate a nonlinear system with given data Kutz et al. (2016 ###reference_16###). This would yield inaccurate or even completely mistaken outcomes. Furthermore, the short-term prediction of Exact DMD might be acceptable for some cases, but the long-term prediction is probably unreliable. Typically, prior knowledge is required to select the observable functions that span the invariant subspace of the Koopman operator. However, the invariant subspace is not simply available. In order to overcome the limitations of the Exact DMD algorithm and capture the full feature of the nonlinear system, several data-driven selection strategies for observable functions have been proposed. Extended DMD (EDMD) Williams et al. (2015a ###reference_17###) lifts the state variables from the original space into a higher dimensional space using the dictionary functions. The accuracy and rate of convergence of EDMD depend on the choice of the dictionary functions. Therefore, EDMD needs as many dictionary functions as possible. This implies that the set of dictionary functions (nonlinear transformations) should be sufficiently complex, which results in enormous computational cost. Kernel based DMD (KDMD) Williams et al. (2015b ###reference_18###) differs from EDMD in that it utilizes the kernel trick to exploit the implicit expression of dictionary functions, whereas EDMD uses the explicit expression of dictionary functions. 
Nonetheless, both EDMD and KDMD are prone to overfitting Otto and Rowley (2019 ###reference_19###), which leads to large generalization error. How to efficiently choose the observable functions that span the invariant subspace of the Koopman operator becomes a significant challenge.\nIn contrast to EDMD and KDMD, observable functions can be represented by neural networks. Dictionary learning Li et al. (2017 ###reference_20###) couples the EDMD with a set of trainable dictionary functions, where dictionary functions are represented by a fully connected neural network (FNN) and an untrainable component. Fixing the partial dictionary function facilitates the reconstruction of the state variables, however, this setting implicitly assumes that linear term lies in the invariant subspace of the Koopman operator. Yeung et al. (2019 ###reference_21###) select low-dimensional dictionary functions more efficiently using deep neural networks.\nAutoencoder (AE) neural networks have been widely applied to learn the optimal observable functions and reconstruction functions in Koopman embedding Otto and Rowley (2019 ###reference_19###); Takeishi et al. (2017 ###reference_22###); Lusch et al. (2018 ###reference_23###); Azencot et al. (2020 ###reference_24###); Pan and Duraisamy (2020 ###reference_25###); Li and Jiang (2021 ###reference_26###). Concretely, the invariant subspace of the Koopman operator and reconstruction functions are represented by the encoder and decoder network in AE, respectively. Lusch et al. (2018 ###reference_23###) utilize neural networks to identify the Koopman eigenfunctions and introduced an auxiliary network to cope with the dynamic systems with continuous spectrum. Azencot et al. (2020 ###reference_24###) propose the Consistent Koopman AE model that combines the forward-backward DMD method Dawson et al. (2016 ###reference_27###) with the AE model. This approach extracts the latent representation of high-dimensional nonlinear data and eliminates the effect of noise in the data simultaneously. Pan and Duraisamy (2020 ###reference_25###) parameterize the structure of the transition matrix in linear space and construct an AE model to learn the residual of the DMD. Li and Jiang (2021 ###reference_26###) utilize deep learning and the Koopman operator to model the nonlinear multiscale dynamical problems, where coarse-scale data is used to learn the fine-scale information through a set of multiscale basis functions. Wang et al. (2023 ###reference_28###) propose Koopman Neural Forecaster combining AE with Koopman operator theory to predict the data with distributional shifts.\nRepresenting Koopman embedding by dictionary learning or AE networks has several drawbacks. Firstly, the reconstruction in dictionary learning partially fixes the dictionary functions, which leads to a low level of interpretability of the model. Secondly, the encoder and decoder in an AE model are trained simultaneously, but neither of them is invertible, cf. Alford-Lago et al. (2022 ###reference_29###) for more details. Moreover, due to the structural noninvertibility of the encoder and decoder, it typically requires a large amount of training data in order to obtain accurate representations, which makes the AE model prone to overfitting. Alford-Lago et al. (2022 ###reference_29###) analyze the property of both the encoder and decoder in AE and proposed the deep learning dynamic mode decomposition. Bevanda et al. 
(2022 ###reference_30###) constructed a conjugate map between the nonlinear system and its Jacobian linearization, which is learned by a diffeomorphic neural network.\nIn this paper, we develop a novel architecture called FlowDMD, aka Flow-based Dynamic Mode Decomposition, to learn the Koopman embedding. Specifically, we apply the coupling flow invertible neural networks to learn the observable functions and reconstruction functions. The invertibility of the learned observable functions makes our method more flexible than dictionary learning and AE learning. Our contributions are three-folds:\nThe state reconstruction is accomplished by the backward direction (inversion) of the CF-INN, which increases the interpretability of the neural network and alleviates the overfitting of AE.\nDue to the structural invertibility of CF-INN, the loss function for the state reconstruction is simplified compared with AE, which makes the network training easier.\nThe parameters to be optimized are reduced dramatically since the learned mappings and their inverse share the same parameters.\nThis paper is organized as follows. In Section 2 ###reference_###, we briefly review the Koopman operator theory and DMD. In Section 3 ###reference_###, we present the structure of CF-INN and introduce how to learn the invariant subspace of the Koopman operator and the reconstruction functions. In Section 4 ###reference_###, several numerical experiments are performed to demonstrate the performance of our method, and we summarize our work in Section 5 ###reference_###."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Koopman operator theory",
21
+ "text": "Consider the nonlinear autonomous system in discrete form,\nwhere represents the set of state space, is an unknown nonlinear map, and is the time index.\nFor the nonlinear system (1 ###reference_###), the Koopman operator is an infinite-dimensional linear operator that acts on all observable functions such that\nHere, and represents the infinite dimensional Hilbert space.\nThrough the observable functions, the nonlinear system (1 ###reference_###) could be transformed into an infinite-dimensional linear system using the Koopman operator,\nNote that the Koopman operator is linear, i.e., , with and . As is an infinite-dimensional operator, we denote its eigenfunctions and eigenvalues by such that , where , .\nThe Koopman eigenfunctions define a set of intrinsic measurement coordinates, then a vector-valued observable function could be written in terms of the Koopman eigenfunctions,\nwhere refers to the -th Koopman mode with respect to the Koopman eigenfunction . Combining (2 ###reference_###) and (3 ###reference_###), we have the decomposition of a vector-valued observable functions\nFurthermore, the decomposition could be rewritten as\nIn practice, we need a finite-dimensional representation of the infinite-dimensional Koopman operator. Denote the -dimensional invariant subspace of the Koopman operator by , i.e., . Let be one set of basis of , this induces a finite-dimensional linear operator Kutz et al. (2016 ###reference_16###), which projects the Koopman operator onto , i.e., for the -dimensional vector-valued observable functions , we have"
22
+ },
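To make the lifting above concrete, here is a minimal numpy sketch for a hypothetical quadratic map (the map and the coefficients a, b, c are illustrative assumptions, not the paper's example): with observables (x1, x2, x1^2), the lifted coordinates evolve exactly linearly under a finite-dimensional matrix K.

```python
import numpy as np

# Hypothetical discrete system (illustrative): x1 <- a*x1, x2 <- b*x2 + c*x1**2
a, b, c = 0.9, 0.5, 1.0
f = lambda x: np.array([a * x[0], b * x[1] + c * x[0] ** 2])

# Observable functions g(x) = (x1, x2, x1^2) span a Koopman-invariant subspace
g = lambda x: np.array([x[0], x[1], x[0] ** 2])

# Finite-dimensional Koopman matrix acting on the lifted coordinates
K = np.array([[a, 0.0, 0.0],
              [0.0, b,   c],
              [0.0, 0.0, a ** 2]])

x = np.array([1.2, -0.7])
for _ in range(5):
    lhs = g(f(x))                  # observe after one nonlinear step
    rhs = K @ g(x)                 # one linear step in observable space
    assert np.allclose(lhs, rhs)   # linearity holds exactly for this lifting
    x = f(x)
print("g evolves linearly under K")
```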
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Dynamic mode decomposition",
27
+ "text": "DMD approximates the spectral decomposition of the Koopman operator numerically. Given the state variables and a vector-valued observable function , then we get the sequence , where each is the observable snapshot of the -th time step. According to (4 ###reference_###), we have\nwhere is the matrix form of the finite-dimensional operator.\nFor the two data matrices, and , where and are both in , which satisfies . Therefore, can be represented by\nwhere denotes the Moore-Penrose inverse of .\nThe Exact DMD algorithm developed by Tu et al. (2014 ###reference_8###) computes dominant eigen-pairs (eigenvalue and eigenvector) of without the explicit formulation of . In Algorithm 1 ###reference_###, we present the DMD algorithm on the observable space, which is a general form of the Exact DMD algorithm. When using the identical mapping as the observable functions, i.e., , Algorithm 1 ###reference_### is identical to the Exact DMD algorithm."
28
+ },
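A minimal numpy sketch of the SVD-based computation behind an Algorithm-1-style DMD on observable snapshots (the function name and the rank-r interface are illustrative assumptions; with the identity observable it reduces to Exact DMD):

```python
import numpy as np

def dmd(G, r):
    """Rank-r DMD on observable snapshots G = [g(x_0), ..., g(x_T)], one column
    per time step. Returns the reduced operator, DMD eigenvalues, and modes.
    Assumes the retained eigenvalues are nonzero."""
    X, Y = G[:, :-1], G[:, 1:]                       # ideally Y = K X
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T
    A = Ur.conj().T @ Y @ Vr / Sr                    # r x r projection of K
    lam, W = np.linalg.eig(A)                        # DMD eigenvalues
    Phi = Y @ Vr / Sr @ W / lam                      # exact DMD modes
    return A, lam, Phi
```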
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "State reconstruction",
33
+ "text": "Koopman operator theory utilizes observable functions to transform the nonlinear system (1 ###reference_###) into a linear system while preserving the nonlinearity. Evolving the nonlinear system (1 ###reference_###) is computationally expensive or even impossible when is unknown, whereas evolving through the Koopman operator (2 ###reference_###) offers a promising and computationally efficient approach.\nFigure 1 ###reference_### illustrates the relation between the nonlinear evolution and the Koopman operator evolution where the system evolves linearly in the observation space . By computing the Koopman eigenvalues and modes, we can make predictions of the observable functions . We could reconstruct the state by the inverse of the observable functions provided that is invertible. The invertibility of observable functions is essential to ensure the reconstruction accuracy and the interpretability of the outcomes.\n###figure_1### Typical observable functions selection are performed manually based on prior knowledge.\nExact DMD takes the identical mapping,\nwhile the EDMD utilizes a set of pre-defined functions such as polynomials, Fourier modes, radial basis functions, and so forth Williams et al. (2015a ###reference_17###). However, these methods can be inaccurate and inefficient for Koopman embeddings learning. Deep neural networks, as efficient global nonlinear approximators, could be applied to represent the observable function and the reconstruction function .\nSeveral studies have demonstrated that the encoder and decoder networks in AE correspond to and , respectively Otto and Rowley (2019 ###reference_19###); Takeishi et al. (2017 ###reference_22###); Lusch et al. (2018 ###reference_23###); Azencot et al. (2020 ###reference_24###); Pan and Duraisamy (2020 ###reference_25###); Li and Jiang (2021 ###reference_26###).\nIn practical applications, it is not always guaranteed that is invertible. In the learning Koopman embedding via AE, the invertibility of is enforced through numerical constraints, i.e., the reconstruction error , which tends to result in overfitting and suboptimal performance Alford-Lago et al. (2022 ###reference_29###). Besides, the reconstruction error is trained simultaneously with the prediction error and the linearity error Lusch et al. (2018 ###reference_23###). The weights assigned to each loss term are hyperparameters that can be challenging to tune. In this paper, we propose a structurally invertible mapping learning framework, which eliminates the need for the reconstruction term in the loss function and yields more robust and accurate results. We present the details of our method in Section 3 ###reference_###."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "Learning Koopman embedding by invertible neural networks",
39
+ "text": "In this section, we first briefly review the AE neural network and demonstrate the limitation of this class of neural networks in the Koopman embedding learning. Then, we introduce our method to overcome this limitation.\nFor notational simplicity, we introduce some notations herein. For two mappings or functions and , their composite is denoted by . Given two vectors , their Hadamard product is the element-wise multiplication, represented by . Identically, their Hadamard division is defined as the element-wise division, i.e., ."
40
+ },
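A two-line demonstration of this notation, assuming nothing beyond numpy broadcasting:

```python
import numpy as np

f1 = lambda x: x + 1.0
f2 = lambda x: 2.0 * x
print((lambda x: f2(f1(x)))(3.0))   # composite (f2 . f1)(3) -> 8.0

u, v = np.array([1.0, 4.0]), np.array([2.0, 0.5])
print(u * v)                        # Hadamard product  -> [2. 2.]
print(u / v)                        # Hadamard division -> [0.5 8. ]
```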
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Drawback of AE in the Koopman embedding learning",
45
+ "text": "Most of the work use the AE neural networks as the backbone to learn the invariant subspace of the Koopman operator and reconstruct the state variables. AE as the frequently-used unsupervised learning structure of neural networks, consists of two parts, i.e., the encoder and the decoder .\nAE learns these two mappings (functions) and by optimizing\nHere denotes the distribution of the input data, describes the difference between and , and represents the expectation.\nLet be an arbitrary mapping, and it is said to be invertible if there exists a mapping such that\nwhere is the identity mapping. Then, is said to be the inverse mapping of .\nLet and be two mappings learned by AE such that . However, the reverse order of the mapping is not always a good approximation to the identity mapping, moreover, and are generally not invertible Alford-Lago et al. (2022 ###reference_29###). The main reason is that while AE strives to reach , it omits the additional constraint which requires the latent variable data to train. Unfortunately, the latent variables are not accessible, thus rendering it impossible for AE to satisfy and simultaneously.\nAE learns an identity mapping from a training data set , i.e., for any . For data out of the set , the mapping learned by AE may perform badly. In other words, AE may have poor generalization capability. Next, we use a preliminary experiment to demonstrate this limitation. The details of this numerical example are given in Section 4.1 ###reference_###.\nWe use the structure of AE defined in Li and Jiang (2021 ###reference_26###) and randomly generate 120 trajectories to train the AE, and the results are depicted by Figure 2 ###reference_###.\n###figure_2### ###figure_3### ###figure_4### ###figure_5### Figure 2 ###reference_### compares the input data points out of the distribution of the training data with the corresponding reconstructed data points using the trained AE model. Figure 2 ###reference_###(a) shows the density distribution of training data set , which provides a rough illustration of the data space . For the reconstruction test of AE, we generate three types of data, i.e., the sin-shaped scatters, the S-shaped scatters, and scatters from the standard 2-d normal distribution. We plot the corresponding input points (blue) and reconstructed data points (red) of the AE. The results shown in the next three subfigures illustrate that AE can reconstruct the input data points nearby the training data set very well. But for the data points far away from , AE performs badly. The same situation happens in learning the Koopman embedding. Specifically, in the training process of AE, one aims to find the Koopman invariant space by minimizing the error of the Koopman embedding learning and the reconstruction error. However, minimizing the error between latent variables and their corresponding reconstruction denoted by is intractable. This result is in poor stability and generalization capability."
46
+ },
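A minimal PyTorch sketch of the AE setup described above (layer sizes and names are illustrative assumptions): encoder and decoder are separate networks, so the round trip is enforced only through the data-driven loss, not by construction.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Encoder g_e and decoder g_d are independent networks; g_d(g_e(x)) ~ x
    holds only where the reconstruction loss was minimized, i.e., on-data."""
    def __init__(self, n=2, hidden=16):
        super().__init__()
        self.g_e = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(), nn.Linear(hidden, n))
        self.g_d = nn.Sequential(nn.Linear(n, hidden), nn.Tanh(), nn.Linear(hidden, n))

    def forward(self, x):
        return self.g_d(self.g_e(x))

ae = AE()
x = torch.randn(64, 2)
recon_loss = ((ae(x) - x) ** 2).mean()   # numerical constraint on g_d o g_e only
```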
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Structure of CF-INN",
51
+ "text": "We have shown that the mapping learned by AE performs poorly, which inspires us that invertibility can greatly reduce computational complexity and yields better generalization capability.\nNext, we introduce an invertible neural network to overcome the drawback of AE.\nLet denote the input-output mapping of the invertible neural network, where represents the parameters of the neural network. Let be the inverse mapping of which shares the same parameters with . Then we can reconstruct in the backward direction by .\nIn generative tasks of machine learning, the forward generating direction is called the flow direction and the backward direction is called the normalizing direction. Next, we introduce the concept of coupling flow (CF), which belongs to the category of invertible neural networks.\nLet and , we partition a vector as with and for .\nThe coupling flow is defined by\nwhere is an arbitrary mapping, and is a bijective mapping for any .\nThe CF given by Definition 3 ###reference_nition3### is invertible (bijective) if and only if is bijective Kobyzev et al. (2021 ###reference_33###).\nSpecifically, let and partition it in the same manner with , i.e., , where , and .\nThen we can obtain the inverse of (5 ###reference_###) given by,\nOne of the mostly used CF is the affine coupling flow (ACF) Dinh et al. (2014 ###reference_34###, 2017 ###reference_35###); Kingma and Dhariwal (2018 ###reference_36###), where is an element-wise invertible (bijective) function.\nGiven with and for , the affine coupling flow is defined by\nwhere are two arbitrary mappings.\nEquation (6 ###reference_###) is often referred to as the forward direction computations of ACF. Let , we can give its corresponding backward direction computations by\nwhere is partitioned in the same manner with . Additionally, the mappings and in Definition 4 ###reference_nition4### can be any nonlinear functions or neural networks such as FNN.\nLet be a sequence of ACFs and define , where represents the parameters of . Thus, results in an invertible neural network and is called by CF-INN in this paper. Moreover, the division index of the input vector is user-guided. In this paper, we set , where is the rounding function. Furthermore, in order to mix the information propagated in the network sufficiently, we could define a flipped ACF denoted by which is obtained by simply flipping two input parts of ACF. The forward direction computation of a flipped ACF is given by\nWe can compose an ACF block denoted by using a standard ACF and a flipped ACF. The structure of an ACF , a flipped ACF , and an ACF block are shown in Figure 3 ###reference_### , where the left and middle columns represent the forward and backward computations of an ACF and a flipped ACF, respectively. The right column shows the structure of an ACF block, which is a CF-INN of depth 2.\n###figure_6### When the depth of a CF-INN model, i.e., , is large, its training becomes challenging. The main curse is that the dividend term is too small in in the backward direction computations. This issue can be solved by replacing the ACF with a residual coupling flow (RCF). Similar idea has also been applied in the residual term of ResNet.\nGiven with and for , the residual coupling flow is defined by\nwhere is an arbitrary mapping.\nRCFs are simplifications of ACFs and similar with an ACF block, we can obtain a RCF block in composition of a RCF and a flipped RCF, which is a simplified ACF block."
52
+ },
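A compact PyTorch sketch of an ACF and a CF-INN built from alternating standard and flipped ACFs; hidden widths, names, and the even-dimension restriction are illustrative choices of this sketch, but forward and inverse share all parameters by construction, as in the text.

```python
import torch
import torch.nn as nn

def mlp(m, k, hidden=32):
    return nn.Sequential(nn.Linear(m, hidden), nn.Tanh(), nn.Linear(hidden, k))

class ACF(nn.Module):
    """Affine coupling flow: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1).
    inverse() reuses the same s and t networks. For simplicity this sketch
    assumes n is even, so d = n - d = n // 2 (states can be zero-padded)."""
    def __init__(self, n, hidden=32, flip=False):
        super().__init__()
        assert n % 2 == 0, "sketch assumes an even state dimension"
        self.d, self.flip = n // 2, flip
        self.s = mlp(self.d, n - self.d, hidden)
        self.t = mlp(self.d, n - self.d, hidden)

    def _split(self, z):
        z1, z2 = z[:, :self.d], z[:, self.d:]
        return (z2, z1) if self.flip else (z1, z2)   # flipped ACF swaps parts

    def _merge(self, a, b):
        return torch.cat((b, a) if self.flip else (a, b), dim=1)

    def forward(self, x):
        a, b = self._split(x)                        # a passes through unchanged
        return self._merge(a, b * torch.exp(self.s(a)) + self.t(a))

    def inverse(self, y):
        a, b = self._split(y)                        # backward direction
        return self._merge(a, (b - self.t(a)) * torch.exp(-self.s(a)))

class CFINN(nn.Module):
    """Composition of ACFs; alternating flips form ACF blocks."""
    def __init__(self, n, depth=2):
        super().__init__()
        self.flows = nn.ModuleList(ACF(n, flip=(i % 2 == 1)) for i in range(depth))

    def forward(self, x):                            # observable functions
        for f in self.flows:
            x = f(x)
        return x

    def inverse(self, y):                            # state reconstruction
        for f in reversed(self.flows):
            y = f.inverse(y)
        return y

net = CFINN(n=2, depth=2)                            # one ACF block
x = torch.randn(8, 2)
assert torch.allclose(net.inverse(net(x)), x, atol=1e-5)  # exact invertibility
```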
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Loss function of FlowDMD for Koopman embedding",
57
+ "text": "In this paper, we use the CF-INN to learn the Koopman invariant subspace and the reconstructions simultaneously, where the forward direction of CF-INN is represented by and its backward direction is represented by . Our method is called FlowDMD as it integrates CF-INN and DMD to compute the finite dimensional Koopman operator approximation and reconstruct system states.\nIt is noteworthy that the dimensions of input and output of CF-INN are inherently the same, which implies that the system states and the Koopman invariant subspace share the same dimension for FlowDMD. This does not generally hold true for the Koopman operator learning. However, this does not simply imply that FlowDMD fails for the general cases, vice versa, our approach is generally applicable. Next, we discuss why this holds true.\nConsider the general case that dimension of the states being and the dimension of the Koopman invariant subspace being .\nCase 1: , the output of CF-INN is an -dimensional vector functions which in turn gives an -dimensional Koopman invariant subspace that already contains the -dimensional Koopman invariant subspace. One can directly perform computations in this -dimensional subspace which gives more accurate results than computing in the -dimensional subspace without any extra cost. If strictly restricted to the -dimensional Koopman invariant subspace, one can first project from the -dimensional subspace to this subspace then perform computations and finally project back to the -dimensional subspace for the state reconstruction using the backward direction of CF-INN. This procedure applies at the beginning and end of Algorithm 1 in Figure 4 ###reference_### and no modifications of CF-INN is needed. Note that the Koopman operator theory transforms nonlinear systems to linear systems by lifting system dimension which usually gives .\nCase 2: , we can augment the states by appending at least zeros in total, either (both) preceding or (and) succeeding the original states, which can be represented by . Using this simple technique, the CF-INN can be directly applied without any adjustment or even modifications of the loss function. The reconstructed states has the same pattern with by prescription. Such methodology is analogous to the zero-padding technique commonly employed by image processing.\nThe loss function of FlowDMD has two components which consists of the DMD approximation error and the state reconstruction error. Firstly, the observable functions evolve linearly in the Koopman invariant subspace. Hence, the linearity constrained loss function that represents the DMD approximation error is given by\nwhere is the DMD approximation of the observable functions by using Algorithm 1 ###reference_###.\nSecondly, the inverse mapping of , i.e, the backward direction of CF-INN, , are used to reconstruct the states . Here, shares the same network structure and parameters with . Therefore, the computational cost is greatly reduced, compared with AE that another neural network is required to parameterize the inverse mapping of . The reconstruction loss due to the DMD approximation error is given by\nThe optimal parameters is determined by\nwhere is a user-guard hyperparameter.\n###figure_7### We summarize our FlowDMD framework for the Koopman embedding learning in Figure 4 ###reference_###. In other Koopman embedding learning frameworks Lusch et al. 
(2018 ###reference_23###); Li and Jiang (2021 ###reference_26###), the reconstruction error induced by the noninvertibility of neural networks denoted by also needs to be considered. However, in our model, this term vanishes due to the invertibility of CF-INN, resulting in a notably simplified loss function, which makes the network training easier compared with Lusch et al. (2018 ###reference_23###); Li and Jiang (2021 ###reference_26###)."
58
+ },
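A sketch of the two-term objective, assuming the weighting form L_recon + gamma * L_lin and substituting a one-step least-squares predictor for the full Algorithm-1 DMD inside the training loop (both are simplifying assumptions of this sketch); net is the CFINN module from the previous sketch.

```python
import torch

def flowdmd_loss(net, X, gamma=1.0):
    """X: (T, n) trajectory, one state per row. Both loss terms are computed
    from the same DMD-style linear approximation of the observable sequence."""
    G = net(X)                                   # observable snapshots, row-wise
    A = torch.linalg.pinv(G[:-1]) @ G[1:]        # least squares: G_k A ~ G_{k+1}
    G_hat = torch.vstack([G[:1], G[:-1] @ A])    # linear approximation of G
    loss_lin = ((G - G_hat) ** 2).mean()         # DMD / linearity error
    loss_rec = ((X - net.inverse(G_hat)) ** 2).mean()  # state reconstruction
    return loss_rec + gamma * loss_lin

# usage: loss = flowdmd_loss(net, torch.randn(20, 2)); loss.backward()
```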
59
+ {
60
+ "section_id": "4",
61
+ "parent_section_id": null,
62
+ "section_name": "Numerical experiments",
63
+ "text": "In this section, we use three numerical examples to demonstrate the efficiency of our method for learning the Koopman embedding and compare its performance with LIR-DMD Li and Jiang (2021 ###reference_26###), Exact DMD, and EDMD. We use the Python library FEniCS Logg et al. (2012 ###reference_38###) to compute the numerical solutions of PDEs, the Python library PyDMD Demo et al. (2018 ###reference_39###) to complete the calculations of Exact DMD, and the Python library PyTroch Paszke et al. (2019 ###reference_40###) to train the neural networks, respectively. Besides, we employ the publicly available implementation of EDMD 444https://github.com/MLDS-NUS/KoopmanDL ###reference_### , whose observable functions are . Here, represents radial basis functions (RBF) dictionary, which consists of thin-plate RBF functions with centers placed on the training data using k-means clustering. The Xavier normal initialization scheme Glorot and Bengio (2010 ###reference_41###) is utilized to initialize the weights of all neural networks, while the biases of all nodes are set to zero. All the networks are trained by the Adam optimizer Kingma and Ba (2015 ###reference_42###) with an initial learning rate of . In order to find the optimal parameters of the network, we use ReduceLROnPlateau Ruder (2016 ###reference_43###) to adjust the learning rate during the training process for all numerical examples.\nFor fairness, all the methods share the same training strategies. Denote as the \u201ctrue\u201d value of the states and as its reconstruction. We use three metrics to evaluate different methods synthetically, i.e., the relative error\nthe mean squared error\nand the total relative error"
64
+ },
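Hedged numpy forms of the three metrics (the exact normalizations are assumptions of this sketch, chosen to be consistent with the metric names used in the figures):

```python
import numpy as np

def rl2e(x_hat, x):
    """Relative L2 error at one time step."""
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def mse(x_hat, x):
    """Mean squared error at one time step."""
    return np.mean((x_hat - x) ** 2)

def trl2e(X_hat, X):
    """Total relative L2 error over a whole trajectory (arrays of snapshots)."""
    return np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```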
65
+ {
66
+ "section_id": "4.1",
67
+ "parent_section_id": "4",
68
+ "section_name": "Fixed-point attractor",
69
+ "text": "The fixed-point attractor example Lusch et al. (2018 ###reference_23###) is given by\nThe initial state is chosen randomly by , and .\nWe divide the data set into three parts where the ratio of training, validation, and test is , and , respectively. The number of neurons of each layer for the encoder network in LIR-DMD is and the number of neurons of decoder network is . This results in 345 trainable parameters for LIR-DMD. We use three ACFs for this problem. The mappings and are parameterized by FNN with three layers and the width of each layer is 1,8,2, respectively. This results in total 102 trainable parameters in FlowDMD. For EDMD, we choose 3 RBF functions as the RBF dictionary.\nWe randomly choose one example from the test set and plot its results in Figure 5 ###reference_###. Both Figure 5 ###reference_###(a) and Figure 5 ###reference_###(b) show that the reconstruction calculated by LIR-DMD and FlowDMD are better than that of the Exact DMD and EDMD. Furthermore, the difference of trajectories between LIR-DMD and FlowDMD is very small. Figure 5 ###reference_###(c) and Figure 5 ###reference_###(d) illustrate that the reconstruction error of FlowDMD is the smallest. In the first 30 time steps, LIR-DMD has a similar error to FlowDMD. The error of FlowDMD increases much more slowly than that of LIR-DMD for the following 30 time steps. We conclude that FlowDMD has better generalization ability than LIR-DMD.\n###figure_8### ###figure_9### ###figure_10### ###figure_11### We use the TRL2E to evaluate the reconstruction results of trajectories for Exact DMD, EDMD, FlowDMD and LIR-DMD on 40 randomly generated examples, respectively, and the corresponding results are depicted by Figure 6 ###reference_###. For FlowDMD, the reconstruction error is the lowest among almost all of the test examples, and the average total relative error is only . Compared with LIR-DMD, FlowDMD has better generalization ability and learning ability of the Koopman invariant subspace.\n###figure_12###"
70
+ },
71
+ {
72
+ "section_id": "4.2",
73
+ "parent_section_id": "4",
74
+ "section_name": "Burgers\u2019 equation",
75
+ "text": "Consider the 1-D Burgers\u2019 equation Raissi et al. (2019 ###reference_44###) given by\nwhere is a random variable that satisfies a uniform distribution . We use the finite element method with 30 equidistant grid points for the spatial discretization and the implicit Euler method with a step size of for temporal discretization. We generate 100 samples of for the initial state and compute the corresponding solutions. The examples are then divided into three parts, with proportions for training, for validation, and for test.\nWe test the performance of the Exact DMD, LIR-DMD, and FlowDMD. The rank of Exact DMD is 3 and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity. The structure of the encoder network for LIR-DMD is , and the decoder network is where the numbers in the brackets represent the width of each layer and we use RCFs to replace ACFs. This results in an invertible neural network of depth of 3 with one RCF block and one RCF. In each RCF, the width of each layer in FNN to parameterize the mapping is 15, 40, 15, which results in 7530 parameters in FlowDMD, whereas LIR-DMD has 10650 parameters. For EDMD, we choose 30 RBF functions as the RBF dictionary.\nFigure 7 ###reference_### depicts that FlowDMD has the smallest absolute reconstruction error and TRL2E. Figure 8 ###reference_###(a) and Figure 8 ###reference_###(b) show that the reconstruction error of Exact DMD, EDMD and LIR-DMD all increase with time, but FlowDMD maintains in a very low level.\n###figure_13### Figure 9 ###reference_### summarizes the TRL2E of reconstruction on all test examples and depicts that the FlowDMD has the smallest error on almost all test examples, where the average TRL2E of FlowDMD is . For some test examples, Exact DMD has the same TRL2E with FlowDMD, but for most test examples, FlowDMD performs better than Exact DMD. The TRL2E of LIR-DMD are bigger than FlowDMD over all the test examples and are slightly better than Exact DMD for some test examples.\n###figure_14### ###figure_15### ###figure_16###"
76
+ },
77
+ {
78
+ "section_id": "4.3",
79
+ "parent_section_id": "4",
80
+ "section_name": "Allen-Cahn equation",
81
+ "text": "Consider the 1-D Allen-Cahn equation Raissi et al. (2019 ###reference_44###) given by\nwhere , , and . We use the finite element method with 20 equidistant grid points for the spatial discretization and the implicit Euler with a step size of for the temporal discretization. Furthermore, we generate 100 samples of and use FEniCS to compute the numerical solutions.\nThe data set is segmented according to a ratio of , , , respectively to be used as the training set, the validation set, and the test set. The structure of the encoder network for LIR-DMD is and the decoder network is , where the numbers in the bracket indicate the width of each layer. This results in 6190 parameters for LIR-DMD. For FlowDMD, we also use RCFs to replace the ACFs. The neural network for FlowDMD consists of one RCF block and one RCF, which results in a network with depth . In each RCF, the width of each layer of the FNN to parameterize is 10, 20, 10. Finally, we obtain 2580 parameters for FlowDMD. The rank of Exact DMD is 3, and the same rank is also used in LIR-DMD and FlowDMD to embed the Koopman linearity. We choose 4 RBF functions as the RBF dictionary for EDMD. Results are reported in Figure 10 ###reference_###\u201312 ###reference_###.\n###figure_17### Figure 10 ###reference_### clearly shows that FlowDMD can reconstruct the original state most accurately. It reveals that the absolute error of exact DMD, EDMD and LIR-DMD increase over time, but FlowDMD can maintain the error in a low level all the time.\nIn addition, numerical results show that FlowDMD is more robust and generalizes better than Exact DMD, EDMD and LIR-DMD. Specifically, the error of the state reconstruction for four methods are given in Figure 11 ###reference_###. At the beginning time, FlowDMD has the biggest relative error because the norm of the true state variables is too small, which leads to a large relative error. As time evolves, the error of FlowDMD reaches the lowest level among all four methods.\n###figure_18### ###figure_19### In Figure 12 ###reference_###, we use the test data set to evaluate the generalization ability. The FlowDMD has almost the smallest TRL2E in most examples and the average of the total relative error is . It also shows that the fluctuation of error for FlowDMD is smaller than that of LIR-DMD, which demonstrates that FlowDMD has a better generalization ability and is more robust than LIR-DMD.\n###figure_20###"
82
+ },
83
+ {
84
+ "section_id": "4.4",
85
+ "parent_section_id": "4",
86
+ "section_name": "Sensitivity study",
87
+ "text": "Here, we study the sensitivity of FlowDMD systematically using the Allen-Cahn equation in Section 4.3 ###reference_### with respect to the following five aspects,\nThe neural network initialization.\nThe hyperparameter in the loss function.\nThe structure of neural networks.\nThe rank used by DMD in Algorithm 1 ###reference_###.\nThe division index in Definition 3 ###reference_nition3### for CF-INN."
88
+ },
89
+ {
90
+ "section_id": "4.4.1",
91
+ "parent_section_id": "4.4",
92
+ "section_name": "4.4.1 Sensitivity with respect to the neural network initialization",
93
+ "text": "In order to quantify the sensitivity of FlowDMD with respect to the initialization, we consider the same data set with Section 4.3 ###reference_###. Simultaneously, we fix the structure for FlowDMD to include only one RCF block and one RCF. Each RCF has a FNN to parameterize where the width of each layer is . Moreover, all FNNs use the rectified linear unit as activation functions. We use 15 random seeds to initialize models and train all the models with the same setting. In Figure 13 ###reference_###, we report the TRL2E between the reconstructed states and the \u201ctrue\u201d states. Evidently, the TRL2E remains stable for different initializations of neural networks, as demonstrated by the consistent results obtained within the following interval,\n###figure_21###"
94
+ },
95
+ {
96
+ "section_id": "4.4.2",
97
+ "parent_section_id": "4.4",
98
+ "section_name": "4.4.2 Sensitivity with respect to",
99
+ "text": "We utilize the same training set with Section 4.3 ###reference_### and select from the list . As shown in Table 1 ###reference_###, the different weights in the loss function have little influence on the final results. We observe that the error is minimized when , which suggests the use of an adaptive weight selection algorithm. The gradient flow provided by the neural tangent kernel Wang et al. (2022 ###reference_45###) can be employed to adjust the weight and accelerate the training process, and we leave this for our future work."
100
+ },
101
+ {
102
+ "section_id": "4.4.3",
103
+ "parent_section_id": "4.4",
104
+ "section_name": "4.4.3 Sensitivity with respect to the structure of neural networks",
105
+ "text": "We study the impact of the number of RCFs and the number of neurons in the FNN to parameterize the mapping on the performance of the FlowDMD. Specifically, the sensitivity of FlowDMD is being quantified with respect to two parameters: the number of RCFs () and the number of neurons () in the middle layer of the FNN. Here, the FNN used to parameterize is restricted to a three layer structure of . The results are summarized in Table 2 ###reference_###, which indicate that the reconstruction of FlowDMD has little to do with its structure while adding more neurons or more RCFs will not improve the final results to a big extent."
106
+ },
107
+ {
108
+ "section_id": "4.4.4",
109
+ "parent_section_id": "4.4",
110
+ "section_name": "4.4.4 Sensitivity with respect to the rank of DMD",
111
+ "text": "As we increase the rank used for the DMD computations in Algorithm 1 ###reference_###, we include more information, but the computation time also increases. In this study, we investigate how the DMD rank affects the model and its reconstruction. The results in Table 3 ###reference_### show that as we increase the rank , the corresponding error decreases rapidly."
112
+ },
113
+ {
114
+ "section_id": "4.4.5",
115
+ "parent_section_id": "4.4",
116
+ "section_name": "4.4.5 Sensitivity with respect to",
117
+ "text": "To investigate how the division index affects the performance of FlowDMD, we report the mean value of TRL2E for different values of in Figure 14 ###reference_###. As Figure 14 ###reference_### shows, the setting of minimizes the TRL2E and results in the largest TRL2E. However, the total relative error always remains in a low and stable level by FlowDMD.\n###figure_22###"
118
+ },
119
+ {
120
+ "section_id": "5",
121
+ "parent_section_id": null,
122
+ "section_name": "Conclusion",
123
+ "text": "In this paper, we introduced the FlowDMD framework to learn both the observable functions and reconstruction functions for the Koopman embedding, which is implemented through coupling flow invertible neural network. Our method gives more accurate approximations of the Koopman operator than state-of-the-art methods. Our FlowDMD is structurally invertible, which simplifies the loss function and improves the accuracy of the state reconstruction. Numerical experiments show that our approach is more accurate, efficient, and interpretable than the state-of-the-art methods."
124
+ }
125
+ ],
126
+ "appendix": [],
127
+ "tables": {
128
+ "1": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T1\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Total relative error for different .</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T1.5\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T1.5.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.2\">0.01</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.3\">0.1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.4\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.5\">10</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T1.5.1.6\">100</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.5.2.1\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.5.2.1.1\">TRL2E</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.5.2.1.2\">6.2e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.5.2.1.3\">6.8e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.5.2.1.4\">8.2e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.5.2.1.5\">3.2e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T1.5.2.1.6\">6.9e-02</td>\n</tr>\n</tbody>\n</table>\n</figure>",
130
+ "capture": "Table 1: Total relative error for different ."
131
+ },
132
+ "2": {
133
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Total relative error for different structures of networks in FlowDMD.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.1\">\n<th class=\"ltx_td ltx_nopad ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T2.3.1.1\"><svg height=\"24.77\" overflow=\"visible\" version=\"1.1\" width=\"34.56\"><g transform=\"translate(0,24.77) scale(1,-1)\"><path d=\"M 0,24.77 34.56,0\" stroke=\"black\" stroke-width=\"0.4\"></path><g class=\"ltx_svg_fog\" transform=\"translate(0,0)\"><g transform=\"translate(0,11.12) scale(1, -1)\"><foreignobject height=\"11.12\" overflow=\"visible\" width=\"17.28\">\n<span class=\"ltx_inline-block\" id=\"S4.T2.3.1.1.pic1.1.1\">\n<span class=\"ltx_inline-block ltx_align_left\" id=\"S4.T2.3.1.1.pic1.1.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.pic1.1.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g><g class=\"ltx_svg_fog\" transform=\"translate(17.31,11.12)\"><g transform=\"translate(0,13.65) scale(1, -1)\"><foreignobject height=\"13.65\" overflow=\"visible\" width=\"17.25\">\n<span class=\"ltx_inline-block\" id=\"S4.T2.3.1.1.pic1.2.1\">\n<span class=\"ltx_inline-block ltx_align_right\" id=\"S4.T2.3.1.1.pic1.2.1.1\">\n<span class=\"ltx_p\" id=\"S4.T2.3.1.1.pic1.2.1.1.1\"></span>\n</span>\n</span></foreignobject></g></g></g></svg></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.1.2\">2</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.1.3\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.1.4\">4</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.1.5\">8</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T2.3.1.6\">12</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_t\" id=\"S4.T2.3.2.1.1\">10</th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.1.2\">5.6e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.1.3\">7.7e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.1.4\">7.1e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.1.5\">8.4e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.2.1.6\">5.4e-02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.2\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.3.2.1\">20</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2.2\">6.4e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2.3\">7.6e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2.4\">6.4e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2.5\">7.9e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.2.6\">8.6e-02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.4.3\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.4.3.1\">40</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.4.3.2\">7.1e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.4.3.3\">8.4e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.4.3.4\">4.1e-02</td>\n<td 
class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.4.3.5\">4.8e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.4.3.6\">10.3e-02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.5.4\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row\" id=\"S4.T2.3.5.4.1\">80</th>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.4.2\">4.0e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.4.3\">7.6e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.4.4\">7.7e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.4.5\">8.0e-02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.5.4.6\">6.6e-02</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.6.5\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b\" id=\"S4.T2.3.6.5.1\">160</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.6.5.2\">8.9e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.6.5.3\">4.2e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.6.5.4\">8.0e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.6.5.5\">8.3e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b\" id=\"S4.T2.3.6.5.6\">6.9e-02</td>\n</tr>\n</tbody>\n</table>\n</figure>",
134
+ "capture": "Table 2: Total relative error for different structures of networks in FlowDMD."
135
+ },
136
+ "3": {
137
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Total relative error for different low rank dimension in Algorithm <a class=\"ltx_ref\" href=\"#alg1\" title=\"Algorithm 1 \u2023 2.2 Dynamic mode decomposition \u2023 2 Preliminaries \u2023 Koopman operator learning using invertible neural networks1footnote 11footnote 1This work is partially supported by the National Natural Science Foundation of China (NSFC) under grant number 12101407, the Chongqing Entrepreneurship and Innovation Program for Returned Overseas Scholars under grant number CX2023068, and the Fundamental Research Funds for the Central Universities under grant number 2023CDJXY-042.\"><span class=\"ltx_text ltx_ref_tag\">1</span></a>.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.3\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_t\" id=\"S4.T3.3.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.2\">1</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.3\">3</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.4\">5</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.5\">7</th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_t\" id=\"S4.T3.3.1.6\">9</th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.3.2.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_b ltx_border_t\" id=\"S4.T3.3.2.1.1\">TRL2E</th>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.3.2.1.2\">17.4e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.3.2.1.3\">6.8e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.3.2.1.4\">6.7e-02</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.3.2.1.5\">9e-03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_t\" id=\"S4.T3.3.2.1.6\">3e-03</td>\n</tr>\n</tbody>\n</table>\n</figure>",
138
+ "capture": "Table 3: Total relative error for different low rank dimension in Algorithm 1."
139
+ }
140
+ },
141
+ "image_paths": {
142
+ "1": {
143
+ "figure_path": "2306.17396v2_figure_1.png",
144
+ "caption": "Figure 1: Koopman operator and inverse of observable functions",
145
+ "url": "http://arxiv.org/html/2306.17396v2/x1.png"
146
+ },
147
+ "2(a)": {
148
+ "figure_path": "2306.17396v2_figure_2(a).png",
149
+ "caption": "(a)\nFigure 2: Generalization capability test of AE.\n(a) the training data distribution.\n(b) the s\u2062i\u2062n\u2062(x)\ud835\udc60\ud835\udc56\ud835\udc5b\ud835\udc65sin(x)italic_s italic_i italic_n ( italic_x ) test function.\n(c) S-shaped scatters test.\n(d) random scatters from 2-d standard normal distribution.",
150
+ "url": "http://arxiv.org/html/2306.17396v2/x2.png"
151
+ },
152
+ "2(b)": {
153
+ "figure_path": "2306.17396v2_figure_2(b).png",
154
+ "caption": "(b)\nFigure 2: Generalization capability test of AE.\n(a) the training data distribution.\n(b) the s\u2062i\u2062n\u2062(x)\ud835\udc60\ud835\udc56\ud835\udc5b\ud835\udc65sin(x)italic_s italic_i italic_n ( italic_x ) test function.\n(c) S-shaped scatters test.\n(d) random scatters from 2-d standard normal distribution.",
155
+ "url": "http://arxiv.org/html/2306.17396v2/x3.png"
156
+ },
157
+ "2(c)": {
158
+ "figure_path": "2306.17396v2_figure_2(c).png",
159
+ "caption": "(c)\nFigure 2: Generalization capability test of AE.\n(a) the training data distribution.\n(b) the s\u2062i\u2062n\u2062(x)\ud835\udc60\ud835\udc56\ud835\udc5b\ud835\udc65sin(x)italic_s italic_i italic_n ( italic_x ) test function.\n(c) S-shaped scatters test.\n(d) random scatters from 2-d standard normal distribution.",
160
+ "url": "http://arxiv.org/html/2306.17396v2/x4.png"
161
+ },
162
+ "2(d)": {
163
+ "figure_path": "2306.17396v2_figure_2(d).png",
164
+ "caption": "(d)\nFigure 2: Generalization capability test of AE.\n(a) the training data distribution.\n(b) the s\u2062i\u2062n\u2062(x)\ud835\udc60\ud835\udc56\ud835\udc5b\ud835\udc65sin(x)italic_s italic_i italic_n ( italic_x ) test function.\n(c) S-shaped scatters test.\n(d) random scatters from 2-d standard normal distribution.",
165
+ "url": "http://arxiv.org/html/2306.17396v2/x5.png"
166
+ },
167
+ "3": {
168
+ "figure_path": "2306.17396v2_figure_3.png",
169
+ "caption": "Figure 3: The forward and backward directions of ACF and flipped ACF, as well as the structure of an ACF block. Here, the \u201cId\u201d operation represents the identity mapping.",
170
+ "url": "http://arxiv.org/html/2306.17396v2/x6.png"
171
+ },
172
+ "4": {
173
+ "figure_path": "2306.17396v2_figure_4.png",
174
+ "caption": "Figure 4: The general framework of FlowDMD.",
175
+ "url": "http://arxiv.org/html/2306.17396v2/x7.png"
176
+ },
177
+ "5(a)": {
178
+ "figure_path": "2306.17396v2_figure_5(a).png",
179
+ "caption": "(a) The trajectories of x1subscript\ud835\udc651x_{1}italic_x start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT\nFigure 5: Comparison of four methods for Example 4.1. The total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error of the Exact DMD, EDMD, LIR-DMD, and FlowDMD are 0.2448, 0.08, 0.0111 and 0.0018, respectively.",
180
+ "url": "http://arxiv.org/html/2306.17396v2/x8.png"
181
+ },
182
+ "5(b)": {
183
+ "figure_path": "2306.17396v2_figure_5(b).png",
184
+ "caption": "(b) The trajectories of x2subscript\ud835\udc652x_{2}italic_x start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT\nFigure 5: Comparison of four methods for Example 4.1. The total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error of the Exact DMD, EDMD, LIR-DMD, and FlowDMD are 0.2448, 0.08, 0.0111 and 0.0018, respectively.",
185
+ "url": "http://arxiv.org/html/2306.17396v2/x9.png"
186
+ },
187
+ "5(c)": {
188
+ "figure_path": "2306.17396v2_figure_5(c).png",
189
+ "caption": "(c) Relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error\nFigure 5: Comparison of four methods for Example 4.1. The total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error of the Exact DMD, EDMD, LIR-DMD, and FlowDMD are 0.2448, 0.08, 0.0111 and 0.0018, respectively.",
190
+ "url": "http://arxiv.org/html/2306.17396v2/x10.png"
191
+ },
192
+ "5(d)": {
193
+ "figure_path": "2306.17396v2_figure_5(d).png",
194
+ "caption": "(d) Mean squared error\nFigure 5: Comparison of four methods for Example 4.1. The total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error of the Exact DMD, EDMD, LIR-DMD, and FlowDMD are 0.2448, 0.08, 0.0111 and 0.0018, respectively.",
195
+ "url": "http://arxiv.org/html/2306.17396v2/x11.png"
196
+ },
197
+ "6(a)": {
198
+ "figure_path": "2306.17396v2_figure_6(a).png",
199
+ "caption": "(a)\nFigure 6: Total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error in Example 4.1.",
200
+ "url": "http://arxiv.org/html/2306.17396v2/x12.png"
201
+ },
202
+ "7": {
203
+ "figure_path": "2306.17396v2_figure_7.png",
204
+ "caption": "Figure 7: Comparison of four methods in Example 4.2. The total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT errors for exact DMD, EDMD, LIR-DMD, and FlowDMD are 0.08,0.026, 0.119, and 0.017, respectively.",
205
+ "url": "http://arxiv.org/html/2306.17396v2/x13.png"
206
+ },
207
+ "8(a)": {
208
+ "figure_path": "2306.17396v2_figure_8(a).png",
209
+ "caption": "(a) Relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error\nFigure 8: Error of four methods for Example 4.2.",
210
+ "url": "http://arxiv.org/html/2306.17396v2/x14.png"
211
+ },
212
+ "8(b)": {
213
+ "figure_path": "2306.17396v2_figure_8(b).png",
214
+ "caption": "(b) Mean squared error\nFigure 8: Error of four methods for Example 4.2.",
215
+ "url": "http://arxiv.org/html/2306.17396v2/x15.png"
216
+ },
217
+ "9(a)": {
218
+ "figure_path": "2306.17396v2_figure_9(a).png",
219
+ "caption": "(a)\nFigure 9: Total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error in Example 4.2.",
220
+ "url": "http://arxiv.org/html/2306.17396v2/x16.png"
221
+ },
222
+ "10": {
223
+ "figure_path": "2306.17396v2_figure_10.png",
224
+ "caption": "Figure 10: Comparison of four methods in Example 4.3. The total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error for exact DMD, EDMD, LIR-DMD, and FlowDMD are 0.6129, 0.129, 0.4038, and 0.0725, respectively.",
225
+ "url": "http://arxiv.org/html/2306.17396v2/x17.png"
226
+ },
227
+ "11(a)": {
228
+ "figure_path": "2306.17396v2_figure_11(a).png",
229
+ "caption": "(a) Relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error\nFigure 11: Error of four methods for Example 4.3.",
230
+ "url": "http://arxiv.org/html/2306.17396v2/x18.png"
231
+ },
232
+ "11(b)": {
233
+ "figure_path": "2306.17396v2_figure_11(b).png",
234
+ "caption": "(b) Mean squared error\nFigure 11: Error of four methods for Example 4.3.",
235
+ "url": "http://arxiv.org/html/2306.17396v2/x19.png"
236
+ },
237
+ "12(a)": {
238
+ "figure_path": "2306.17396v2_figure_12(a).png",
239
+ "caption": "(a)\nFigure 12: Total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error in Example 4.3.",
240
+ "url": "http://arxiv.org/html/2306.17396v2/x20.png"
241
+ },
242
+ "13(a)": {
243
+ "figure_path": "2306.17396v2_figure_13(a).png",
244
+ "caption": "(a)\nFigure 13: Total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error for different neural network initialization.",
245
+ "url": "http://arxiv.org/html/2306.17396v2/x21.png"
246
+ },
247
+ "14(a)": {
248
+ "figure_path": "2306.17396v2_figure_14(a).png",
249
+ "caption": "(a)\nFigure 14: Total relative L2subscript\ud835\udc3f2L_{2}italic_L start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT error for different division indices.",
250
+ "url": "http://arxiv.org/html/2306.17396v2/x22.png"
251
+ }
252
+ },
253
+ "validation": true,
254
+ "references": [
255
+ {
256
+ "1": {
257
+ "title": "Discovering governing equations from data by sparse\nidentification of nonlinear dynamical systems,",
258
+ "author": "S. L. Brunton, J. L. Proctor,\nJ. N. Kutz,",
259
+ "venue": "Proceedings of the National Academy of Sciences\n113 (2016) 3932\u20133937.",
260
+ "url": null
261
+ }
262
+ },
263
+ {
264
+ "2": {
265
+ "title": "PDE-Net: Learning PDEs from data,",
266
+ "author": "Z. Long, Y. Lu, X. Ma,\nB. Dong,",
267
+ "venue": "in: Proceedings of the 35th International\nConference on Machine Learning, 2018, pp.\n3208\u20133216.",
268
+ "url": null
269
+ }
270
+ },
271
+ {
272
+ "3": {
273
+ "title": "Deep hidden physics models: Deep learning of\nnonlinear partial differential equations,",
274
+ "author": "M. Raissi,",
275
+ "venue": "Journal of Machine Learning Research\n19 (2018) 1\u201324.",
276
+ "url": null
277
+ }
278
+ },
279
+ {
280
+ "4": {
281
+ "title": "Equation discovery for nonlinear dynamical systems:\nA Bayesian viewpoint,",
282
+ "author": "R. Fuentes, R. Nayek,\nP. Gardner, N. Dervilis,\nT. Rogers, K. Worden,\nE. Cross,",
283
+ "venue": "Mechanical Systems and Signal Processing\n154 (2021) 107528.",
284
+ "url": null
285
+ }
286
+ },
287
+ {
288
+ "5": {
289
+ "title": "Integration of neural network-based symbolic\nregression in deep learning for scientific discovery,",
290
+ "author": "S. Kim, P. Y. Lu,\nS. Mukherjee, M. Gilbert,\nL. Jing, V. \u010ceperi\u0107,\nM. Solja\u010di\u0107,",
291
+ "venue": "IEEE Transactions on Neural Networks and Learning\nSystems 32 (2021)\n4166\u20134177.",
292
+ "url": null
293
+ }
294
+ },
295
+ {
296
+ "6": {
297
+ "title": "Hamiltonian systems and transformation in Hilbert\nspace,",
298
+ "author": "B. O. Koopman,",
299
+ "venue": "Proceedings of the National Academy of Sciences\n17 (1931) 315\u2013318.",
300
+ "url": null
301
+ }
302
+ },
303
+ {
304
+ "7": {
305
+ "title": "Dynamic mode decomposition and its variants,",
306
+ "author": "P. J. Schmid,",
307
+ "venue": "Annual Review of Fluid Mechanics\n54 (2022) 225\u2013254.",
308
+ "url": null
309
+ }
310
+ },
311
+ {
312
+ "8": {
313
+ "title": "On dynamic mode decomposition: Theory and\napplications,",
314
+ "author": "J. H. Tu, C. W. Rowley,\nD. M. Luchtenburg, S. L. Brunton,\nJ. N. Kutz,",
315
+ "venue": "Journal of Computational Dynamics\n1 (2014) 391\u2013421.",
316
+ "url": null
317
+ }
318
+ },
319
+ {
320
+ "9": {
321
+ "title": "Sparsity-promoting dynamic mode decomposition,",
322
+ "author": "M. R. Jovanovi\u0107, P. J. Schmid,\nJ. W. Nichols,",
323
+ "venue": "Physics of Fluids 26\n(2014) 024103.",
324
+ "url": null
325
+ }
326
+ },
327
+ {
328
+ "10": {
329
+ "title": "Bayesian dynamic mode decomposition,",
330
+ "author": "N. Takeishi, Y. Kawahara,\nY. Tabei, T. Yairi,",
331
+ "venue": "in: Proceedings of the Twenty-Sixth International\nJoint Conference on Artificial Intelligence, 2017, pp.\n2814\u20132821.",
332
+ "url": null
333
+ }
334
+ },
335
+ {
336
+ "11": {
337
+ "title": "Ergodic theory, dynamic mode decomposition, and\ncomputation of spectral properties of the Koopman operator,",
338
+ "author": "H. Arbabi, I. Mezic,",
339
+ "venue": "SIAM Journal on Applied Dynamical Systems\n16 (2017) 2096\u20132126.",
340
+ "url": null
341
+ }
342
+ },
343
+ {
344
+ "12": {
345
+ "title": "Higher order dynamic mode decomposition,",
346
+ "author": "S. Le Clainche, J. M. Vega,",
347
+ "venue": "SIAM Journal on Applied Dynamical Systems\n16 (2017) 882\u2013925.",
348
+ "url": null
349
+ }
350
+ },
351
+ {
352
+ "13": {
353
+ "title": "Randomized dynamic mode decomposition,",
354
+ "author": "N. B. Erichson, L. Mathelin,\nJ. N. Kutz, S. L. Brunton,",
355
+ "venue": "SIAM Journal on Applied Dynamical Systems\n18 (2019) 1867\u20131891.",
356
+ "url": null
357
+ }
358
+ },
359
+ {
360
+ "14": {
361
+ "title": "Online dynamic mode decomposition for time-varying\nsystems,",
362
+ "author": "H. Zhang, C. W. Rowley,\nE. A. Deem, L. N. Cattafesta,",
363
+ "venue": "SIAM Journal on Applied Dynamical Systems\n18 (2019) 1586\u20131609.",
364
+ "url": null
365
+ }
366
+ },
367
+ {
368
+ "15": {
369
+ "title": "Residual dynamic mode decomposition: robust and\nverified Koopmanism,",
370
+ "author": "M. J. Colbrook, L. J. Ayton,\nM. Sz\u0151ke,",
371
+ "venue": "Journal of Fluid Mechanics 955\n(2023) A21.",
372
+ "url": null
373
+ }
374
+ },
375
+ {
376
+ "16": {
377
+ "title": "A data\u2013driven approximation of the Koopman\noperator: Extending dynamic mode decomposition,",
378
+ "author": "M. O. Williams, I. G. Kevrekidis,\nC. W. Rowley,",
379
+ "venue": "Journal of Nonlinear Science 25\n(2015a) 1307\u20131346.",
380
+ "url": null
381
+ }
382
+ },
383
+ {
384
+ "17": {
385
+ "title": "A kernel-based method for data-driven Koopman\nspectral analysis,",
386
+ "author": "M. O. Williams, C. W. Rowley,\nI. G. Kevrekidis,",
387
+ "venue": "Journal of Computational Dynamics\n2 (2015b)\n247\u2013265.",
388
+ "url": null
389
+ }
390
+ },
391
+ {
392
+ "18": {
393
+ "title": "Linearly recurrent autoencoder networks for learning\ndynamics,",
394
+ "author": "S. E. Otto, C. W. Rowley,",
395
+ "venue": "SIAM Journal on Applied Dynamical Systems\n18 (2019) 558\u2013593.",
396
+ "url": null
397
+ }
398
+ },
399
+ {
400
+ "19": {
401
+ "title": "Extended dynamic mode decomposition with dictionary\nlearning: A data-driven adaptive spectral decomposition of the Koopman\noperator,",
402
+ "author": "Q. Li, F. Dietrich, E. M.\nBollt, I. G. Kevrekidis,",
403
+ "venue": "Chaos: An Interdisciplinary Journal of Nonlinear\nScience 27 (2017) 103111.",
404
+ "url": null
405
+ }
406
+ },
407
+ {
408
+ "20": {
409
+ "title": "Learning deep neural network representations for\nKoopman operators of nonlinear dynamical systems,",
410
+ "author": "E. Yeung, S. Kundu,\nN. Hodas,",
411
+ "venue": "in: American Control Conference,\nIEEE, 2019, pp.\n4832\u20134839.",
412
+ "url": null
413
+ }
414
+ },
415
+ {
416
+ "21": {
417
+ "title": "Learning Koopman invariant subspaces for dynamic\nmode decomposition,",
418
+ "author": "N. Takeishi, Y. Kawahara,\nT. Yairi,",
419
+ "venue": "in: Advances in Neural Information Processing\nSystems, volume 30, 2017, pp.\n1130\u20131140.",
420
+ "url": null
421
+ }
422
+ },
423
+ {
424
+ "22": {
425
+ "title": "Deep learning for universal linear embeddings of\nnonlinear dynamics,",
426
+ "author": "B. Lusch, J. N. Kutz,\nS. L. Brunton,",
427
+ "venue": "Nature Communications 9\n(2018) 1\u201310.",
428
+ "url": null
429
+ }
430
+ },
431
+ {
432
+ "23": {
433
+ "title": "Forecasting sequential data using consistent\nKoopman autoencoders,",
434
+ "author": "O. Azencot, N. B. Erichson,\nV. Lin, M. Mahoney,",
435
+ "venue": "in: International Conference on Machine\nLearning, 2020, pp. 475\u2013485.",
436
+ "url": null
437
+ }
438
+ },
439
+ {
440
+ "24": {
441
+ "title": "Physics-informed probabilistic learning of linear\nembeddings of nonlinear dynamics with guaranteed stability,",
442
+ "author": "S. Pan, K. Duraisamy,",
443
+ "venue": "SIAM Journal on Applied Dynamical Systems\n19 (2020) 480\u2013509.",
444
+ "url": null
445
+ }
446
+ },
447
+ {
448
+ "25": {
449
+ "title": "Deep learning nonlinear multiscale dynamic problems\nusing Koopman operator,",
450
+ "author": "M. Li, L. Jiang,",
451
+ "venue": "Journal of Computational Physics\n446 (2021) 110660.",
452
+ "url": null
453
+ }
454
+ },
455
+ {
456
+ "26": {
457
+ "title": "Characterizing and correcting for the effect of\nsensor noise in the dynamic mode decomposition,",
458
+ "author": "S. T. Dawson, M. S. Hemati,\nM. O. Williams, C. W. Rowley,",
459
+ "venue": "Experiments in Fluids 57\n(2016) 42.",
460
+ "url": null
461
+ }
462
+ },
463
+ {
464
+ "27": {
465
+ "title": "Koopman neural operator forecaster for time-series\nwith temporal distributional shifts,",
466
+ "author": "R. Wang, Y. Dong, S. O.\nArik, R. Yu,",
467
+ "venue": "in: The Eleventh International Conference on\nLearning Representations, 2023.",
468
+ "url": null
469
+ }
470
+ },
471
+ {
472
+ "28": {
473
+ "title": "Deep learning enhanced dynamic mode decomposition,",
474
+ "author": "D. J. Alford-Lago, C. W. Curtis,\nA. T. Ihler, O. Issan,",
475
+ "venue": "Chaos: An Interdisciplinary Journal of Nonlinear\nScience 32 (2022) 033116.",
476
+ "url": null
477
+ }
478
+ },
479
+ {
480
+ "29": {
481
+ "title": "Learning the Koopman eigendecomposition: A\ndiffeomorphic approach,",
482
+ "author": "P. Bevanda, J. Kirmayr,\nS. Sosnowski, S. Hirche,",
483
+ "venue": "in: American Control Conference,\nIEEE, 2022, pp.\n2736\u20132741.",
484
+ "url": null
485
+ }
486
+ },
487
+ {
488
+ "30": {
489
+ "title": "Prediction accuracy of dynamic mode decomposition,",
490
+ "author": "H. Lu, D. M. Tartakovsky,",
491
+ "venue": "SIAM Journal on Scientific Computing\n42 (2020) A1639\u2013A1662.",
492
+ "url": null
493
+ }
494
+ },
495
+ {
496
+ "31": {
497
+ "title": "Normalizing flows for probabilistic modeling and\ninference,",
498
+ "author": "G. Papamakarios, E. Nalisnick,\nD. J. Rezende, S. Mohamed,\nB. Lakshminarayanan,",
499
+ "venue": "Journal of Machine Learning Research\n22 (2021) 1\u201364.",
500
+ "url": null
501
+ }
502
+ },
503
+ {
504
+ "32": {
505
+ "title": "Normalizing flows: An introduction and review of\ncurrent methods,",
506
+ "author": "I. Kobyzev, S. J. Prince,\nM. A. Brubaker,",
507
+ "venue": "IEEE Transactions on Pattern Analysis and Machine\nIntelligence 43 (2021)\n3964\u20133979.",
508
+ "url": null
509
+ }
510
+ },
511
+ {
512
+ "33": {
513
+ "title": "Nice: Non-linear independent components estimation,",
514
+ "author": "L. Dinh, D. Krueger,\nY. Bengio,",
515
+ "venue": "arXiv preprint arXiv:1410.8516\n(2014).",
516
+ "url": null
517
+ }
518
+ },
519
+ {
520
+ "34": {
521
+ "title": "Density estimation using real NVP,",
522
+ "author": "L. Dinh, J. Sohl-Dickstein,\nS. Bengio,",
523
+ "venue": "in: International Conference on Learning\nRepresentations, 2017.",
524
+ "url": null
525
+ }
526
+ },
527
+ {
528
+ "35": {
529
+ "title": "Glow: Generative flow with invertible 1x1\nconvolutions,",
530
+ "author": "D. P. Kingma, P. Dhariwal,",
531
+ "venue": "in: Advances in Neural Information Processing\nSystems, volume 31, 2018, pp.\n10215\u201310224.",
532
+ "url": null
533
+ }
534
+ },
535
+ {
536
+ "36": {
537
+ "title": "The reversible residual network: Backpropagation\nwithout storing activations,",
538
+ "author": "A. N. Gomez, M. Ren,\nR. Urtasun, R. B. Grosse,",
539
+ "venue": "in: Advances in Neural Information Processing\nSystems, volume 30, 2017, pp.\n2214\u20132224.",
540
+ "url": null
541
+ }
542
+ },
543
+ {
544
+ "37": {
545
+ "title": "Dolfin: A C++/Python finite element library,",
546
+ "author": "A. Logg, G. N. Wells,\nJ. Hake,",
547
+ "venue": "in: Automated Solution of Differential Equations\nby the Finite Element Method: The FEniCS Book,\nSpringer, 2012, pp.\n173\u2013225.",
548
+ "url": null
549
+ }
550
+ },
551
+ {
552
+ "38": {
553
+ "title": "Pydmd: Python dynamic mode decomposition,",
554
+ "author": "N. Demo, M. Tezzele,\nG. Rozza,",
555
+ "venue": "Journal of Open Source Software\n3 (2018) 530.",
556
+ "url": null
557
+ }
558
+ },
559
+ {
560
+ "39": {
561
+ "title": "Pytorch: An imperative style, high-performance deep\nlearning library,",
562
+ "author": "A. Paszke, S. Gross,\nF. Massa, A. Lerer,\nJ. Bradbury, G. Chanan,\nT. Killeen, Z. Lin,\nN. Gimelshein, L. Antiga, et al.,",
563
+ "venue": "in: Advances in Neural Information Processing\nSystems, volume 32, 2019, pp.\n8024\u20138035.",
564
+ "url": null
565
+ }
566
+ },
567
+ {
568
+ "40": {
569
+ "title": "Understanding the difficulty of training deep\nfeedforward neural networks,",
570
+ "author": "X. Glorot, Y. Bengio,",
571
+ "venue": "in: Proceedings of the thirteenth International\nConference on Artificial Intelligence and Statistics, 2010,\npp. 249\u2013256.",
572
+ "url": null
573
+ }
574
+ },
575
+ {
576
+ "41": {
577
+ "title": "Adam: A method for stochastic optimization,",
578
+ "author": "D. P. Kingma, J. Ba,",
579
+ "venue": "in: International Conference on Learning\nRepresentations, 2015.",
580
+ "url": null
581
+ }
582
+ },
583
+ {
584
+ "42": {
585
+ "title": "An overview of gradient descent optimization\nalgorithms,",
586
+ "author": "S. Ruder,",
587
+ "venue": "CoRR abs/1609.04747\n(2016).",
588
+ "url": null
589
+ }
590
+ },
591
+ {
592
+ "43": {
593
+ "title": "Physics-informed neural networks: A deep learning\nframework for solving forward and inverse problems involving nonlinear\npartial differential equations,",
594
+ "author": "M. Raissi, P. Perdikaris,\nG. E. Karniadakis,",
595
+ "venue": "Journal of Computational physics\n378 (2019) 686\u2013707.",
596
+ "url": null
597
+ }
598
+ },
599
+ {
600
+ "44": {
601
+ "title": "When and why PINNs fail to train: A neural tangent\nkernel perspective,",
602
+ "author": "S. Wang, X. Yu,\nP. Perdikaris,",
603
+ "venue": "Journal of Computational Physics\n449 (2022) 110768.",
604
+ "url": null
605
+ }
606
+ }
607
+ ],
608
+ "url": "http://arxiv.org/html/2306.17396v2"
609
+ }
20240123/2307.02156v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2307.02764v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2308.10487v2.json ADDED
@@ -0,0 +1,658 @@
1
+ {
2
+ "title": "Deciphering Raw Data in Neuro-Symbolic Learning with Provable Guarantees",
3
+ "abstract": "Neuro-symbolic hybrid systems are promising for integrating machine learning and symbolic reasoning, where perception models are facilitated with information inferred from a symbolic knowledge base through logical reasoning. Despite empirical evidence showing the ability of hybrid systems to learn accurate perception models, the theoretical understanding of learnability is still lacking. Hence, it remains unclear why a hybrid system succeeds for a specific task and when it may fail given a different knowledge base. In this paper, we introduce a novel way of characterising supervision signals from a knowledge base, and establish a criterion for determining the knowledge\u2019s efficacy in facilitating successful learning. This, for the first time, allows us to address the two questions above by inspecting the knowledge base under investigation. Our analysis suggests that many knowledge bases satisfy the criterion, thus enabling effective learning, while some fail to satisfy it, indicating potential failures. Comprehensive experiments confirm the utility of our criterion on benchmark tasks.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Integrating machine learning and symbolic reasoning is a holy grail challenge in artificial intelligence.\nThis pursuit has attracted much attention over the past decades (Garcez, Broda, and Gabbay 2002 ###reference_19###; Getoor and Taskar 2007 ###reference_22###; Russell 2015 ###reference_48###; De Raedt et al. 2021 ###reference_9###; Hitzler and Sarker 2022 ###reference_25###), leading to fruitful developments such as probabilistic logic programing (De Raedt and Kimmig 2015 ###reference_11###) and statistical relational artificial intelligence (De Raedt et al. 2016 ###reference_10###).\nIn recent years, great progress has been made in neuro-symbolic methods, equipping symbolic systems with the ability to perceive sub-symbolic data.\nOne intriguing finding in these hybrid systems is that the perception performance of initialised classifiers can be significantly enhanced through abduction, a.k.a. abductive reasoning (Dai et al. 2019 ###reference_7###; Li et al. 2020 ###reference_35###).\nMoreover, it has been shown that accurate classifiers can be learned from scratch without relying on fully labelled data, given appropriate objectives and knowledge bases (Xu et al. 2018 ###reference_59###; Manhaeve et al. 2018 ###reference_40###; Tsamoura, Hospedales, and Michael 2021 ###reference_54###; Dai and Muggleton 2021 ###reference_6###).\nThese advances highlight the value of symbolic reasoning in many learning tasks.\nHowever, not all symbolic knowledge helps improve learning performance; there are failures in practice (Cai et al. 2021 ###reference_1###; Marconato et al. 2023a ###reference_41###, b ###reference_42###).\nMore importantly, the theoretical underpinnings that drive these empirical successes or failures remain elusive, which may hinder the adoption of neuro-symbolic methods in other applications.\nIn particular, it is unclear why such a hybrid learning system works for a specific task and when it may fail given a different knowledge base.\n###figure_1### In this paper, we address the above questions under the framework of abductive learning (ABL), an expressive and representative paradigm of hybrid systems (Zhou 2019 ###reference_63###; Zhou and Huang 2022 ###reference_65###).\nAs illustrated in Fig. 1 ###reference_###, a hybrid learning system usually involves the perception of raw inputs using a classifier, followed by an abductive reasoning module (Kakas, Kowalski, and Toni 1992 ###reference_32###) that aims to correct the wrongly perceived labels by minimising the inconsistency with a given symbolic knowledge base."
10
+ },
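A minimal sketch of the abductive-learning loop this Introduction describes: a classifier perceives pseudo-labels, and an abduction step swaps a knowledge-base-inconsistent label tuple for the closest consistent one before retraining. The `kb` rule, the Hamming-distance selection heuristic, and all names here are illustrative assumptions, not the paper's actual code.

```python
from itertools import product

# Hypothetical knowledge base: a label tuple is consistent iff kb(labels) is
# True. Here we use an illustrative "first two digits sum to the third" rule.
def kb(labels):
    return labels[0] + labels[1] == labels[2]

def abduce(pred_labels, num_classes=10):
    """Return a KB-consistent tuple with minimal Hamming distance to the
    perceived labels (brute force, purely for illustration)."""
    if kb(pred_labels):
        return pred_labels
    best, best_dist = None, float("inf")
    for cand in product(range(num_classes), repeat=len(pred_labels)):
        if kb(cand):
            dist = sum(p != c for p, c in zip(pred_labels, cand))
            if dist < best_dist:
                best, best_dist = cand, dist
    return best

# Perceived (2, 3, 9) violates the rule; abduction proposes (2, 3, 5), which
# would then serve as pseudo-supervision for the next round of training.
print(abduce((2, 3, 9)))
```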
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Preliminaries",
15
+ "text": ""
16
+ },
17
+ {
18
+ "section_id": "3",
19
+ "parent_section_id": null,
20
+ "section_name": "Theoretical Analysis",
21
+ "text": "Previous studies have showcased the practicality of neuro-symbolic learning systems\u2014the objective of minimal inconsistency empirically yields classifiers adept at accurately predicting labels.\nIn this section, we aim to disclose the ingredients of success from a theoretical perspective.\nWe begin by considering a simple yet representative task.\nThis motivates us to formulate a novel way of characterising supervision signals from a given knowledge base, and provide conditions under which the signals are sufficient for learning to succeed.\nSpecifically,\nwe show that the objective in Eq. 2 ###reference_### essentially addresses an upper bound of a location-based risk, whose minimisers are guaranteed to recover ground-truth labels when a rank criterion is satisfied."
22
+ },
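A toy numerical check of the rank criterion mentioned in this section: construct the matrix that maps each ground-truth label to the distribution over candidate supervision signals a knowledge base induces, then test for full row rank. The XOR-style knowledge base and the uniform-candidate assumption are illustrative choices, not the paper's construction.

```python
import numpy as np
from itertools import product

# Toy setting: 2 classes, pairs of instances, KB = "the two labels must differ".
num_classes, seq_len = 2, 2
kb = lambda ys: ys[0] != ys[1]

# All label tuples the knowledge base admits.
tuples = [t for t in product(range(num_classes), repeat=seq_len) if kb(t)]

# M[y, j] = P(candidate tuple j | position 0 carries ground-truth label y),
# taking KB-consistent tuples as equally likely (a uniform assumption).
M = np.zeros((num_classes, len(tuples)))
for y in range(num_classes):
    consistent = [j for j, t in enumerate(tuples) if t[0] == y]
    for j in consistent:
        M[y, j] = 1.0 / len(consistent)

# Full row rank indicates the induced signals suffice to recover ground truth;
# a rank-deficient M would flag a knowledge base that cannot help learning.
print(np.linalg.matrix_rank(M) == num_classes)  # True for this KB
```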
23
+ {
24
+ "section_id": "4",
25
+ "parent_section_id": null,
26
+ "section_name": "Experiments",
27
+ "text": "In this section, we conduct comprehensive experiments to validate the utility of the proposed criterion on various tasks.\nThe code is available for download.222https://github.com/AbductiveLearning/ABL-TL ###reference_L###"
28
+ },
29
+ {
30
+ "section_id": "5",
31
+ "parent_section_id": null,
32
+ "section_name": "Related Work",
33
+ "text": ""
34
+ },
35
+ {
36
+ "section_id": "6",
37
+ "parent_section_id": null,
38
+ "section_name": "Conclusion",
39
+ "text": "In this work, we introduce a novel characterisation of the supervision signals from a given knowledge base and establish a rank criterion capable of indicating the practical efficacy of a given knowledge base in improving learning performance.\nBoth theoretical and empirical results shed light on the success of hybrid learning systems while pinpointing potential failures when the supervision signals from a symbolic knowledge base are insufficient to ensure effective learning.\nFuture work includes the detailed analysis of mutual promotion between learning and reasoning, the incorporation of other machine learning models, and the exploitation of abundant labelled data and inaccurate knowledge bases."
40
+ }
41
+ ],
42
+ "appendix": [
43
+ {
44
+ "section_id": "Appendix 1",
45
+ "parent_section_id": null,
46
+ "section_name": "Appendix A Pseudocode for Neuro-Symbolic Learning",
47
+ "text": ""
48
+ },
49
+ {
50
+ "section_id": "Appendix 2",
51
+ "parent_section_id": null,
52
+ "section_name": "Appendix B Proofs",
53
+ "text": "Recall that the objective of minimal inconsistency is expressed as follows.\nwhere denotes the labels abduced from the candidate set .\nAlthough various heuristics have been proposed to select the most likely labels from the candidate set, we note that these heuristics behave like random guessing in the early stages of training when the classifier is randomly initialised.\nTherefore, the case that the abduced labels are randomly chosen from is especially significant.\nIn this case, the objective of minimal inconsistency is equivalent to the following in expectation:\nwhere since the concept space contains only one target concept and the uniform assumption holds.\nFix an example with and define as the expectation with respect to . Then, we obtain\nwhere the last equality holds because , i.e., any with belongs to the target concept .\nDenote the candidate set of abduced labels as , where . Then, we have\nwhere denotes the estimation of by the classifier .\nBy Jensen\u2019s inequality, we have\nSince for any , we obtain\nwhere is the indicator function.\nBy the uniform assumption, i.e., , , we obtain , , and , .\nMeanwhile, we have\nwhere the inequality holds because and .\nHence, combining this inequality with Eq. 12 ###reference_### yields\nFurther, by combining this inequality with Eq. 11 ###reference_###, we obtain\nand then\nwhere the penultimate equality holds since we have defined as an estimation of the conditional probability in Eq. 3 ###reference_### and .\nMeanwhile, according to the generation process of the instance-location pairs described in Section 3 ###reference_###, we have\nFinally, we conclude by taking expectation over in Eq. 10 ###reference_### as follows.\n\u220e\nRecall that the objective of minimal inconsistency is expressed as follows.\nwhere denotes the labels abduced from the candidate set .\nAlthough various heuristics have been proposed to select the most likely labels from the candidate set, we note that these heuristics behave like random guessing in the early stages of training when the classifier is randomly initialised.\nTherefore, the case that the abduced labels are randomly chosen from is especially significant.\nIn this case, the objective of minimal inconsistency is equivalent to the following in expectation:\nFix an example and a target concept such that .\nDenote the candidate set of abduced labels as , where .\nThen, the objective becomes\nwhere denotes the estimation of by the classifier .\nBy Jensen\u2019s inequality, we have\nSince for any , we obtain\nwhere is the indicator function.\nBy marginalising over , we obtain\n, .\nMeanwhile, we have\nwhere the inequality holds because , , and .\nHence, combining this inequality with Eq. 23 ###reference_### yields\nFurther, by combining this inequality with Eq. 22 ###reference_###, we obtain\nwhere represents a synthetic label with value , and denotes .\nThen, we have\nwhere the penultimate equality holds since we have defined as an estimation of the conditional probability in Eq. 5 ###reference_### and .\nMeanwhile, according to the generation process of the instance-target-location triplets described in Section 3 ###reference_###, we have\nFinally, we conclude by taking expectation over and in Eq. 
21 ###reference_### as follows.\n\u220e\nBy definition, we have\nNote that when is minimised, is also minimised for any with .\nFor cross-entropy loss, we have the following optimisation problem:\nBy using the Lagrange multiplier method, we have\nBy setting the derivative to , we obtain .\nMeanwhile, since , we have . Then, we obtain for any with , i.e., .\nSimilarly, by definition, we have\nwhere , i.e., .\nThen, the minimiser of is , i.e., .\nTherefore, we obtain\nOn the other hand, by the definition of , the minimiser of , denoted by , satisfies the following:\nBy combining Eq. 34 ###reference_### and Eq. 35 ###reference_###, we have\nTherefore, if has full row rank, we obtain , which implies .\n\u220e"
54
+ },
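The Lagrange-multiplier step in the proof above lost its formulas during extraction; a standard reconstruction of that argument, in generic notation (q for the fixed target distribution, p for the classifier output, which may differ from the paper's symbols), is:

```latex
% Minimise the cross-entropy subject to p being a distribution:
%   min_p  -\sum_y q(y) \log p(y)   s.t.   \sum_y p(y) = 1.
\mathcal{L}(p,\lambda) = -\sum_{y} q(y)\log p(y) + \lambda\Big(\sum_{y} p(y)-1\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p(y)} = -\frac{q(y)}{p(y)} + \lambda = 0
\;\Rightarrow\; p(y) = \frac{q(y)}{\lambda};
\quad
\sum_{y} p(y) = 1 \;\Rightarrow\; \lambda = \sum_{y} q(y) = 1,
\;\text{hence}\; p(y) = q(y).
```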
55
+ {
56
+ "section_id": "Appendix 3",
57
+ "parent_section_id": null,
58
+ "section_name": "Appendix C Experimental Settings",
59
+ "text": ""
60
+ },
61
+ {
62
+ "section_id": "Appendix 4",
63
+ "parent_section_id": null,
64
+ "section_name": "Appendix D Further Results on Benchmark Tasks",
65
+ "text": "The observations reported in the main text are further corroborated by Table 2 ###reference_###, which showcases the test performance of ResNet-18 (He et al. 2016 ###reference_24###) produced by different methods on different datasets and tasks."
66
+ },
67
+ {
68
+ "section_id": "Appendix 5",
69
+ "parent_section_id": null,
70
+ "section_name": "Appendix E Examples of Random Knowledge Bases",
71
+ "text": "positive([Y0,Y1,Y2]) (Y0Y1Y2)(Y0Y1Y2)(Y0Y1Y2).\nnegative([Y0,Y1,Y2]) positive([Y0,Y1,Y2]).\npositive([Y0,Y1,Y2]) (Y0Y1Y2)(Y0Y1Y2)(Y0Y1Y2)(Y0Y1Y2).\nnegative([Y0,Y1,Y2]) positive([Y0,Y1,Y2]).\npositive([Y0,Y1,Y2]) (Y0Y1Y2)(Y0Y1Y2).\nnegative([Y0,Y1,Y2]) positive([Y0,Y1,Y2]).\npositive([Y0,Y1,Y2]) (Y0Y1Y2)(Y0Y1Y2).\nnegative([Y0,Y1,Y2]) positive([Y0,Y1,Y2])."
72
+ }
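The logical connectives in the rules above were lost in extraction: each positive/1 definition is a disjunction of conjunctions over Y0, Y1, Y2 whose negation pattern is no longer recoverable. A small sketch, with a made-up DNF standing in for one such random knowledge base, of the abduction candidate sets it induces:

```python
from itertools import product

# Illustrative stand-in for a random KB (the actual literals were lost):
# positive([Y0,Y1,Y2]) <- (Y0 & Y1 & ~Y2) | (~Y0 & Y1 & Y2) | (Y0 & ~Y1 & Y2)
def positive(y):
    y0, y1, y2 = y
    return (y0 and y1 and not y2) or (not y0 and y1 and y2) or (y0 and not y1 and y2)

def candidates(target):
    """All boolean label tuples consistent with the observed target, i.e. the
    abduction candidate set for a 'positive' or 'negative' example."""
    want = (target == "positive")
    return [y for y in product([0, 1], repeat=3) if bool(positive(y)) == want]

print(candidates("positive"))  # 3 tuples under this illustrative DNF
print(candidates("negative"))  # the remaining 5 tuples
```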
73
+ ],
74
+ "tables": {
75
+ "1": {
76
+ "table_html": "<figure class=\"ltx_table\" id=\"S3.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S3.T1.124\" style=\"width:505.9pt;height:492.8pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(58.9pt,-57.4pt) scale(1.30365358729971,1.30365358729971) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S3.T1.124.124\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S3.T1.124.124.125.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.124.124.125.1.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.1.1\">Task</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"S3.T1.124.124.125.1.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S3.T1.124.124.125.1.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.3.1\">MNIST</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S3.T1.124.124.125.1.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.4.1\">EMNIST</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S3.T1.124.124.125.1.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.5.1\">USPS</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"S3.T1.124.124.125.1.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.6.1\">Kuzushiji</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"S3.T1.124.124.125.1.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.124.124.125.1.7.1\">Fashion</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S3.T1.7.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.1.1.1.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.1.1.1.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.2.2.2.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.2.2.2.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.3.3.3.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.4.4.4.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.5.5.5.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.6.6.6.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"S3.T1.7.7.7.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.13.13.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.8.8.8.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.8.8.8.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.9.9.9.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.10.10.10.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.11.11.11.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.12.12.12.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.13.13.13.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.19.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.14.14.14.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.14.14.14.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.15.15.15.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.16.16.16.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.17.17.17.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.18.18.18.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.19.19.19.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.25.25.25\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.20.20.20.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.20.20.20.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.21.21.21.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.22.22.22.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.23.23.23.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.24.24.24.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.25.25.25.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.31.31.31\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.26.26.26.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.26.26.26.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.27.27.27.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.28.28.28.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.29.29.29.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.30.30.30.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.31.31.31.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.38.38.38\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.32.32.32.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.32.32.32.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.33.33.33.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.33.33.33.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.34.34.34.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.35.35.35.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.36.36.36.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.37.37.37.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.38.38.38.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.44.44.44\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.39.39.39.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.39.39.39.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.40.40.40.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.41.41.41.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.42.42.42.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.43.43.43.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.44.44.44.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.50.50.50\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.45.45.45.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.45.45.45.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.46.46.46.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.47.47.47.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.48.48.48.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.49.49.49.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.50.50.50.6\" 
style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.56.56.56\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.51.51.51.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.51.51.51.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.52.52.52.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.53.53.53.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.54.54.54.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.55.55.55.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.56.56.56.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.62.62.62\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.57.57.57.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.57.57.57.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.58.58.58.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.59.59.59.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.60.60.60.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.61.61.61.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.62.62.62.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.69.69.69\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.63.63.63.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.63.63.63.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.64.64.64.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.64.64.64.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.65.65.65.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.66.66.66.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.67.67.67.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.68.68.68.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.69.69.69.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.75.75.75\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.70.70.70.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" 
id=\"S3.T1.70.70.70.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.71.71.71.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.72.72.72.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.73.73.73.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.74.74.74.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.75.75.75.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.81.81.81\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.76.76.76.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.76.76.76.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.77.77.77.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.78.78.78.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.79.79.79.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.80.80.80.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.81.81.81.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.87.87.87\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.82.82.82.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.82.82.82.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.83.83.83.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.84.84.84.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.85.85.85.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.86.86.86.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.87.87.87.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.93.93.93\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.88.88.88.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.88.88.88.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.89.89.89.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.90.90.90.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.91.91.91.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.92.92.92.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.93.93.93.6\" 
style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.100.100.100\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"S3.T1.94.94.94.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"S3.T1.94.94.94.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"S3.T1.95.95.95.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.95.95.95.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.96.96.96.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.97.97.97.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.98.98.98.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S3.T1.99.99.99.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S3.T1.100.100.100.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.106.106.106\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.101.101.101.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.101.101.101.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.102.102.102.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.103.103.103.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.104.104.104.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.105.105.105.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.106.106.106.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.112.112.112\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.107.107.107.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.107.107.107.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.108.108.108.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.109.109.109.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.110.110.110.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.111.111.111.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.112.112.112.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.118.118.118\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"S3.T1.113.113.113.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span 
class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.113.113.113.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.114.114.114.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.115.115.115.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.116.116.116.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S3.T1.117.117.117.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S3.T1.118.118.118.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S3.T1.124.124.124\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"S3.T1.119.119.119.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"S3.T1.119.119.119.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.120.120.120.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.121.121.121.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.122.122.122.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S3.T1.123.123.123.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S3.T1.124.124.124.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Test accuracy (%) of each method using MLP on benchmark datasets and tasks.</figcaption>\n</figure>",
77
+ "capture": "Table 1: Test accuracy (%) of each method using MLP on benchmark datasets and tasks."
78
+ },
79
+ "2": {
80
+ "table_html": "<figure class=\"ltx_table\" id=\"A4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A4.T2.124\" style=\"width:505.9pt;height:486.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(56.4pt,-54.3pt) scale(1.28706995755978,1.28706995755978) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"A4.T2.124.124\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A4.T2.124.124.125.1\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"A4.T2.124.124.125.1.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.1.1\">Task</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_th_row ltx_border_r ltx_border_tt\" id=\"A4.T2.124.124.125.1.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.2.1\">Method</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A4.T2.124.124.125.1.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.3.1\">MNIST</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A4.T2.124.124.125.1.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.4.1\">EMNIST</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A4.T2.124.124.125.1.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.5.1\">USPS</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_tt\" id=\"A4.T2.124.124.125.1.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.6.1\">Kuzushiji</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"A4.T2.124.124.125.1.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.124.124.125.1.7.1\">Fashion</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A4.T2.7.7.7\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.1.1.1.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.1.1.1.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.2.2.2.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.2.2.2.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.3.3.3.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.4.4.4.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.5.5.5.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.6.6.6.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_t\" id=\"A4.T2.7.7.7.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.13.13.13\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.8.8.8.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.8.8.8.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.9.9.9.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.10.10.10.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.11.11.11.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.12.12.12.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.13.13.13.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.19.19.19\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.14.14.14.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.14.14.14.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.15.15.15.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.16.16.16.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.17.17.17.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.18.18.18.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.19.19.19.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.25.25.25\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.20.20.20.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.20.20.20.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.21.21.21.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.22.22.22.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.23.23.23.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.24.24.24.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.25.25.25.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.31.31.31\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.26.26.26.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.26.26.26.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.27.27.27.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.28.28.28.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.29.29.29.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.30.30.30.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.31.31.31.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.38.38.38\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.32.32.32.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.32.32.32.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.33.33.33.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.33.33.33.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.34.34.34.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.35.35.35.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.36.36.36.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.37.37.37.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.38.38.38.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.44.44.44\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.39.39.39.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.39.39.39.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.40.40.40.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.41.41.41.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.42.42.42.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.43.43.43.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.44.44.44.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.50.50.50\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.45.45.45.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.45.45.45.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.46.46.46.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.47.47.47.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.48.48.48.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.49.49.49.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.50.50.50.6\" 
style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.56.56.56\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.51.51.51.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.51.51.51.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.52.52.52.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.53.53.53.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.54.54.54.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.55.55.55.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.56.56.56.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.62.62.62\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.57.57.57.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.57.57.57.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.58.58.58.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.59.59.59.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.60.60.60.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.61.61.61.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.62.62.62.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.69.69.69\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.63.63.63.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.63.63.63.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.64.64.64.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.64.64.64.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.65.65.65.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.66.66.66.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.67.67.67.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.68.68.68.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.69.69.69.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.75.75.75\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.70.70.70.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" 
id=\"A4.T2.70.70.70.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.71.71.71.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.72.72.72.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.73.73.73.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.74.74.74.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.75.75.75.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.81.81.81\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.76.76.76.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.76.76.76.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.77.77.77.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.78.78.78.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.79.79.79.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.80.80.80.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.81.81.81.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.87.87.87\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.82.82.82.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.82.82.82.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.83.83.83.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.84.84.84.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.85.85.85.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.86.86.86.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.87.87.87.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.93.93.93\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.88.88.88.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.88.88.88.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.89.89.89.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.90.90.90.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.91.91.91.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.92.92.92.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.93.93.93.6\" 
style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.100.100.100\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r ltx_border_t\" id=\"A4.T2.94.94.94.1\" rowspan=\"5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_font_smallcaps\" id=\"A4.T2.94.94.94.1.1\"></span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r ltx_border_t\" id=\"A4.T2.95.95.95.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.95.95.95.2.1\">Rand</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.96.96.96.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.97.97.97.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.98.98.98.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A4.T2.99.99.99.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A4.T2.100.100.100.7\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.106.106.106\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.101.101.101.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.101.101.101.1.1\">MaxP</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.102.102.102.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.103.103.103.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.104.104.104.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.105.105.105.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.106.106.106.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.112.112.112\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.107.107.107.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.107.107.107.1.1\">MinD</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.108.108.108.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.109.109.109.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.110.110.110.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.111.111.111.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.112.112.112.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.118.118.118\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_r\" id=\"A4.T2.113.113.113.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span 
class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.113.113.113.1.1\">Avg</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.114.114.114.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.115.115.115.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.116.116.116.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A4.T2.117.117.117.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A4.T2.118.118.118.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A4.T2.124.124.124\">\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_row ltx_border_bb ltx_border_r\" id=\"A4.T2.119.119.119.1\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"><span class=\"ltx_text ltx_markedasmath ltx_font_smallcaps\" id=\"A4.T2.119.119.119.1.1\">TL</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A4.T2.120.120.120.2\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A4.T2.121.121.121.3\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A4.T2.122.122.122.4\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A4.T2.123.123.123.5\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A4.T2.124.124.124.6\" style=\"padding-left:10.2pt;padding-right:10.2pt;\"></td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Test accuracy (%) of each method using ResNet-18 <cite class=\"ltx_cite ltx_citemacro_citep\">(He et\u00a0al. <a class=\"ltx_ref\" href=\"#bib.bib24\" title=\"\">2016</a>)</cite> on benchmark datasets and tasks.</figcaption>\n</figure>",
81
+ "capture": "Table 2: Test accuracy (%) of each method using ResNet-18 (He et\u00a0al. 2016) on benchmark datasets and tasks."
82
+ }
83
+ },
84
+ "image_paths": {
85
+ "1": {
86
+ "figure_path": "2308.10487v2_figure_1.png",
87
+ "caption": "Figure 1: \nAn illustration of the hybrid learning framework.\nFirst, raw data such as handwritten equations are perceived by a classifier.\nNext, the perceived labels are revised via logical abduction under the principle of minimal inconsistency.\nFinally, the abduced labels are used to update the classifier.",
88
+ "url": "http://arxiv.org/html/2308.10487v2/x1.png"
89
+ },
90
+ "2": {
91
+ "figure_path": "2308.10487v2_figure_2.png",
92
+ "caption": "Figure 2: \nIllustration of the knowledge base about conjunction, the facts abduced from the knowledge base, and the raw inputs corresponding to the target concept \u201c\ud835\ude8c\ud835\ude98\ud835\ude97\ud835\ude93\ud835\ude8c\ud835\ude98\ud835\ude97\ud835\ude93\\mathtt{conj}typewriter_conj\u201d.",
93
+ "url": "http://arxiv.org/html/2308.10487v2/x2.png"
94
+ },
95
+ "3": {
96
+ "figure_path": "2308.10487v2_figure_3.png",
97
+ "caption": "Figure 3: \nIllustration of the location signals in Example 1.\nFrom the input sequences, we can observe instance-location pairs \u27e8x,\u03b9\u27e9\ud835\udc65\ud835\udf04\\langle x,\\iota\\rangle\u27e8 italic_x , italic_\u03b9 \u27e9, while ground-truth labels y\ud835\udc66yitalic_y are unobservable.\nIntuitively, the instances of y=0\ud835\udc660y=0italic_y = 0 are more likely to occur at the 2222-th position, if candidate label probabilities are equal.",
98
+ "url": "http://arxiv.org/html/2308.10487v2/x3.png"
99
+ },
100
+ "4": {
101
+ "figure_path": "2308.10487v2_figure_4.png",
102
+ "caption": "Figure 4: \nIllustration of the knowledge base about the target concepts \u201c\ud835\ude8c\ud835\ude98\ud835\ude97\ud835\ude93\ud835\udff6\ud835\ude8c\ud835\ude98\ud835\ude97\ud835\ude93\ud835\udff6\\mathtt{conj0}typewriter_conj0\u201d and \u201c\ud835\ude8c\ud835\ude98\ud835\ude97\ud835\ude93\ud835\udff7\ud835\ude8c\ud835\ude98\ud835\ude97\ud835\ude93\ud835\udff7\\mathtt{conj1}typewriter_conj1\u201d, along with the corresponding facts abducted from the knowledge base.",
103
+ "url": "http://arxiv.org/html/2308.10487v2/x4.png"
104
+ },
105
+ "5": {
106
+ "figure_path": "2308.10487v2_figure_5.png",
107
+ "caption": "Figure 5: \nPerformance on \ud835\ude77\ud835\ude74\ud835\ude73\ud835\ude77\ud835\ude74\ud835\ude73\\mathtt{HED}typewriter_HED using different knowledge bases of numeral systems ranging from base 2 to base 10.",
108
+ "url": "http://arxiv.org/html/2308.10487v2/x5.png"
109
+ },
110
+ "6(a)": {
111
+ "figure_path": "2308.10487v2_figure_6(a).png",
112
+ "caption": "(a) \ud835\ude73\ud835\ude7d\ud835\ude75\ud835\ude73\ud835\ude7d\ud835\ude75\\mathtt{DNF}typewriter_DNF, m=3\ud835\udc5a3m=3italic_m = 3\nFigure 6: \nComparison of knowledge bases satisfying or not satisfying the rank criterion. Knowledge bases are created by randomly generating rules in disjunctive or conjunctive norm forms, with clause lengths varying from 3 to 5. The rank criterion effectively indicates the success of learning accurate classifiers.",
113
+ "url": "http://arxiv.org/html/2308.10487v2/x6.png"
114
+ },
115
+ "6(b)": {
116
+ "figure_path": "2308.10487v2_figure_6(b).png",
117
+ "caption": "(b) \ud835\ude73\ud835\ude7d\ud835\ude75\ud835\ude73\ud835\ude7d\ud835\ude75\\mathtt{DNF}typewriter_DNF, m=4\ud835\udc5a4m=4italic_m = 4\nFigure 6: \nComparison of knowledge bases satisfying or not satisfying the rank criterion. Knowledge bases are created by randomly generating rules in disjunctive or conjunctive norm forms, with clause lengths varying from 3 to 5. The rank criterion effectively indicates the success of learning accurate classifiers.",
118
+ "url": "http://arxiv.org/html/2308.10487v2/x7.png"
119
+ },
120
+ "6(c)": {
121
+ "figure_path": "2308.10487v2_figure_6(c).png",
122
+ "caption": "(c) \ud835\ude73\ud835\ude7d\ud835\ude75\ud835\ude73\ud835\ude7d\ud835\ude75\\mathtt{DNF}typewriter_DNF, m=5\ud835\udc5a5m=5italic_m = 5\nFigure 6: \nComparison of knowledge bases satisfying or not satisfying the rank criterion. Knowledge bases are created by randomly generating rules in disjunctive or conjunctive norm forms, with clause lengths varying from 3 to 5. The rank criterion effectively indicates the success of learning accurate classifiers.",
123
+ "url": "http://arxiv.org/html/2308.10487v2/x8.png"
124
+ }
125
+ },
126
+ "validation": true,
127
+ "references": [
128
+ {
129
+ "1": {
130
+ "title": "Abductive Learning with Ground Knowledge Base.",
131
+ "author": "Cai, L.-W.; Dai, W.-Z.; Huang, Y.-X.; Li, Y.-F.; Muggleton, S. H.; and Jiang, Y. 2021.",
132
+ "venue": "In IJCAI, 1815\u20131821.",
133
+ "url": null
134
+ }
135
+ },
136
+ {
137
+ "2": {
138
+ "title": "Deep learning for classical japanese literature.",
139
+ "author": "Clanuwat, T.; Bober-Irizar, M.; Kitamoto, A.; Lamb, A.; Yamamoto, K.; and Ha, D. 2018.",
140
+ "venue": "arXiv preprint arXiv:1812.01718.",
141
+ "url": null
142
+ }
143
+ },
144
+ {
145
+ "3": {
146
+ "title": "EMNIST: Extending MNIST to handwritten letters.",
147
+ "author": "Cohen, G.; Afshar, S.; Tapson, J.; and Van Schaik, A. 2017.",
148
+ "venue": "In IJCNN, 2921\u20132926.",
149
+ "url": null
150
+ }
151
+ },
152
+ {
153
+ "4": {
154
+ "title": "Tensorlog: A probabilistic database implemented using deep-learning infrastructure.",
155
+ "author": "Cohen, W.; Yang, F.; and Mazaitis, K. R. 2020.",
156
+ "venue": "Journal of Artificial Intelligence Research, 67: 285\u2013325.",
157
+ "url": null
158
+ }
159
+ },
160
+ {
161
+ "5": {
162
+ "title": "Learning from partial labels.",
163
+ "author": "Cour, T.; Sapp, B.; and Taskar, B. 2011.",
164
+ "venue": "Journal of Machine Learning Research, 12: 1501\u20131536.",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "6": {
170
+ "title": "Abductive knowledge induction from raw data.",
171
+ "author": "Dai, W.-Z.; and Muggleton, S. H. 2021.",
172
+ "venue": "In IJCAI, 1845\u20131851.",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "7": {
178
+ "title": "Bridging machine learning and logical reasoning by abductive learning.",
179
+ "author": "Dai, W.-Z.; Xu, Q.; Yu, Y.; and Zhou, Z.-H. 2019.",
180
+ "venue": "In NeurIPS, 2811\u20132822.",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "8": {
186
+ "title": "Combining logical abduction and statistical induction: Discovering written primitives with human knowledge.",
187
+ "author": "Dai, W.-Z.; and Zhou, Z.-H. 2017.",
188
+ "venue": "In AAAI, 4392\u20134398.",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "9": {
194
+ "title": "From statistical relational to neural-symbolic artificial intelligence.",
195
+ "author": "De Raedt, L.; Duman\u010di\u0107, S.; Manhaeve, R.; and Marra, G. 2021.",
196
+ "venue": "In IJCAI, 4943\u20134950.",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "10": {
202
+ "title": "Statistical relational artificial intelligence: Logic, probability, and computation.",
203
+ "author": "De Raedt, L.; Kersting, K.; Natarajan, S.; and Poole, D. 2016.",
204
+ "venue": "Synthesis lectures on artificial intelligence and machine learning, 10(2): 1\u2013189.",
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "11": {
210
+ "title": "Probabilistic (logic) programming concepts.",
211
+ "author": "De Raedt, L.; and Kimmig, A. 2015.",
212
+ "venue": "Machine Learning, 100: 5\u201347.",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "12": {
218
+ "title": "Solving the multiple instance problem with axis-parallel rectangles.",
219
+ "author": "Dietterich, T. G.; Lathrop, R. H.; and Lozano-P\u00e9rez, T. 1997.",
220
+ "venue": "Artificial intelligence, 89(1-2): 31\u201371.",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "13": {
226
+ "title": "Logic tensor networks for semantic image interpretation.",
227
+ "author": "Donadello, I.; Serafini, L.; and Garcez, A. D. 2017.",
228
+ "venue": "In IJCAI.",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "14": {
234
+ "title": "Neural Logic Machines.",
235
+ "author": "Dong, H.; Mao, J.; Lin, T.; Wang, C.; Li, L.; and Zhou, D. 2019.",
236
+ "venue": "In ICLR.",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "15": {
242
+ "title": "Learning classifiers from only positive and unlabeled data.",
243
+ "author": "Elkan, C.; and Noto, K. 2008.",
244
+ "venue": "In KDD, 213\u2013220.",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "16": {
250
+ "title": "Making sense of raw input.",
251
+ "author": "Evans, R.; Bo\u0161njak, M.; Buesing, L.; Ellis, K.; Pfau, D.; Kohli, P.; and Sergot, M. 2021.",
252
+ "venue": "Artificial Intelligence, 299: 103521.",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "17": {
258
+ "title": "Provably consistent partial-label learning.",
259
+ "author": "Feng, L.; Lv, J.; Han, B.; Xu, M.; Niu, G.; Geng, X.; An, B.; and Sugiyama, M. 2020.",
260
+ "venue": "In NeurIPS, 10948\u201310960.",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "18": {
266
+ "title": "Markov chains: from theory to implementation and experimentation.",
267
+ "author": "Gagniuc, P. A. 2017.",
268
+ "venue": "John Wiley & Sons.",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "19": {
274
+ "title": "Neural-symbolic learning systems: foundations and applications.",
275
+ "author": "Garcez, A. S. d.; Broda, K.; and Gabbay, D. M. 2002.",
276
+ "venue": "Springer Science & Business Media.",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "20": {
282
+ "title": "Abductive reasoning in neural-symbolic systems.",
283
+ "author": "Garcez, A. S. d.; Gabbay, D. M.; Ray, O.; and Woods, J. 2007.",
284
+ "venue": "Topoi, 26: 37\u201349.",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "21": {
290
+ "title": "Differentiable programs with neural libraries.",
291
+ "author": "Gaunt, A. L.; Brockschmidt, M.; Kushman, N.; and Tarlow, D. 2017.",
292
+ "venue": "In ICML, 1213\u20131222.",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "22": {
298
+ "title": "Introduction to statistical relational learning.",
299
+ "author": "Getoor, L.; and Taskar, B. 2007.",
300
+ "venue": "MIT press.",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "23": {
306
+ "title": "Stochastic Optimization of Sorting Networks via Continuous Relaxations.",
307
+ "author": "Grover, A.; Wang, E.; Zweig, A.; and Ermon, S. 2018.",
308
+ "venue": "In ICLR.",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "24": {
314
+ "title": "Deep residual learning for image recognition.",
315
+ "author": "He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016.",
316
+ "venue": "In CVPR, 770\u2013778.",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "25": {
322
+ "title": "Neuro-Symbolic Artificial Intelligence - The State of the Art.",
323
+ "author": "Hitzler, P.; and Sarker, M. K. 2022.",
324
+ "venue": "Frontiers in Artificial Intelligence and Applications. IOS Press.",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "26": {
330
+ "title": "Fast abductive learning by similarity-based consistency optimization.",
331
+ "author": "Huang, Y.-X.; Dai, W.-Z.; Cai, L.-W.; Muggleton, S. H.; and Jiang, Y. 2021.",
332
+ "venue": "In NeurIPS, 26574\u201326584.",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "27": {
338
+ "title": "Enabling Knowledge Refinement upon New Concepts in Abductive Learning.",
339
+ "author": "Huang, Y.-X.; Dai, W.-Z.; Jiang, Y.; and Zhou, Z.-H. 2023a.",
340
+ "venue": "In AAAI, 7928\u20137935.",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "28": {
346
+ "title": "Semi-supervised abductive learning and its application to theft judicial sentencing.",
347
+ "author": "Huang, Y.-X.; Dai, W.-Z.; Yang, J.; Cai, L.-W.; Cheng, S.; Huang, R.; Li, Y.-F.; and Zhou, Z.-H. 2020.",
348
+ "venue": "In ICDM, 1070\u20131075.",
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "29": {
354
+ "title": "Enabling abductive learning to exploit knowledge graph.",
355
+ "author": "Huang, Y.-X.; Sun, Z.; Li, G.; Tian, X.; Dai, W.-Z.; Hu, W.; Jiang, Y.; and Zhou, Z.-H. 2023b.",
356
+ "venue": "In IJCAI, 3839\u20133847.",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "30": {
362
+ "title": "A database for handwritten text recognition research.",
363
+ "author": "Hull, J. J. 1994.",
364
+ "venue": "IEEE Transactions on pattern analysis and machine intelligence, 16(5): 550\u2013554.",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "31": {
370
+ "title": "Learning with multiple labels.",
371
+ "author": "Jin, R.; and Ghahramani, Z. 2002.",
372
+ "venue": "In NeurIPS, 897\u2013904.",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "32": {
378
+ "title": "Abductive logic programming.",
379
+ "author": "Kakas, A. C.; Kowalski, R. A.; and Toni, F. 1992.",
380
+ "venue": "Journal of logic and computation, 2(6): 719\u2013770.",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "33": {
386
+ "title": "Adam: A method for stochastic optimization.",
387
+ "author": "Kingma, D. P.; and Ba, J. 2015.",
388
+ "venue": "In ICLR.",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "34": {
394
+ "title": "Gradient-based learning applied to document recognition.",
395
+ "author": "LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998.",
396
+ "venue": "Proceedings of the IEEE, 86(11): 2278\u20132324.",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "35": {
402
+ "title": "Closed loop neural-symbolic learning via integrating neural perception, grammar parsing, and symbolic reasoning.",
403
+ "author": "Li, Q.; Huang, S.; Hong, Y.; Chen, Y.; Wu, Y. N.; and Zhu, S.-C. 2020.",
404
+ "venue": "In ICML, 5884\u20135894.",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "36": {
410
+ "title": "Softened Symbol Grounding for Neuro-symbolic Systems.",
411
+ "author": "Li, Z.; Yao, Y.; Chen, T.; Xu, J.; Cao, C.; Ma, X.; Jian, L.; et al. 2023.",
412
+ "venue": "In ICLR.",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "37": {
418
+ "title": "Statistical Analysis With Missing Data.",
419
+ "author": "Little, R.; and Rubin, D. 1987.",
420
+ "venue": "Wiley Series in Probability and Statistics. Wiley.",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "38": {
426
+ "title": "Out-of-Distribution Generalization by Neural-Symbolic Joint Training.",
427
+ "author": "Liu, A.; Xu, H.; Van den Broeck, G.; and Liang, Y. 2023.",
428
+ "venue": "In AAAI, 12252\u201312259.",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "39": {
434
+ "title": "Learnability of the superset label learning problem.",
435
+ "author": "Liu, L.; and Dietterich, T. 2014.",
436
+ "venue": "In ICML, 1629\u20131637.",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "40": {
442
+ "title": "Deepproblog: Neural probabilistic logic programming.",
443
+ "author": "Manhaeve, R.; Dumancic, S.; Kimmig, A.; Demeester, T.; and De Raedt, L. 2018.",
444
+ "venue": "In NeurIPS, 3753\u20133763.",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "41": {
450
+ "title": "Neuro Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal.",
451
+ "author": "Marconato, E.; Bontempo, G.; Ficarra, E.; Calderara, S.; Passerini, A.; and Teso, S. 2023a.",
452
+ "venue": "In ICML, 23915\u201323936.",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "42": {
458
+ "title": "Not All Neuro-Symbolic Concepts Are Created Equal: Analysis and Mitigation of Reasoning Shortcuts.",
459
+ "author": "Marconato, E.; Teso, S.; Vergari, A.; and Passerini, A. 2023b.",
460
+ "venue": "In NeurIPS.",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "43": {
466
+ "title": "Hypothesizing an algorithm from one example: the role of specificity.",
467
+ "author": "Muggleton, S. H. 2023.",
468
+ "venue": "Philosophical Transactions of the Royal Society A, 381(2251): 20220046.",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "44": {
474
+ "title": "Learning with noisy labels.",
475
+ "author": "Natarajan, N.; Dhillon, I. S.; Ravikumar, P. K.; and Tewari, A. 2013.",
476
+ "venue": "In NeurIPS, 1196\u20131204.",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "45": {
482
+ "title": "Pytorch: An imperative style, high-performance deep learning library.",
483
+ "author": "Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019.",
484
+ "venue": "In NeurIPS, 8024\u20138035.",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "46": {
490
+ "title": "Making deep neural networks robust to label noise: A loss correction approach.",
491
+ "author": "Patrini, G.; Rozza, A.; Krishna Menon, A.; Nock, R.; and Qu, L. 2017.",
492
+ "venue": "In CVPR, 1944\u20131952.",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "47": {
498
+ "title": "Abduction and induction.",
499
+ "author": "Peirce, C. S. 1955.",
500
+ "venue": "Philosophical Writings of Pierce, 150\u201356.",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "48": {
506
+ "title": "Unifying logic and probability.",
507
+ "author": "Russell, S. 2015.",
508
+ "venue": "Communications of the ACM, 58(7): 88\u201397.",
509
+ "url": null
510
+ }
511
+ },
512
+ {
513
+ "49": {
514
+ "title": "A simple neural network module for relational reasoning.",
515
+ "author": "Santoro, A.; Raposo, D.; Barrett, D. G.; Malinowski, M.; Pascanu, R.; Battaglia, P.; and Lillicrap, T. 2017.",
516
+ "venue": "In NeurIPS, 4967\u20134976.",
517
+ "url": null
518
+ }
519
+ },
520
+ {
521
+ "50": {
522
+ "title": "Human problem solving: The state of the theory in 1970.",
523
+ "author": "Simon, H. A.; and Newell, A. 1971.",
524
+ "venue": "American psychologist, 26(2): 145.",
525
+ "url": null
526
+ }
527
+ },
528
+ {
529
+ "51": {
530
+ "title": "The hasyv2 dataset.",
531
+ "author": "Thoma, M. 2017.",
532
+ "venue": "arXiv preprint arXiv:1701.08380.",
533
+ "url": null
534
+ }
535
+ },
536
+ {
537
+ "52": {
538
+ "title": "Knowledge-based artificial neural networks.",
539
+ "author": "Towell, G. G.; and Shavlik, J. W. 1994.",
540
+ "venue": "Artificial intelligence, 70(1-2): 119\u2013165.",
541
+ "url": null
542
+ }
543
+ },
544
+ {
545
+ "53": {
546
+ "title": "Neural arithmetic logic units.",
547
+ "author": "Trask, A.; Hill, F.; Reed, S. E.; Rae, J.; Dyer, C.; and Blunsom, P. 2018.",
548
+ "venue": "In NeurIPS, 8046\u2013\u20138055.",
549
+ "url": null
550
+ }
551
+ },
552
+ {
553
+ "54": {
554
+ "title": "Neural-symbolic integration: A compositional perspective.",
555
+ "author": "Tsamoura, E.; Hospedales, T.; and Michael, L. 2021.",
556
+ "venue": "In AAAI, 5051\u20135060.",
557
+ "url": null
558
+ }
559
+ },
560
+ {
561
+ "55": {
562
+ "title": "Tac-valuer: Knowledge-based stroke evaluation in table tennis.",
563
+ "author": "Wang, J.; Deng, D.; Xie, X.; Shu, X.; Huang, Y.-X.; Cai, L.-W.; Zhang, H.; Zhang, M.-L.; Zhou, Z.-H.; and Wu, Y. 2021.",
564
+ "venue": "In KDD, 3688\u20133696.",
565
+ "url": null
566
+ }
567
+ },
568
+ {
569
+ "56": {
570
+ "title": "On Learning Latent Models with Multi-Instance Weak Supervision.",
571
+ "author": "Wang, K.; Tsamoura, E.; and Roth, D. 2023.",
572
+ "venue": "In NeurIPS.",
573
+ "url": null
574
+ }
575
+ },
576
+ {
577
+ "57": {
578
+ "title": "Satnet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver.",
579
+ "author": "Wang, P.-W.; Donti, P.; Wilder, B.; and Kolter, Z. 2019.",
580
+ "venue": "In ICML, 6545\u20136554.",
581
+ "url": null
582
+ }
583
+ },
584
+ {
585
+ "58": {
586
+ "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms.",
587
+ "author": "Xiao, H.; Rasul, K.; and Vollgraf, R. 2017.",
588
+ "venue": "arXiv preprint arXiv:1708.07747.",
589
+ "url": null
590
+ }
591
+ },
592
+ {
593
+ "59": {
594
+ "title": "A semantic loss function for deep learning with symbolic knowledge.",
595
+ "author": "Xu, J.; Zhang, Z.; Friedman, T.; Liang, Y.; and Broeck, G. 2018.",
596
+ "venue": "In ICML, 5502\u20135511.",
597
+ "url": null
598
+ }
599
+ },
600
+ {
601
+ "60": {
602
+ "title": "NeurASP: embracing neural networks into answer set programming.",
603
+ "author": "Yang, Z.; Ishay, A.; and Lee, J. 2021.",
604
+ "venue": "In IJCAI, 1755\u20131762.",
605
+ "url": null
606
+ }
607
+ },
608
+ {
609
+ "61": {
610
+ "title": "Learning with biased complementary labels.",
611
+ "author": "Yu, X.; Liu, T.; Gong, M.; and Tao, D. 2018.",
612
+ "venue": "In ECCV, 68\u201383.",
613
+ "url": null
614
+ }
615
+ },
616
+ {
617
+ "62": {
618
+ "title": "Learning from aggregate observations.",
619
+ "author": "Zhang, Y.; Charoenphakdee, N.; Wu, Z.; and Sugiyama, M. 2020.",
620
+ "venue": "In NeurIPS, 7993\u20138005.",
621
+ "url": null
622
+ }
623
+ },
624
+ {
625
+ "63": {
626
+ "title": "Abductive learning: towards bridging machine learning and logical reasoning.",
627
+ "author": "Zhou, Z. 2019.",
628
+ "venue": "Science China Information Sciences, 62(7): 76101:1\u201376101:3.",
629
+ "url": null
630
+ }
631
+ },
632
+ {
633
+ "64": {
634
+ "title": "A brief introduction to weakly supervised learning.",
635
+ "author": "Zhou, Z.-H. 2018.",
636
+ "venue": "National science review, 5(1): 44\u201353.",
637
+ "url": null
638
+ }
639
+ },
640
+ {
641
+ "65": {
642
+ "title": "Abductive Learning.",
643
+ "author": "Zhou, Z.-H.; and Huang, Y.-X. 2022.",
644
+ "venue": "In Neuro-Symbolic Artificial Intelligence: The State of the Art, 353\u2013369. IOS Press.",
645
+ "url": null
646
+ }
647
+ },
648
+ {
649
+ "66": {
650
+ "title": "Multi-instance learning by treating instances as non-iid samples.",
651
+ "author": "Zhou, Z.-H.; Sun, Y.-Y.; and Li, Y.-F. 2009.",
652
+ "venue": "In ICML, 1249\u20131256.",
653
+ "url": null
654
+ }
655
+ }
656
+ ],
657
+ "url": "http://arxiv.org/html/2308.10487v2"
658
+ }
20240123/2308.12890v3.json ADDED
@@ -0,0 +1,562 @@
1
+ {
2
+ "title": "Large Language Models Vote: Prompting for Rare Disease Identification",
3
+ "abstract": "The emergence of generative Large Language Models (LLMs) emphasizes the need for\naccurate and efficient prompting approaches. The use of LLMs in Few-Shot Learning (FSL) settings,\nwhere data is scarce, has become a standard practice. FSL has also become popular in many\nArtificial Intelligence (AI) subdomains, including AI for health. Rare diseases affect a small\nfraction of the population, and due to limited data availability, their identification from\nclinical notes inherently requires FSL techniques. Manual data collection and annotation is both\nexpensive and time-consuming. In this paper, we propose Models-Vote Prompting (MVP), an ensemble\nprompting approach for improving the performance of LLM queries in FSL settings. MVP works by\nprompting several LLMs to perform the same task and then conducting a majority vote on the\nresulting outputs. The proposed method achieves improved results to any one model in the ensemble\non one-shot rare disease identification and classification tasks. MVP performance rivals that of\nwell-known Self-Consistency (SC) prompting. In addition, we introduce a novel rare disease dataset\nfor FSL, reproducible to those who signed the MIMIC-IV Data Use Agreement (DUA). Furthermore, we\nalso assess the feasibility of using JSON-augmented prompts for automating generative LLM\nevaluation.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large Language Models (LLMs) are language models that typically consist of hundreds of millions to\nbillions of parameters and are characterized by unsupervised pre-training and supervised\nfine-tuning [1 ###reference_x1###]. LLMs have proven useful in many tasks, including representation\nlearning [2 ###reference_x2###, 3 ###reference_x3###], machine translation [4 ###reference_x4###, 5 ###reference_x5###], and text generation [6 ###reference_x6###, 7 ###reference_x7###]. Recently, generative LLMs have taken Computer Science (CS) and Artificial Intelligence (AI)\nresearch communities by storm, with models such as LLaMA [8 ###reference_x8###] and Stable\nDiffusion [9 ###reference_x9###] seeing significant adoption and with ChatGPT [10 ###reference_x10###]\nreaching over 100 million active users within two months since its release [11 ###reference_x11###].\nPrompting is a novel paradigm, where with the help of a textual prompt, downstream tasks can be\nmodeled similar to those solved during pre-training [12 ###reference_x12###]. This can be considered an\nalternative to the conventional unsupervised pre-training and supervised fine-tuning paradigms. The\nact of prompting is closely tied to the concept of in-context learning, where a language model is\ngiven a prompt composed of training examples and a test instance as the input. The model then\ngenerates the corresponding output to the test instance without any change to its\nparameters [13 ###reference_x13###]. The output of the model coherently follows the language of the\nprompt, by understanding the meaning present in the examples [14 ###reference_x14###]. Prompting led to\nthe emergence of prompt engineering, a discipline that aims to develop effective prompting methods\nfor efficiently solving tasks [15 ###reference_x15###]. A variety of prompting methods have already\nbeen developed, including Instruction Prompting (IP) in the manner of\nInstructGPT [16 ###reference_x16###], Chain-of-Thought (CoT) [17 ###reference_x17###], and Self-Consistency\n(SC) [18 ###reference_x18###] prompting. Yet, there has been a lack of prompting approaches that combine the\nknowledge of multiple LLMs.\nDeep Learning (DL) often requires large amounts of data that can be expensive and occasionally\ndifficult to obtain. Few-Shot Learning (FSL) is a subfield of AI that attempts to enable machine\nlearning even in cases with a small number sample (also known as shots). FSL has recently shown\npromising performance in various tasks, from image segmentation [19 ###reference_x19###] to speaker\nrecognition [20 ###reference_x20###] and from Named-Entity Recognition (NER) [21 ###reference_x21###] to Question-Answering\n(QA) [22 ###reference_x22###]. Furthermore, prompt-based approaches perform well in FSL\nsettings [23 ###reference_x23###].\nRare disease identification serves as a natural application for utilizing FSL techniques. A rare\ndisease is defined as a disease that affects no more than 200,000 people in the population (US\ndefinition) [24 ###reference_x24###] or no more than one in two thousand people (EU\ndefinition) [25 ###reference_x25###]. Despite slight differences in these definitions, we can consider a rare\ndisease to be a disease that affects one in several thousand people. As a result of this rarity, it\nis hard to obtain extensive and comprehensive amounts of data for these diseases, which prompts the\nuse of FSL approaches. 
Unlike structured Electronic Health Records (EHRs) that primarily capture\nstandardized and limited information, Clinical Notes (CNs) are detailed narratives of patient\nconditions, symptoms, treatments, and contextual information. The complexity and variety of language\nused in CNs reflect the intricate nuances of medical cases, including subtle symptoms that might not\nbe documented in structured formats. NLP algorithms can effectively sift through these notes,\nextracting valuable insights and patterns that might lead to the early detection and accurate\ndiagnosis of rare diseases. This process aids medical professionals in uncovering hidden\ncorrelations and symptoms, reducing the diagnostic odyssey that often characterizes rare disease\ncases. On the other hand, rare disease datasets are also scarce, especially for Natural Language\nProcessing (NLP) purposes, and relying only on the structured data can lead to erroneous conclusions\nor missed patients when recruiting for trials [26 ###reference_x26###, 27 ###reference_x27###, 28 ###reference_x28###].\nTo the best of our knowledge, there has been one work that attempted to build a rare disease dataset\nvia weak supervision [29 ###reference_x29###], but the annotations have not been fully verified by humans with\nbiomedical experience, and the number of rare disease cases in existing datasets such as\nMIMIC-III [30 ###reference_x30###, 31 ###reference_x31###] are not sufficient for 4-way 128 or 256-shot\nexperiments.\nIn this paper, we make the following contributions:\nWe propose Models-Vote Prompting (MVP), an ensemble prompting method that uses multiple\nLLMs for majority voting to improve the success rate of task completion. We evaluate the\nproposed strategy on two NLP tasks (with four different context sizes) and measure its\nperformance via metrics: accuracy, precision, recall, and F-score. We also compare MVP to\nSC prompting. Furthermore, we conduct statistical hypothesis testing to verify that the\nmean difference in model outputs between MVP and second-best models is non-zero.\nWe introduce a novel FSL dataset for rare disease identification. The rare disease dataset\nwas obtained by processing a recently released MIMIC-IV [32 ###reference_x32###, 33 ###reference_x33###]\ndatabase. For the tasks performed in the study, we conduct a thorough, two-round\nannotation process involving annotation guidelines and two annotators with biomedical\nexperience, resulting in a high Inter-Annotator Agreement (IAA) score. The dataset,\nexcluding annotations, can be fully reproduced by following the steps in the codebase we\nreleased for the study111https://github.com/PittNAIL/llms-vote ###reference_###..\nWe emphasize the importance of incorporating parsable formats, such as JSON, in LLM\nprompts, for facilitating LLM evaluation."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "Overview of Prompting Approaches",
15
+ "text": "In this section, we provide a brief overview of the popular prompting approaches: IP, CoT, and SC.\nWe also formally define our proposed approach: MVP. In this direction, we use the formal approach\ninspired by Phuong et al. [34 ###reference_x34###] We start by introducing the formal notation and\ndefining the existing methods. A comparison with MVP is made in the following section."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Notation",
21
+ "text": "Let be a sequence of sub-word\ntokens. The goal is to learn an estimate of the distribution from independent\nand identically distributed (i.i.d.) data. We denote the parameters of the distribution as\n and the output sequence of tokens as . Note that, unlike a typical\nprogramming language, we use 1-based indexing and inclusive ranges (i.e., range \nincludes both and ). Then the following holds:\nwhich can also be formulated as"
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Overview",
27
+ "text": "IP is the simplest of all the prompting approaches described in the paper, where the input text\nincludes instructions (usually, a few). This can be formally expressed as follows:\nCoT makes use of a series of reasoning steps to get\nfrom to the output . Hence, it can be formulated as:\nSC prompting is an ensemble approach that samples LLM decoder to generate some number of i.i.d\nchains of thought. We denote this number as and let be the chains of thought. We get a set of responses , where . After generating , we\nget:\n###figure_1###"
28
+ },
29
+ {
30
+ "section_id": "3",
31
+ "parent_section_id": null,
32
+ "section_name": "Models-Vote Prompting",
33
+ "text": "Notice that all of the methods described in the previous section make use of a single model, yet the\navailability of LLMs allows for the use of multiple, further enriching model pool, and diversity of\nthe training data. MVP is an ensemble approach that uses a set of language models to generate\na response. Formally, this can be expressed as follows:\nWe then perform majority voting and select the most frequent response as follows:\nThe proposed approach improves upon existing techniques in several ways.\nFirst, MVP allows for considering several models trained on different datasets, which is especially\nuseful for complex problems or when a single model does not have sufficient knowledge to generate\nproper responses. As opposed to existing strategies that query the same model multiple\ntimes [35 ###reference_x35###], we adopt an approach similar to that seen in the Random Forest, where the\nfinal prediction depends on the majority vote of multiple learners [36 ###reference_x36###]. Despite the\nindividual models not being weak learners, utilizing a pool of models in the proposed manner may\nalso balance bias and variance and converge to an average obtained from multiple datasets. In other\nwords, a single LLM generates text after pre-training on some dataset , while MVP uses the\nknowledge from the collection of datasets . While the datasets may\noverlap, i.e., can be true, the increasing availability of\ndomain-specific datasets has facilitated the development of domain-aware LLMs. Besides, the\navailability of general-purpose conversational datasets has also been\ngrowing [37 ###reference_x37###]. Thus, it is not too difficult to find a set of models\nwhere the overlap is not large [38 ###reference_x38###, 39 ###reference_x39###]. Therefore, while the\nimportance of strategies utilizing a single model cannot be understated, it is vital to consider\napproaches for improving performance that make use of multiple models.\nSecond, MVP facilitates inference on low-powered hardware (e.g., with limited GPU memory). In cases\nwhere using very large (over 50 billion parameters) LLMs is infeasible due to cost or hardware\nconstraints, one can combine results from several smaller models and observe performance improvement\nagainst any individual model.\nFinally, since MVP is an aggregator of results, it is flexible, making the integration of\npre-existing prompting methods straightforward."
34
+ },
35
+ {
36
+ "section_id": "4",
37
+ "parent_section_id": null,
38
+ "section_name": "Dataset",
39
+ "text": "In this section, we introduce a new rare disease dataset. First, we would like to describe the\nmethods for obtaining the rare disease dataset, including generating the subset and the annotation\nprocess. We used the recently released MIMIC-IV database, which is over five times larger than its\npredecessor MIMIC-III and contains 331,794 de-identified discharge summaries, as the base for the\ndataset. Figure 1 ###reference_### shows the overview pipeline."
40
+ },
41
+ {
42
+ "section_id": "4.1",
43
+ "parent_section_id": "4",
44
+ "section_name": "Term Matching Augmented with Weak Supervision Rules",
45
+ "text": "We used SPARQL domain-specific language (DSL) for extracting rare disease terms from the Orphanet\nRare Disease Ontology (ORDO) version\n4.2222https://www.orphadata.com/ordo/ ###reference_www.orphadata.com/ordo/###\n[40 ###reference_x40###]. SPARQL queries were executed using the Python library\nrdflib333https://rdflib.readthedocs.io/en/stable/ ###reference_###.\nAfter obtaining a list of rare diseases, we performed simple term matching on the MIMIC-IV database.\nSince the number of lookups was on the order of billions and the documents were not of small size,\nwe have not used Python. The rationale was two-fold: first, due to Global Interpreter Lock (GIL),\nPython does not have the ability to perform proper multi-threading and second, it is a high-level\ninterpreted language, which would not satisfy the performance requirements needed for efficiently\ncompleting the task at hand. Instead, we used the Rust444https://www.rust-lang.org/ ###reference_www.rust-lang.org/### programming language and the\nrayon555https://docs.rs/rayon/latest/rayon/ ###reference_###\nlibrary for performing term-matching in parallel. This allowed us to create an inverted index of\nterms mapped to the note identifiers.\nAfter obtaining an inverted index, we performed further filtering by applying weak supervision rules\nproven effective in the recent work on rare diseases [29 ###reference_x29###], by greatly improving the precision\nwhile retaining the level of recall of a string-based matching method. Specifically, we removed rare\ndiseases whose term length was less than four (i.e., character count rule to filter out ambiguous\nabbreviations) or those whose occurrences were more than 0.5% (i.e., \u201cprevalence\u201d rule to filter\nout common disease mentions which are not likely to be of a rare disease). Finally, we obtained a\nsubset from which we selected the four most frequent rare diseases, having a number of cases\nsufficient for us to perform the few-shot evaluation on them. These four rare diseases are\nBabesiosis, Giant Cell Arteritis, Graft Versus Host Disease, and Cryptogenic Organizing Pneumonia."
46
+ },
47
+ {
48
+ "section_id": "4.2",
49
+ "parent_section_id": "4",
50
+ "section_name": "Annotations",
51
+ "text": "The annotation process consisted of two rounds: an initial session to ensure a high IAA and a second\nsession for the final annotations. Two annotators with specialized knowledge in the biomedical field\nannotated the two distinct batches of CNs with rare disease occurrences. The initial round had 64\nCNs, while the second round had 256 CNs. If a patient whose CN was under consideration had a rare\ndisease, the patient would be labeled \u201c1\u201d and \u201c0\u201d otherwise. Cases where a patient\u2019s CN\ndemonstrated a family history of the disease, or suffered from that disease in the past (but not in\nthe present), were labeled as \u201c0\u201d666To note that this is distinct from Dong et\nal. [29 ###reference_x29###] where past rare diseases were also annotated as positive; we aim to\nidentify present rare diseases.. Our annotation guidelines outlined the annotation\nprocess, accompanied by examples of positive and negative matches.\nWe used Cohen\u2019s kappa [41 ###reference_x41###] for computing IAA:\nwhere is the probability of agreement on the label assigned to any sample, and is\nthe hypothetical probability of chance agreement.\nThe Cohen\u2019s kappa score for the initial IAA assessment on 64 document annotations (the first round\nof annotations) was 0.839. Such a high value suggested a near-perfect agreement after which, we\nmoved to the second round.\nIn the second round, 256 documents were annotated, which we used for evaluating the proposed\nprompting method.\nFinally, we ended up with 256 annotation documents providing information about which rare disease\noccurred in a context and whether the person (whose discharge summary it is) suffers from the rare\ndisease present in the context. The final dataset statistics are shown in Table 1 ###reference_###.\nThe dataset will be available to researchers who have signed MIMIC-IV Data Use Agreement (DUA)."
52
+ },
53
+ {
54
+ "section_id": "5",
55
+ "parent_section_id": null,
56
+ "section_name": "Experiments",
57
+ "text": "###figure_2### The rare disease dataset we built was used for FSL experiments. For a more holistic evaluation, the\noriginal 256 annotated documents were chunked into four subsets, containing CN substrings with 32,\n64, 128, and 256 words (context size). It should be noted that every context window contained the\nrare disease mention. The evaluation was performed using the following openly available models:\nLlama 2 13B [42 ###reference_x42###], MedAlpaca 13B [43 ###reference_x43###],\nStable Platypus 2 13B [44 ###reference_x44###], and Vicuna 13B [45 ###reference_x45###]. We selected these LLMs\nas they were some of the highest scoring models on the\nOpen LLM Leaderboard [46 ###reference_x46###]. MedAlpaca specifically was selected due to its\nfine-tuning on medical question-answering. To compare MVP with SC prompting, Llama 2 with SC\nprompting was also evaluated. For all models, the maximum number of tokens was set to 1,024."
58
+ },
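A hedged sketch of the generation setup: the Hugging Face checkpoint IDs below are assumptions (the paper names the models but not the exact checkpoints); the 1,024-token limit comes from the text.

```python
from transformers import pipeline

MODEL_IDS = [
    "meta-llama/Llama-2-13b-chat-hf",     # Llama 2 13B (assumed checkpoint)
    "medalpaca/medalpaca-13b",            # MedAlpaca 13B (assumed checkpoint)
    "garage-bAInd/Stable-Platypus2-13B",  # Stable Platypus 2 13B
    "lmsys/vicuna-13b-v1.5",              # Vicuna 13B (assumed checkpoint)
]

def generate_all(prompt: str) -> list[str]:
    """Run one prompt through all four models; in practice you would load
    each 13B checkpoint once and batch prompts per model."""
    outputs = []
    for model_id in MODEL_IDS:
        gen = pipeline("text-generation", model=model_id, device_map="auto")
        out = gen(prompt, max_new_tokens=1024, do_sample=False)
        outputs.append(out[0]["generated_text"])
    return outputs
```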
59
+ {
60
+ "section_id": "5.1",
61
+ "parent_section_id": "5",
62
+ "section_name": "Tasks",
63
+ "text": "As discussed before, we considered two tasks: rare disease identification and rare disease\nclassification.\nRare disease identification, while discussed separately from classification, can also be considered\na binary classification task. This task determines whether the model can infer that the person in\nquestion has a particular disease. In this case, majority voting consisted of computing the number\nof 0 and 1 votes per prompt. Since our experiments considered 4 models in total, if the sum was less\nthan 2, we counted it as no (0). Otherwise, we counted it as yes (1).\nRare disease classification assesses the model\u2019s ability to correctly identify rare diseases in the\ngiven context. For this task, we had five classes: the original four rare diseases and the class\n\u201cother\u201d if the model prediction was not one of the four diseases. The disease was considered\ncorrectly classified if it received the majority of votes. In situations where multiple labels were\npredicted (i.e., had majority votes), as long as the correctly identified one was among them, we\nstill counted it as a correct prediction. The rationale is that having the same number of votes\nmakes a sample stand out and in practice would require careful examination.\nIn total, 256 CNs have been considered and for each of the tasks, we provided 32, 64, 128, and 256\nword context windows for evaluation."
64
+ },
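A sketch of the vote aggregation described above: for identification, the four binary votes are summed and ties at 2 count as "yes"; for classification, a prediction is correct if the true label is among the labels tied for the most votes.

```python
from collections import Counter

def identify(votes: list[int]) -> int:
    """votes: one 0/1 vote per model (four models in our experiments)."""
    return 1 if sum(votes) >= 2 else 0

def classify_correct(votes: list[str], true_label: str) -> bool:
    """Correct if the true label is among the top-voted labels (ties count)."""
    counts = Counter(votes)
    top = max(counts.values())
    majority_labels = {label for label, c in counts.items() if c == top}
    return true_label in majority_labels
```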
65
+ {
66
+ "section_id": "5.2",
67
+ "parent_section_id": "5",
68
+ "section_name": "Prompt Engineering",
69
+ "text": "Proper calibration of prompts has demonstrated improvement in model\nperformance [47 ###reference_x47###]. Similarly, using the same prompt format as in the pre-training\ncan improve model performance, and such prompt engineering approaches have gotten\npopular [48 ###reference_x48###]. We followed the paradigm and designed prompts on a per-model basis.\nFigure 2 ###reference_### illustrates the method.\nWe used CoT in the prompt design. For MedAlpaca, the template included a task description as a\ncontext, a question-and-answer example for instruction prompting, and the actual question in the\nform of a CN context. Llama 2 prompt followed the same idea but used tags specific to its\npre-training. Finally, for both Vicuna and Stable Platypus 2, we utilized an approach similar to\nMedAlpaca, as their pre-training prompts were also alike. Note that $EXPLANATION$,\n$JSON$, and $TASK_DESCRIPTION$ are all placeholders to be replaced by actual\ntext. For $EXPLANATION$, $JSON$, text is shown in\nFigure 2 ###reference_###. $TASK_DESCRIPTION$ describes a task and lists all\ndiseases to be identified, but has been omitted for brevity."
70
+ },
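A simplified sketch of the per-model prompt assembly: the template strings are illustrative stand-ins for the actual Figure 2 templates (only the Llama 2 [INST]/<<SYS>> tags reflect that model's documented chat format), and the $NOTE$ placeholder is a hypothetical addition for the CN context.

```python
TEMPLATES = {
    # hypothetical instruction-style template (MedAlpaca/Vicuna/Stable Platypus 2)
    "medalpaca": ("Context: $TASK_DESCRIPTION$\nQuestion: $EXPLANATION$\n"
                  "Answer: $JSON$\nQuestion: $NOTE$\nAnswer:"),
    # Llama 2 chat tags from its pre-training format
    "llama2": ("<s>[INST] <<SYS>>\n$TASK_DESCRIPTION$\n<</SYS>>\n"
               "$EXPLANATION$ $JSON$\n$NOTE$ [/INST]"),
}

def build_prompt(model: str, task_description: str, explanation: str,
                 json_example: str, note: str) -> str:
    prompt = TEMPLATES[model]
    for key, value in [("$TASK_DESCRIPTION$", task_description),
                       ("$EXPLANATION$", explanation),
                       ("$JSON$", json_example),
                       ("$NOTE$", note)]:
        prompt = prompt.replace(key, value)
    return prompt
```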
71
+ {
72
+ "section_id": "5.3",
73
+ "parent_section_id": "5",
74
+ "section_name": "JSON-Augmented Prompts to Facilitate Model Evaluation",
75
+ "text": "Automated evaluation of generative LLMs can be challenging, as the output is mainly human language.\nA typical solution is recruiting annotators, which can be expensive and time-consuming. Recently,\nstudies have shown that using formats, such as XML, can work well for model\nevaluation [49 ###reference_x49###].\nAs shown in Figure 2 ###reference_###, we used JSON format to represent a part of the\ninput prompt. Since the instruction output contained a parsable JSON string, generative LLMs\nreplicated the behavior, which reduced the need for human annotation and allowed for automatic\nevaluation.\nWe also note that using JSON for prompt engineering is flexible and not limited to rare disease\nidentification. Furthermore, we can use JSON to model nested responses and graph-like structures as\nwe can describe graphs using adjacency lists."
76
+ },
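A sketch of the automatic-evaluation step, assuming a flat JSON object in the response: pull the first JSON object out of the generated text and fall back to manual annotation when parsing fails. The regex and the None-means-human-review convention are assumptions, not the exact evaluation code.

```python
import json
import re

def parse_response(generated: str) -> dict | None:
    """Extract the first flat JSON object from the model output.
    Returns None when nothing parses, flagging the response for
    human annotation."""
    match = re.search(r"\{.*?\}", generated, flags=re.DOTALL)
    if match is None:
        return None                 # no JSON at all -> human annotation
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None                 # malformed JSON -> human annotation
```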
77
+ {
78
+ "section_id": "5.4",
79
+ "parent_section_id": "5",
80
+ "section_name": "Metrics",
81
+ "text": "For model evaluation, we used accuracy, precision, recall, and F-score. We also used paired t-tests\nbetween the best and second-best models to verify that the mean difference between two sets of\nobservations is non-zero (i.e., there is a difference in model behavior). Note that t-tests were\nperformed directly on model outputs, and the threshold for statistical significance was 0.05. The\nnull hypothesis states that model prediction scores are not statistically significantly different,\nand an alternative hypothesis states the opposite."
82
+ },
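A minimal sketch of the significance test: a paired t-test on two models' per-sample predictions with the 0.05 cutoff stated above, using SciPy's `ttest_rel` (which implements the paired test).

```python
from scipy.stats import ttest_rel

def significantly_different(outputs_a: list[int], outputs_b: list[int],
                            alpha: float = 0.05) -> bool:
    """Reject the null hypothesis (zero mean difference) at level alpha."""
    result = ttest_rel(outputs_a, outputs_b)
    return result.pvalue < alpha
```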
83
+ {
84
+ "section_id": "6",
85
+ "parent_section_id": null,
86
+ "section_name": "Results",
87
+ "text": "###table_1### Table 2 ###reference_### shows the experimental results. Note that APRF is a 4-tuple that stands for\nAccuracy, Precision, Recall, and F-score, respectively. Moving forward, we will use the APRF acronym\nto discuss performance. The best scores for a given context will be highlighted as bold and\nunderlined."
88
+ },
89
+ {
90
+ "section_id": "6.1",
91
+ "parent_section_id": "6",
92
+ "section_name": "Rare Disease Identification",
93
+ "text": "For 32-word context windows, MVP performed the best, with the APRF of (0.66, 0.66, 0.65, 0.65).\nLlama 2 came next with the APRF of (0.63, 0.68, 0.61, 0.59). Interestingly, Llama 2 (SC Prompting)\noutperformed MVP in precision but underperformed in all other metrics. Stable Platypus 2 and Vicuna\nhad comparable performance, with MedAlpaca showing the worst performance. The paired t-test for\ndifference on MVP and Llama 2 outputs resulted in a statistically significant p-value of\napproximately .\nMVP also showed the best performance in the case of 64-word contexts, with the APRF of\n(0.70, 0.72, 0.69, 0.69). Stable Platypus 2 had the second-best performance of\n(0.67, 0.67, 0.66, 0.66). Llama 2 (SC Prompting) had the best overall precision score but did not\nperform as well in other metrics. Llama 2 and Vicuna had comparable performance, while MedAlpaca had\nthe worst APRF values. The paired t-test on MVP and Stable Platypus 2 resulted in a statistically\nsignificant p-value of approximately .\nAs for 128-word context windows, MVP and Llama 2 showed the best performance, with the APRF values\nof (0.62, 0.62, 0.61, 0.60) and (0.62, 0.65, 0.60, 0.58), respectively. Llama 2 (SC Prompting)\nshowed the best precision, but underperformed in other metrics. Stable Platypus 2 and Vicuna\nperformed similarly, with MedAlpaca performing the worst. The paired t-test on MVP and Stable\nPlatypus 2 gave a statistically significant p-value of approximately .\nFor 256-word context windows, MVP and Vicuna performed similarly, with APRF values of\n(0.68, 0.67, 0.67, 0.67) and (0.67, 0.71, 0.68, 0.66), respectively. Llama 2, Llama 2 (SC Prompting),\nand Stable Platypus 2 had comparable performance, with MedAlpaca showing the worst\nperformance. The p-value for the paired t-test on MVP and Vicuna was ,\ndemonstrating statistical significance in the mean difference of outputs.\nOverall, MVP showed the best performance across all benchmarks. The difference in outputs of MVP and\nsecond-best models was verified by the paired t-test for difference, with p-values always being less\nthan the cutoff value of 0.05. MVP also outperformed Llama 2 (SC Prompting) across all context\nsizes."
94
+ },
95
+ {
96
+ "section_id": "6.2",
97
+ "parent_section_id": "6",
98
+ "section_name": "Rare Disease Classification",
99
+ "text": "For the 32-word context windows, Llama 2 (SC Prompting) performed the best with the APRF of\n(0.84, 0.78, 0.67, 0.72). MVP was the second-best approach and had the APRF of\n(0.80, 0.76, 0.64, 0.69). Llama 2, Stable Platypus 2, and Vicuna performed similarly, with MedAlpaca\nperforming the worst. The paired t-test between Llama 2 (SC Prompting) and MVP gave a statistically\nsignificant p-value of approximately .\nAs for the 64-word context windows, MVP performed the best with the APRF values of\n(0.81, 0.75, 0.65, 0.69). Llama 2 (SC Prompting) marginally underperformed with the APRF of\n(0.79, 0.79, 0.63, 0.70). Llama 2, Stable Platypus 2, and Vicuna had similar performance. MedAlpaca\nshowed the worst performance. The paired t-test on the outputs of MVP and Llama 2 (SC Prompting)\nresulted in the statistically significant p-value of .\nIn 128-word context experiments, Llama 2 (SC Prompting) and MVP performed the best, with the APRF\nvalues of (0.75, 0.79, 0.60, 0.68) and (0.75, 0.77, 0.60, 0.67), respectively. MVP performed\nslightly underperformed. Llama 2, Stable Platypus 2, and Vicuna had similar performance, with\nMedAlpaca performing the worst. The p-value value for Llama 2 (SC Prompting) and MVP was\napproximately , which is not statistically significant.\nFinally, for 256-word experiments, Stable Platypus 2 and Llama 2 (SC Prompting) were first and\nsecond best models, with the APRF scores of (0.66, 0.76, 0.53, 0.61) and (0.62, 0.80, 0.50, 0.60),\nrespectively. Llama 2, Vicuna, and MVP had similar performance, and MedAlpaca performed the worst.\nThe p-value for the t-test between Stable Platypus 2 and Llama 2 (SC Prompting) was approximately\n, meaning that the mean difference between model outputs was not\nstatistically significant.\nIn rare disease classification results, MVP and Llama 2 (SC Prompting) showed the best overall\nresults. However, unlike rare disease identification, in the case of 256-word context windows, both\nStable Platypus 2 and Llama 2 (SC Prompting) marginally outperformed MVP."
100
+ },
101
+ {
102
+ "section_id": "6.3",
103
+ "parent_section_id": "6",
104
+ "section_name": "JSON Compliance in Model Evaluation",
105
+ "text": "Table 3 ###reference_### shows the number of model responses that were not in partial or complete\nJSON-compliant format. These leftover entries were human-annotated. This was a substantially faster\nmethod, as approximately 85.9% of the 4096 results did not require manual annotation. As shown in\nthe first column of the table, MedAlpaca had the worst compliance rate for JSON format, with 197\nerrors across 4 context sizes. Llama 2 on the other hand performed very well, with only 33 errors."
106
+ },
107
+ {
108
+ "section_id": "7",
109
+ "parent_section_id": null,
110
+ "section_name": "Ablation Study",
111
+ "text": "###figure_3### We conducted ablation study to remove individual models from MVP and examined the model performance.\nFigure 3 ###reference_### shows the results of the ablation study. Note that Models-Vote\nPrompting label denotes the original MVP performance with all models, with no model model excluded.\nOverall, removing a model from the MVP ensemble decreases performance across APRF. However, in the\ncase of MedAlpaca, removing the model causes MVP to have a similar or slightly improved performance\nto the original four-model ensemble. An explanation for the change in performance could be\nMedAlpaca\u2019s underperformance on the given tasks (relative to the other models). This behavior is\nconsistent across the identification and classification tasks for all tested context sizes.\nFurthermore, for the rare disease identification task at 128-word context size, removing individual\nmodels does not result in significant performance degradation (i.e., the resulting performance is\ncomparable to that of the original MVP). Such behavior may be due to the models exhibiting similar\nperformance at the 128-word context and Vicuna not performing as well as it does for other context\nsizes.\nBoth the number of models and models themselves are hyperparameters, and we hypothesize that\ndetermining optimal values may be domain and task-dependent. In the following section, we also note\nthat this could be a potential future research direction."
112
+ },
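A sketch of the leave-one-out ablation: re-run the vote aggregation with one model's votes removed and recompute accuracy. `classify_correct` is the aggregation sketched earlier; the `all_votes` structure (model name mapped to per-sample predicted labels) is an assumption.

```python
def ablate(all_votes: dict[str, list[str]], true_labels: list[str],
           excluded: str) -> float:
    """Accuracy of the ensemble with `excluded` removed."""
    kept = [votes for name, votes in all_votes.items() if name != excluded]
    per_sample = list(zip(*kept))      # one tuple of votes per sample
    correct = sum(
        classify_correct(list(votes), label)
        for votes, label in zip(per_sample, true_labels)
    )
    return correct / len(true_labels)
```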
113
+ {
114
+ "section_id": "8",
115
+ "parent_section_id": null,
116
+ "section_name": "Limitations and Future Work",
117
+ "text": "There is much to be explored and expanded upon beyond what we presented in the paper. First, using\nother LLMs for model evaluation, such as Falcon [50 ###reference_x50###] whose training data differs from\nthat of models used in the paper, could be interesting. Second, performing the same tasks using\nsmaller LLMs (e.g., 7 billion parameter models) may show promising results. Third, increasing the\nnumber of models used in MVP can be another research direction. Fourth, the criteria for the\nselection of models for MVP needs further study. Fifth, using MVP for other domains or tasks can\nalso be an option. The novel rare disease dataset can help define new evaluation tasks, and\nincorporating different prompting approaches may help improve MVP performance. Note that increasing\ncontext size decreased model performance (in both tasks). This behavior may have been caused by\nincreased ambiguity, as a larger context window often contains more medical terms, which increases\ncomplexity and reduces performance. However, a thorough study is needed to examine and better\nexplain the behavior. Finally, using MVP with prompts incorporating JSON for depicting other\nresponse structures (e.g., nested relationships, graph-like structures, etc.) can also be a\npotential future avenue for exploration."
118
+ },
119
+ {
120
+ "section_id": "9",
121
+ "parent_section_id": null,
122
+ "section_name": "Conclusion",
123
+ "text": "We proposed Models-Vote Prompting (MVP), a prompting approach that, in addition to its promising\nperformance on the novel rare disease dataset, introduces a new perspective to rare disease\nidentification and classification tasks. Through our experiments, we evaluated and quantified the\nextent of the performance improvements achieved by our method. To support the hypothesis presented\nin the paper, we conducted statistical hypothesis tests that demonstrated statistical significance\nacross experiments. Furthermore, we conducted an ablation study to examine the behavior of MVP after\nexcluding individual models. We also explored the feasibility of using JSON-augmented prompts, which\nproved effective and reduced the need for manual, human annotation of the LLM-generated results."
124
+ }
125
+ ],
126
+ "appendix": [],
127
+ "tables": {
128
+ "1": {
129
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx4.T1\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx4.T1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.1.1\" style=\"font-size:90%;\">Disease</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.2.1\" style=\"font-size:90%;\">MIMIC-IV</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.3.1\" style=\"font-size:90%;\">Filtered</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.4.1\" style=\"font-size:90%;\">AR 1</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx4.T1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx4.T1.1.1.1.5.1\" style=\"font-size:90%;\">AR 2</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx4.T1.1.2.1.1\"><span class=\"ltx_text\" id=\"Sx4.T1.1.2.1.1.1\" style=\"font-size:90%;\">Babesiosis</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.2.1.2.1\" style=\"font-size:90%;\">320</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.3\"><span class=\"ltx_text\" id=\"Sx4.T1.1.2.1.3.1\" style=\"font-size:90%;\">106</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.4\"><span class=\"ltx_text\" id=\"Sx4.T1.1.2.1.4.1\" style=\"font-size:90%;\">16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx4.T1.1.2.1.5\"><span class=\"ltx_text\" id=\"Sx4.T1.1.2.1.5.1\" style=\"font-size:90%;\">64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.3.2\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T1.1.3.2.1\"><span class=\"ltx_text\" id=\"Sx4.T1.1.3.2.1.1\" style=\"font-size:90%;\">Cryptogenic Organizing Pneumonia</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.3.2.2.1\" style=\"font-size:90%;\">304</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.3\"><span class=\"ltx_text\" id=\"Sx4.T1.1.3.2.3.1\" style=\"font-size:90%;\">110</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.4\"><span class=\"ltx_text\" id=\"Sx4.T1.1.3.2.4.1\" style=\"font-size:90%;\">16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.3.2.5\"><span class=\"ltx_text\" id=\"Sx4.T1.1.3.2.5.1\" style=\"font-size:90%;\">64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.4.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx4.T1.1.4.3.1\"><span class=\"ltx_text\" id=\"Sx4.T1.1.4.3.1.1\" style=\"font-size:90%;\">Giant Cell Arteritis</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.4.3.2.1\" style=\"font-size:90%;\">406</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.3\"><span class=\"ltx_text\" id=\"Sx4.T1.1.4.3.3.1\" style=\"font-size:90%;\">115</span></td>\n<td class=\"ltx_td ltx_align_center\" 
id=\"Sx4.T1.1.4.3.4\"><span class=\"ltx_text\" id=\"Sx4.T1.1.4.3.4.1\" style=\"font-size:90%;\">16</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx4.T1.1.4.3.5\"><span class=\"ltx_text\" id=\"Sx4.T1.1.4.3.5.1\" style=\"font-size:90%;\">64</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx4.T1.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx4.T1.1.5.4.1\"><span class=\"ltx_text\" id=\"Sx4.T1.1.5.4.1.1\" style=\"font-size:90%;\">Graft Versus Host Disease</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.5.4.2\"><span class=\"ltx_text\" id=\"Sx4.T1.1.5.4.2.1\" style=\"font-size:90%;\">429</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.5.4.3\"><span class=\"ltx_text\" id=\"Sx4.T1.1.5.4.3.1\" style=\"font-size:90%;\">106</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.5.4.4\"><span class=\"ltx_text\" id=\"Sx4.T1.1.5.4.4.1\" style=\"font-size:90%;\">16</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"Sx4.T1.1.5.4.5\"><span class=\"ltx_text\" id=\"Sx4.T1.1.5.4.5.1\" style=\"font-size:90%;\">64</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Number of documents per disease.</figcaption>\n</figure>",
130
+ "capture": "Table 1: Number of documents per disease."
131
+ },
132
+ "2": {
133
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx6.T2\">\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"Sx6.T2.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx6.T2.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T2.1.1.1.1.1\" style=\"font-size:80%;\">Model</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx6.T2.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T2.1.1.1.2.1\" style=\"font-size:80%;\">Context</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx6.T2.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T2.1.1.1.3.1\" style=\"font-size:80%;\">Identification (APRF)</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"Sx6.T2.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T2.1.1.1.4.1\" style=\"font-size:80%;\">Classification (APRF)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.2.2.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.2.2.1.1\" style=\"font-size:80%;\">Llama 2</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"Sx6.T2.1.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.2.2.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.2.2.3.1\" style=\"font-size:80%;\">0.63, 0.68, 0.61, 0.59</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.2.2.4\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.2.2.4.1\" style=\"font-size:80%;\">0.77, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.2.2.4.2\" style=\"font-size:80%;\">0.79</span><span class=\"ltx_text\" id=\"Sx6.T2.1.2.2.4.3\" style=\"font-size:80%;\">, 0.62, 0.69</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.3.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.3.3.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.3.3.1.1\" style=\"font-size:80%;\">MedAlpaca</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.3.3.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.3.3.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.3.3.3.1\" style=\"font-size:80%;\">0.48, 0.51, 0.50, 0.41</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.3.3.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.3.3.4.1\" style=\"font-size:80%;\">0.32, 0.62, 0.26, 0.29</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.4.4.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.4.4.1.1\" style=\"font-size:80%;\">Stable Platypus 2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.4.4.2\"><span class=\"ltx_text\" id=\"Sx6.T2.1.4.4.2.1\" style=\"font-size:80%;\">32 words</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.4.4.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.4.4.3.1\" style=\"font-size:80%;\">0.63, 0.63, 0.63, 0.63</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.4.4.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.4.4.4.1\" style=\"font-size:80%;\">0.74, 0.78, 0.59, 0.67</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.5.5.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.5.5.1.1\" style=\"font-size:80%;\">Vicuna</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.5.5.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.5.5.3.1\" style=\"font-size:80%;\">0.60, 
0.63, 0.61, 0.59</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.5.5.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.5.5.4.1\" style=\"font-size:80%;\">0.70, 0.70, 0.56, 0.61</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.6.6\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.6.6.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.6.6.1.1\" style=\"font-size:80%;\">Llama 2 (Self-Consistency Prompting)</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.6.6.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.6.6.3\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.6.6.3.1\" style=\"font-size:80%;\">0.60, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.6.6.3.2\" style=\"font-size:80%;\">0.72</span><span class=\"ltx_text\" id=\"Sx6.T2.1.6.6.3.3\" style=\"font-size:80%;\">, 0.57, 0.50</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.6.6.4\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.6.6.4.1\" style=\"font-size:80%;\">0.84</span><span class=\"ltx_text\" id=\"Sx6.T2.1.6.6.4.2\" style=\"font-size:80%;\">, 0.78, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.6.6.4.3\" style=\"font-size:80%;\">0.67</span><span class=\"ltx_text\" id=\"Sx6.T2.1.6.6.4.4\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.6.6.4.5\" style=\"font-size:80%;\">0.72</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.7.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.7.7.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.7.7.1.1\" style=\"font-size:80%;\">Models-Vote Prompting</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.7.7.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.7.7.3\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.7.7.3.1\" style=\"font-size:80%;\">0.66</span><span class=\"ltx_text\" id=\"Sx6.T2.1.7.7.3.2\" style=\"font-size:80%;\">, 0.66, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.7.7.3.3\" style=\"font-size:80%;\">0.65</span><span class=\"ltx_text\" id=\"Sx6.T2.1.7.7.3.4\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.7.7.3.5\" style=\"font-size:80%;\">0.65</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.7.7.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.7.7.4.1\" style=\"font-size:80%;\">0.80, 0.76, 0.64, 0.69</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.8.8.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.8.8.1.1\" style=\"font-size:80%;\">Llama 2</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"Sx6.T2.1.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.8.8.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.8.8.3.1\" style=\"font-size:80%;\">0.63, 0.67, 0.61, 0.58</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.8.8.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.8.8.4.1\" style=\"font-size:80%;\">0.74, 0.76, 0.59, 0.66</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.9.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.9.9.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.9.9.1.1\" style=\"font-size:80%;\">MedAlpaca</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.9.9.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.9.9.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.9.9.3.1\" 
style=\"font-size:80%;\">0.47, 0.49, 0.50, 0.39</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.9.9.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.9.9.4.1\" style=\"font-size:80%;\">0.28, 0.63, 0.23, 0.24</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.10.10\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.10.10.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.10.10.1.1\" style=\"font-size:80%;\">Stable Platypus 2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.10.10.2\"><span class=\"ltx_text\" id=\"Sx6.T2.1.10.10.2.1\" style=\"font-size:80%;\">64 words</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.10.10.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.10.10.3.1\" style=\"font-size:80%;\">0.67, 0.67, 0.66, 0.66</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.10.10.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.10.10.4.1\" style=\"font-size:80%;\">0.76, 0.78, 0.61, 0.68</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.11.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.11.11.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.11.11.1.1\" style=\"font-size:80%;\">Vicuna</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.11.11.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.11.11.3.1\" style=\"font-size:80%;\">0.62, 0.64, 0.63, 0.62</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.11.11.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.11.11.4.1\" style=\"font-size:80%;\">0.77, 0.75, 0.62, 0.67</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.12.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.12.12.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.12.12.1.1\" style=\"font-size:80%;\">Llama 2 (Self-Consistency Prompting)</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.12.12.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.12.12.3\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.12.12.3.1\" style=\"font-size:80%;\">0.60, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.12.12.3.2\" style=\"font-size:80%;\">0.79</span><span class=\"ltx_text\" id=\"Sx6.T2.1.12.12.3.3\" style=\"font-size:80%;\">, 0.57, 0.49</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.12.12.4\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.12.12.4.1\" style=\"font-size:80%;\">0.79, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.12.12.4.2\" style=\"font-size:80%;\">0.79</span><span class=\"ltx_text\" id=\"Sx6.T2.1.12.12.4.3\" style=\"font-size:80%;\">, 0.63, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.12.12.4.4\" style=\"font-size:80%;\">0.70</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.13.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.13.13.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.13.13.1.1\" style=\"font-size:80%;\">Models-Vote Prompting</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.13.13.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.13.13.3\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.13.13.3.1\" style=\"font-size:80%;\">0.70</span><span class=\"ltx_text\" id=\"Sx6.T2.1.13.13.3.2\" style=\"font-size:80%;\">, 0.72, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.13.13.3.3\" style=\"font-size:80%;\">0.69</span><span class=\"ltx_text\" id=\"Sx6.T2.1.13.13.3.4\" style=\"font-size:80%;\">, </span><span 
class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.13.13.3.5\" style=\"font-size:80%;\">0.69</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.13.13.4\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.13.13.4.1\" style=\"font-size:80%;\">0.81</span><span class=\"ltx_text\" id=\"Sx6.T2.1.13.13.4.2\" style=\"font-size:80%;\">, 0.75, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.13.13.4.3\" style=\"font-size:80%;\">0.65</span><span class=\"ltx_text\" id=\"Sx6.T2.1.13.13.4.4\" style=\"font-size:80%;\">, 0.69</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.14.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.14.14.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.14.14.1.1\" style=\"font-size:80%;\">Llama 2</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"Sx6.T2.1.14.14.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.14.14.3\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.14.14.3.1\" style=\"font-size:80%;\">0.62</span><span class=\"ltx_text\" id=\"Sx6.T2.1.14.14.3.2\" style=\"font-size:80%;\">, 0.65, 0.60, 0.58</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.14.14.4\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.14.14.4.1\" style=\"font-size:80%;\">0.70, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.14.14.4.2\" style=\"font-size:80%;\">0.79</span><span class=\"ltx_text\" id=\"Sx6.T2.1.14.14.4.3\" style=\"font-size:80%;\">, 0.56, 0.66</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.15.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.15.15.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.15.15.1.1\" style=\"font-size:80%;\">MedAlpaca</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.15.15.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.15.15.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.15.15.3.1\" style=\"font-size:80%;\">0.48, 0.53, 0.51, 0.40</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.15.15.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.15.15.4.1\" style=\"font-size:80%;\">0.25, 0.59, 0.20, 0.17</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.16.16\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.16.16.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.16.16.1.1\" style=\"font-size:80%;\">Stable Platypus 2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.16.16.2\"><span class=\"ltx_text\" id=\"Sx6.T2.1.16.16.2.1\" style=\"font-size:80%;\">128 words</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.16.16.3\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.16.16.3.1\" style=\"font-size:80%;\">0.61, 0.61, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.16.16.3.2\" style=\"font-size:80%;\">0.61</span><span class=\"ltx_text\" id=\"Sx6.T2.1.16.16.3.3\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.16.16.3.4\" style=\"font-size:80%;\">0.60</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.16.16.4\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.16.16.4.1\" style=\"font-size:80%;\">0.71, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.16.16.4.2\" style=\"font-size:80%;\">0.79</span><span class=\"ltx_text\" id=\"Sx6.T2.1.16.16.4.3\" style=\"font-size:80%;\">, 0.57, 0.66</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"Sx6.T2.1.17.17\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.17.17.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.17.17.1.1\" style=\"font-size:80%;\">Vicuna</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.17.17.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.17.17.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.17.17.3.1\" style=\"font-size:80%;\">0.59, 0.61, 0.60, 0.58</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.17.17.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.17.17.4.1\" style=\"font-size:80%;\">0.70, 0.75, 0.56, 0.63</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.18.18\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.18.18.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.18.18.1.1\" style=\"font-size:80%;\">Llama 2 (Self-Consistency Prompting)</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.18.18.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.18.18.3\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.18.18.3.1\" style=\"font-size:80%;\">0.60, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.18.18.3.2\" style=\"font-size:80%;\">0.68</span><span class=\"ltx_text\" id=\"Sx6.T2.1.18.18.3.3\" style=\"font-size:80%;\">, 0.57, 0.51</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.18.18.4\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.18.18.4.1\" style=\"font-size:80%;\">0.75</span><span class=\"ltx_text\" id=\"Sx6.T2.1.18.18.4.2\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.18.18.4.3\" style=\"font-size:80%;\">0.79</span><span class=\"ltx_text\" id=\"Sx6.T2.1.18.18.4.4\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.18.18.4.5\" style=\"font-size:80%;\">0.60</span><span class=\"ltx_text\" id=\"Sx6.T2.1.18.18.4.6\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.18.18.4.7\" style=\"font-size:80%;\">0.68</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.19.19\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.19.19.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.19.19.1.1\" style=\"font-size:80%;\">Models-Vote Prompting</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.19.19.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.19.19.3\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.19.19.3.1\" style=\"font-size:80%;\">0.62</span><span class=\"ltx_text\" id=\"Sx6.T2.1.19.19.3.2\" style=\"font-size:80%;\">, 0.62, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.19.19.3.3\" style=\"font-size:80%;\">0.61</span><span class=\"ltx_text\" id=\"Sx6.T2.1.19.19.3.4\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.19.19.3.5\" style=\"font-size:80%;\">0.60</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.19.19.4\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.19.19.4.1\" style=\"font-size:80%;\">0.75</span><span class=\"ltx_text\" id=\"Sx6.T2.1.19.19.4.2\" style=\"font-size:80%;\">, 0.77, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.19.19.4.3\" style=\"font-size:80%;\">0.60</span><span class=\"ltx_text\" id=\"Sx6.T2.1.19.19.4.4\" style=\"font-size:80%;\">, 0.67</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.20.20\">\n<td 
class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.20.20.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.20.20.1.1\" style=\"font-size:80%;\">Llama 2</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"Sx6.T2.1.20.20.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.20.20.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.20.20.3.1\" style=\"font-size:80%;\">0.67, 0.67, 0.66, 0.66</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"Sx6.T2.1.20.20.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.20.20.4.1\" style=\"font-size:80%;\">0.61, 0.79, 0.48, 0.59</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.21.21\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.21.21.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.21.21.1.1\" style=\"font-size:80%;\">MedAlpaca</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.21.21.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.21.21.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.21.21.3.1\" style=\"font-size:80%;\">0.47, 0.51, 0.50, 0.35</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.21.21.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.21.21.4.1\" style=\"font-size:80%;\">0.19, 0.32, 0.15, 0.10</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.22.22\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.22.22.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.22.22.1.1\" style=\"font-size:80%;\">Stable Platypus 2</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.22.22.2\"><span class=\"ltx_text\" id=\"Sx6.T2.1.22.22.2.1\" style=\"font-size:80%;\">256 words</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.22.22.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.22.22.3.1\" style=\"font-size:80%;\">0.61, 0.61, 0.60, 0.60</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.22.22.4\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.22.22.4.1\" style=\"font-size:80%;\">0.66</span><span class=\"ltx_text\" id=\"Sx6.T2.1.22.22.4.2\" style=\"font-size:80%;\">, 0.76, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.22.22.4.3\" style=\"font-size:80%;\">0.53</span><span class=\"ltx_text\" id=\"Sx6.T2.1.22.22.4.4\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.22.22.4.5\" style=\"font-size:80%;\">0.61</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.23.23\">\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.23.23.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.23.23.1.1\" style=\"font-size:80%;\">Vicuna</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.23.23.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.23.23.3\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.23.23.3.1\" style=\"font-size:80%;\">0.67, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.23.23.3.2\" style=\"font-size:80%;\">0.71</span><span class=\"ltx_text\" id=\"Sx6.T2.1.23.23.3.3\" style=\"font-size:80%;\">, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.23.23.3.4\" style=\"font-size:80%;\">0.68</span><span class=\"ltx_text\" id=\"Sx6.T2.1.23.23.3.5\" style=\"font-size:80%;\">, 0.66</span>\n</td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.23.23.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.23.23.4.1\" style=\"font-size:80%;\">0.52, 0.70, 0.41, 0.50</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.24.24\">\n<td class=\"ltx_td ltx_align_left\" 
id=\"Sx6.T2.1.24.24.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.24.24.1.1\" style=\"font-size:80%;\">Llama 2 (Self-Consistency Prompting)</span></td>\n<td class=\"ltx_td\" id=\"Sx6.T2.1.24.24.2\"></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.24.24.3\"><span class=\"ltx_text\" id=\"Sx6.T2.1.24.24.3.1\" style=\"font-size:80%;\">0.63, 0.65, 0.62, 0.60</span></td>\n<td class=\"ltx_td ltx_align_left\" id=\"Sx6.T2.1.24.24.4\">\n<span class=\"ltx_text\" id=\"Sx6.T2.1.24.24.4.1\" style=\"font-size:80%;\">0.62, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.24.24.4.2\" style=\"font-size:80%;\">0.80</span><span class=\"ltx_text\" id=\"Sx6.T2.1.24.24.4.3\" style=\"font-size:80%;\">, 0.50, 0.60</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T2.1.25.25\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx6.T2.1.25.25.1\"><span class=\"ltx_text ltx_font_italic\" id=\"Sx6.T2.1.25.25.1.1\" style=\"font-size:80%;\">Models-Vote Prompting</span></td>\n<td class=\"ltx_td ltx_border_bb\" id=\"Sx6.T2.1.25.25.2\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx6.T2.1.25.25.3\">\n<span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.25.25.3.1\" style=\"font-size:80%;\">0.68</span><span class=\"ltx_text\" id=\"Sx6.T2.1.25.25.3.2\" style=\"font-size:80%;\">, 0.67, 0.67, </span><span class=\"ltx_text ltx_font_bold ltx_framed_underline\" id=\"Sx6.T2.1.25.25.3.3\" style=\"font-size:80%;\">0.67</span>\n</td>\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"Sx6.T2.1.25.25.4\"><span class=\"ltx_text\" id=\"Sx6.T2.1.25.25.4.1\" style=\"font-size:80%;\">0.61, 0.73, 0.49, 0.56</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:80%;\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Experimental Results on Identification and Classification of Rare Diseases</figcaption>\n</figure>",
134
+ "capture": "Table 2: Experimental Results on Identification and Classification of Rare Diseases"
135
+ },
136
+ "3": {
137
+ "table_html": "<figure class=\"ltx_table\" id=\"Sx6.T3\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"Sx6.T3.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"Sx6.T3.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_th_row ltx_border_tt\" id=\"Sx6.T3.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T3.1.1.1.1.1\" style=\"font-size:90%;\">Model</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx6.T3.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T3.1.1.1.2.1\" style=\"font-size:90%;\">No JSON</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_tt\" id=\"Sx6.T3.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"Sx6.T3.1.1.1.3.1\" style=\"font-size:90%;\">Compliance</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"Sx6.T3.1.2.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_t\" id=\"Sx6.T3.1.2.1.1\"><span class=\"ltx_text\" id=\"Sx6.T3.1.2.1.1.1\" style=\"font-size:90%;\">Llama 2</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx6.T3.1.2.1.2\"><span class=\"ltx_text\" id=\"Sx6.T3.1.2.1.2.1\" style=\"font-size:90%;\">33</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"Sx6.T3.1.2.1.3\"><span class=\"ltx_text\" id=\"Sx6.T3.1.2.1.3.1\" style=\"font-size:90%;\">96.8%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T3.1.3.2\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx6.T3.1.3.2.1\"><span class=\"ltx_text\" id=\"Sx6.T3.1.3.2.1.1\" style=\"font-size:90%;\">MedAlpaca</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx6.T3.1.3.2.2\"><span class=\"ltx_text\" id=\"Sx6.T3.1.3.2.2.1\" style=\"font-size:90%;\">197</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx6.T3.1.3.2.3\"><span class=\"ltx_text\" id=\"Sx6.T3.1.3.2.3.1\" style=\"font-size:90%;\">80.8%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T3.1.4.3\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx6.T3.1.4.3.1\"><span class=\"ltx_text\" id=\"Sx6.T3.1.4.3.1.1\" style=\"font-size:90%;\">Stable Platypus 2</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx6.T3.1.4.3.2\"><span class=\"ltx_text\" id=\"Sx6.T3.1.4.3.2.1\" style=\"font-size:90%;\">185</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx6.T3.1.4.3.3\"><span class=\"ltx_text\" id=\"Sx6.T3.1.4.3.3.1\" style=\"font-size:90%;\">82.0%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T3.1.5.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row\" id=\"Sx6.T3.1.5.4.1\"><span class=\"ltx_text\" id=\"Sx6.T3.1.5.4.1.1\" style=\"font-size:90%;\">Vicuna</span></th>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx6.T3.1.5.4.2\"><span class=\"ltx_text\" id=\"Sx6.T3.1.5.4.2.1\" style=\"font-size:90%;\">162</span></td>\n<td class=\"ltx_td ltx_align_center\" id=\"Sx6.T3.1.5.4.3\"><span class=\"ltx_text\" id=\"Sx6.T3.1.5.4.3.1\" style=\"font-size:90%;\">84.2%</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"Sx6.T3.1.6.5\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_row ltx_border_bb ltx_border_t\" id=\"Sx6.T3.1.6.5.1\"><span class=\"ltx_text\" id=\"Sx6.T3.1.6.5.1.1\" style=\"font-size:90%;\">Overall JSON</span></th>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" id=\"Sx6.T3.1.6.5.2\"><span class=\"ltx_text\" id=\"Sx6.T3.1.6.5.2.1\" style=\"font-size:90%;\">577</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_t\" 
id=\"Sx6.T3.1.6.5.3\"><span class=\"ltx_text\" id=\"Sx6.T3.1.6.5.3.1\" style=\"font-size:90%;\">85.9%</span></td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\" style=\"font-size:90%;\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>JSON Compliance by Model</figcaption>\n</figure>",
138
+ "capture": "Table 3: JSON Compliance by Model"
139
+ }
140
+ },
141
+ "image_paths": {
142
+ "1": {
143
+ "figure_path": "2308.12890v3_figure_1.png",
144
+ "caption": "Figure 1: Rare Disease Dataset Pipeline",
145
+ "url": "http://arxiv.org/html/2308.12890v3/extracted/5364189/image/rare_disease_dataset_pipeline.png"
146
+ },
147
+ "2": {
148
+ "figure_path": "2308.12890v3_figure_2.png",
149
+ "caption": "Figure 2: CoT-Augmented Models-Vote Prompt Engineering",
150
+ "url": "http://arxiv.org/html/2308.12890v3/extracted/5364189/image/prompt_engineering.png"
151
+ },
152
+ "3": {
153
+ "figure_path": "2308.12890v3_figure_3.png",
154
+ "caption": "Figure 3: Ablation Study Results",
155
+ "url": "http://arxiv.org/html/2308.12890v3/extracted/5364189/image/ablation_study.png"
156
+ }
157
+ },
158
+ "validation": true,
159
+ "references": [
160
+ {
161
+ "1": {
162
+ "title": "\u201cA Survey of Large Language Models\u201d, 2023",
163
+ "author": "Wayne Xin Zhao et al.",
164
+ "venue": "arXiv:2303.18223 [cs.CL]",
165
+ "url": null
166
+ }
167
+ },
168
+ {
169
+ "2": {
170
+ "title": "\u201cBERT: Pre-training of Deep Bidirectional Transformers for Language Understanding\u201d",
171
+ "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova",
172
+ "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
173
+ "url": null
174
+ }
175
+ },
176
+ {
177
+ "3": {
178
+ "title": "\u201cRoBERTa: A Robustly Optimized {BERT} Pretraining Approach\u201d, 2020",
179
+ "author": "Yinhan Liu et al.",
180
+ "venue": "URL: https://openreview.net/forum?id=SyxS0T4tvS",
181
+ "url": null
182
+ }
183
+ },
184
+ {
185
+ "4": {
186
+ "title": "\u201cBART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension\u201d",
187
+ "author": "Mike Lewis et al.",
188
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
189
+ "url": null
190
+ }
191
+ },
192
+ {
193
+ "5": {
194
+ "title": "\u201cExploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer\u201d",
195
+ "author": "Colin Raffel et al.",
196
+ "venue": "In J. Mach. Learn. Res. 21.1",
197
+ "url": null
198
+ }
199
+ },
200
+ {
201
+ "6": {
202
+ "title": "\u201cLanguage Models are Unsupervised Multitask Learners\u201d, 2019",
203
+ "author": "Alec Radford et al.",
204
+ "venue": null,
205
+ "url": null
206
+ }
207
+ },
208
+ {
209
+ "7": {
210
+ "title": "\u201cLanguage Models are Few-Shot Learners\u201d",
211
+ "author": "Tom Brown et al.",
212
+ "venue": "In Advances in Neural Information Processing Systems 33",
213
+ "url": null
214
+ }
215
+ },
216
+ {
217
+ "8": {
218
+ "title": "\u201cLLaMA: Open and Efficient Foundation Language Models\u201d, 2023",
219
+ "author": "Hugo Touvron et al.",
220
+ "venue": "arXiv:2302.13971 [cs.CL]",
221
+ "url": null
222
+ }
223
+ },
224
+ {
225
+ "9": {
226
+ "title": "\u201cHigh-Resolution Image Synthesis With Latent Diffusion Models\u201d",
227
+ "author": "Robin Rombach et al.",
228
+ "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 10684\u201310695",
229
+ "url": null
230
+ }
231
+ },
232
+ {
233
+ "10": {
234
+ "title": "\u201cIntroducing ChatGPT\u201d",
235
+ "author": "OpenAI",
236
+ "venue": "In Introducing ChatGPT",
237
+ "url": null
238
+ }
239
+ },
240
+ {
241
+ "11": {
242
+ "title": "\u201cChatGPT sets record for fastest-growing user base - analyst note\u201d",
243
+ "author": "Krystal Hu",
244
+ "venue": "In Reuters, 2023",
245
+ "url": null
246
+ }
247
+ },
248
+ {
249
+ "12": {
250
+ "title": "\u201cPre-Train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing\u201d",
251
+ "author": "Pengfei Liu et al.",
252
+ "venue": "In ACM Comput. Surv. 55.9",
253
+ "url": null
254
+ }
255
+ },
256
+ {
257
+ "13": {
258
+ "title": "\u201cLearning To Retrieve Prompts for In-Context Learning\u201d",
259
+ "author": "Ohad Rubin, Jonathan Herzig and Jonathan Berant",
260
+ "venue": "In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
261
+ "url": null
262
+ }
263
+ },
264
+ {
265
+ "14": {
266
+ "title": "\u201cUnderstanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention\u201d",
267
+ "author": "Eunkyung Jo, Daniel A. Epstein, Hyunhoon Jung and Young-Ho Kim",
268
+ "venue": "In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI \u201923",
269
+ "url": null
270
+ }
271
+ },
272
+ {
273
+ "15": {
274
+ "title": "\u201cA Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT\u201d, 2023",
275
+ "author": "Jules White et al.",
276
+ "venue": "arXiv:2302.11382 [cs.SE]",
277
+ "url": null
278
+ }
279
+ },
280
+ {
281
+ "16": {
282
+ "title": "\u201cTraining language models to follow instructions with human feedback\u201d, 2022",
283
+ "author": "Long Ouyang et al.",
284
+ "venue": "arXiv:2203.02155 [cs.CL]",
285
+ "url": null
286
+ }
287
+ },
288
+ {
289
+ "17": {
290
+ "title": "\u201cChain-of-Thought Prompting Elicits Reasoning in Large Language Models\u201d, 2023",
291
+ "author": "Jason Wei et al.",
292
+ "venue": "arXiv:2201.11903 [cs.CL]",
293
+ "url": null
294
+ }
295
+ },
296
+ {
297
+ "18": {
298
+ "title": "\u201cSelf-Consistency Improves Chain of Thought Reasoning in Language Models\u201d, 2023",
299
+ "author": "Xuezhi Wang et al.",
300
+ "venue": "arXiv:2203.11171 [cs.CL]",
301
+ "url": null
302
+ }
303
+ },
304
+ {
305
+ "19": {
306
+ "title": "\u201cPANet: Few-Shot Image Semantic Segmentation with Prototype Alignment\u201d, 2020",
307
+ "author": "Kaixin Wang et al.",
308
+ "venue": "arXiv:1908.06391 [cs.CV]",
309
+ "url": null
310
+ }
311
+ },
312
+ {
313
+ "20": {
314
+ "title": "\u201cFew Shot Speaker Recognition using Deep Neural Networks\u201d, 2019",
315
+ "author": "Prashant Anand, Ajeet Kumar Singh, Siddharth Srivastava and Brejesh Lall",
316
+ "venue": "arXiv:1904.08775 [eess.AS]",
317
+ "url": null
318
+ }
319
+ },
320
+ {
321
+ "21": {
322
+ "title": "\u201cFew-Shot Named Entity Recognition: A Comprehensive Study\u201d, 2020",
323
+ "author": "Jiaxin Huang et al.",
324
+ "venue": "arXiv:2012.14978 [cs.CL]",
325
+ "url": null
326
+ }
327
+ },
328
+ {
329
+ "22": {
330
+ "title": "\u201cFew-Shot Question Answering by Pretraining Span Selection\u201d",
331
+ "author": "Ori Ram et al.",
332
+ "venue": "In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
333
+ "url": null
334
+ }
335
+ },
336
+ {
337
+ "23": {
338
+ "title": "\u201cTrue Few-Shot Learning with Prompts\u2014A Real-World Perspective\u201d",
339
+ "author": "Timo Schick and Hinrich Sch\u00fctze",
340
+ "venue": "In Transactions of the Association for Computational Linguistics 10, 2022, pp. 716\u2013731",
341
+ "url": null
342
+ }
343
+ },
344
+ {
345
+ "24": {
346
+ "title": "\u201c21 U.S.C. \u00a7 360bb\u201d Designation of drugs for rare diseases or conditions, U.S. Code Title 21, Section 360bb, 1983",
347
+ "author": "The United States Congress",
348
+ "venue": null,
349
+ "url": null
350
+ }
351
+ },
352
+ {
353
+ "25": {
354
+ "title": "\u201cEUR-LEX - 32000R0141 - EN - EUR-LEX\u201d, 1999",
355
+ "author": "The European Parliament and the Council of the European Union",
356
+ "venue": "URL: http://data.europa.eu/eli/reg/2000/141/oj",
357
+ "url": null
358
+ }
359
+ },
360
+ {
361
+ "26": {
362
+ "title": "\u201cOptimising the use of electronic health records to estimate the incidence of rheumatoid arthritis in primary care: what information is hidden in free text?\u201d",
363
+ "author": "Elizabeth Ford et al.",
364
+ "venue": "In BMC Medical Research Methodology 13.1",
365
+ "url": null
366
+ }
367
+ },
368
+ {
369
+ "27": {
370
+ "title": "\u201cNew Paradigms for Patient-Centered Outcomes Research in Electronic Medical Records: An example of detecting urinary incontinence following prostatectomy\u201d",
371
+ "author": "Tina Hernandez-Boussard et al.",
372
+ "venue": "In eGEMs (Generating Evidence & + Methods to improve patient outcomes) 4.3",
373
+ "url": null
374
+ }
375
+ },
376
+ {
377
+ "28": {
378
+ "title": "\u201cThe Value of Unstructured Electronic Health Record Data in Geriatric Syndrome Case Identification\u201d",
379
+ "author": "Hadi Kharrazi et al.",
380
+ "venue": "In Journal of the American Geriatrics Society 66.8",
381
+ "url": null
382
+ }
383
+ },
384
+ {
385
+ "29": {
386
+ "title": "\u201cOntology-driven and weakly supervised rare disease identification from clinical notes\u201d",
387
+ "author": "Hang Dong et al.",
388
+ "venue": "In BMC Medical Informatics and Decision Making 23.1, 2023, pp. 86",
389
+ "url": null
390
+ }
391
+ },
392
+ {
393
+ "30": {
394
+ "title": "\u201cMIMIC-III Clinical Database\u201d",
395
+ "author": "Alistair Johnson, Tom Pollard and Roger Mark",
396
+ "venue": "PhysioNet, 2023",
397
+ "url": null
398
+ }
399
+ },
400
+ {
401
+ "31": {
402
+ "title": "\u201cMIMIC-III, a freely accessible critical care database\u201d",
403
+ "author": "Alistair E.W. Johnson et al.",
404
+ "venue": "In Scientific Data 3.1",
405
+ "url": null
406
+ }
407
+ },
408
+ {
409
+ "32": {
410
+ "title": "\u201cMIMIC-IV\u201d",
411
+ "author": "Alistair Johnson et al.",
412
+ "venue": "PhysioNet, 2023",
413
+ "url": null
414
+ }
415
+ },
416
+ {
417
+ "33": {
418
+ "title": "\u201cMIMIC-IV\u201d",
419
+ "author": "Alistair Johnson et al.",
420
+ "venue": "PhysioNet, 2021",
421
+ "url": null
422
+ }
423
+ },
424
+ {
425
+ "34": {
426
+ "title": "\u201cFormal Algorithms for Transformers\u201d, 2022",
427
+ "author": "Mary Phuong and Marcus Hutter",
428
+ "venue": "arXiv:2207.09238 [cs.LG]",
429
+ "url": null
430
+ }
431
+ },
432
+ {
433
+ "35": {
434
+ "title": "\u201cAsk Me Anything: A simple strategy for prompting language models\u201d",
435
+ "author": "Simran Arora et al.",
436
+ "venue": "In The Eleventh International Conference on Learning Representations, 2023",
437
+ "url": null
438
+ }
439
+ },
440
+ {
441
+ "36": {
442
+ "title": "\u201cRandom Forests\u201d",
443
+ "author": "Leo Breiman",
444
+ "venue": "In Machine Learning 45.1, 2001, pp. 5\u201332",
445
+ "url": null
446
+ }
447
+ },
448
+ {
449
+ "37": {
450
+ "title": "\u201cA Repository of Conversational Datasets\u201d",
451
+ "author": "Matthew Henderson et al.",
452
+ "venue": "In Proceedings of the First Workshop on NLP for Conversational AI, 2019, pp. 1\u201310",
453
+ "url": null
454
+ }
455
+ },
456
+ {
457
+ "38": {
458
+ "title": "\u201cPretrained Language Models for Biomedical and Clinical Tasks: Understanding and Extending the State-of-the-Art\u201d",
459
+ "author": "Patrick Lewis, Myle Ott, Jingfei Du and Veselin Stoyanov",
460
+ "venue": "In Proceedings of the 3rd Clinical Natural Language Processing Workshop",
461
+ "url": null
462
+ }
463
+ },
464
+ {
465
+ "39": {
466
+ "title": "\u201cDon\u2019t Stop Pretraining: Adapt Language Models to Domains and Tasks\u201d",
467
+ "author": "Suchin Gururangan et al.",
468
+ "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
469
+ "url": null
470
+ }
471
+ },
472
+ {
473
+ "40": {
474
+ "title": "\u201cORDO: an ontology connecting rare disease, epidemiology and genetic data\u201d",
475
+ "author": "Drashtti Vasant et al.",
476
+ "venue": "In Proceedings of ISMB 30, 2014",
477
+ "url": null
478
+ }
479
+ },
480
+ {
481
+ "41": {
482
+ "title": "\u201cInterrater reliability: the kappa statistic\u201d",
483
+ "author": "M.L. McHugh",
484
+ "venue": "In Biochem Med (Zagreb) 22.3, 2012, pp. 276\u2013282",
485
+ "url": null
486
+ }
487
+ },
488
+ {
489
+ "42": {
490
+ "title": "\u201cLlama 2: Open Foundation and Fine-Tuned Chat Models\u201d, 2023",
491
+ "author": "Hugo Touvron et al.",
492
+ "venue": "arXiv:2307.09288 [cs.CL]",
493
+ "url": null
494
+ }
495
+ },
496
+ {
497
+ "43": {
498
+ "title": "\u201cMedAlpaca \u2013 An Open-Source Collection of Medical Conversational AI Models and Training Data\u201d, 2023",
499
+ "author": "Tianyu Han et al.",
500
+ "venue": "arXiv:2304.08247 [cs.CL]",
501
+ "url": null
502
+ }
503
+ },
504
+ {
505
+ "44": {
506
+ "title": "\u201cStable-Platypus2-13B\u201d, 2023",
507
+ "author": "garage-bAInd",
508
+ "venue": "URL: https://huggingface.co/garage-bAInd/Stable-Platypus2-13B",
509
+ "url": null
510
+ }
511
+ },
512
+ {
513
+ "45": {
514
+ "title": "\u201cVicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality\u201d, 2023",
515
+ "author": "The Vicuna Team",
516
+ "venue": "URL: https://vicuna.lmsys.org/",
517
+ "url": null
518
+ }
519
+ },
520
+ {
521
+ "46": {
522
+ "title": "\u201cOpen LLM Leaderboard\u201d, 2023",
523
+ "author": "Huggingface",
524
+ "venue": "URL: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard",
525
+ "url": null
526
+ }
527
+ },
528
+ {
529
+ "47": {
530
+ "title": "\u201cCalibrate Before Use: Improving Few-shot Performance of Language Models\u201d",
531
+ "author": "Zihao Zhao et al.",
532
+ "venue": "In Proceedings of the 38th International Conference on Machine Learning 139, Proceedings of Machine Learning Research",
533
+ "url": null
534
+ }
535
+ },
536
+ {
537
+ "48": {
538
+ "title": "\u201cLlama 2 is here\u201d, 2023",
539
+ "author": "Philipp Schmid, Omar Sanseviero, Pedro Cuenca and Lewis Tunstall",
540
+ "venue": "URL: https://huggingface.co/blog/llama2",
541
+ "url": null
542
+ }
543
+ },
544
+ {
545
+ "49": {
546
+ "title": "\u201cLarge language models in biomedical natural language processing: benchmarks, baselines, and recommendations\u201d, 2023",
547
+ "author": "Qingyu Chen et al.",
548
+ "venue": "arXiv:2305.16326 [cs.CL]",
549
+ "url": null
550
+ }
551
+ },
552
+ {
553
+ "50": {
554
+ "title": "\u201cFalcon-40B: an open large language model with state-of-the-art performance\u201d, 2023",
555
+ "author": "Ebtesam Almazrouei et al.",
556
+ "venue": null,
557
+ "url": null
558
+ }
559
+ }
560
+ ],
561
+ "url": "http://arxiv.org/html/2308.12890v3"
562
+ }
20240123/2308.14190v2.json ADDED
The diff for this file is too large to render. See raw diff
 
20240123/2308.16692v2.json ADDED
@@ -0,0 +1,542 @@
1
+ {
2
+ "title": "SpeechTokenizer: Unified Speech Tokenizer for Speech Language Models",
3
+ "abstract": "Current speech large language models build upon discrete speech representations, which can be categorized into semantic tokens and acoustic tokens. However, existing speech tokens are not specifically designed for speech language modeling. To assess the suitability of speech tokens for building speech language models, we established the first benchmark, SLMTokBench. Our results indicate that neither semantic nor acoustic tokens are ideal for this purpose. Therefore, we propose SpeechTokenizer, a unified speech tokenizer for speech large language models. SpeechTokenizer adopts the Encoder-Decoder architecture with residual vector quantization (RVQ). Unifying semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of speech information hierarchically across different RVQ layers. Furthermore, We construct a Unified Speech Language Model (USLM) leveraging SpeechTokenizer. Experiments show that SpeechTokenizer performs comparably to EnCodec in speech reconstruction and demonstrates strong performance on the SLMTokBench benchmark. Also, USLM outperforms VALL-E in zero-shot Text-to-Speech tasks. Code and models are available at https://github.com/ZhangXInFD/SpeechTokenizer/.",
4
+ "sections": [
5
+ {
6
+ "section_id": "1",
7
+ "parent_section_id": null,
8
+ "section_name": "Introduction",
9
+ "text": "Large language models (OpenAI, 2023 ###reference_21###; Touvron et al., 2023 ###reference_32###) have demonstrated remarkable performance on various natural language processing tasks. This has inspired numerous works to build speech language models (Borsos et al., 2022 ###reference_4###), which have achieved significant breakthroughs across various speech processing tasks (Wang et al., 2023 ###reference_33###; Zhang et al., 2023 ###reference_37###; Rubenstein et al., 2023 ###reference_28###; Dong et al., 2023 ###reference_11###). A key commonality among these works is the utilization of discrete speech representations.\nCurrent discrete speech representations can be categorized into two types: semantic tokens and acoustic tokens (Borsos et al., 2022 ###reference_4###). Semantic tokens are typically from self-supervised pre-trained models with masked language modeling as training objective (Hsu et al., 2021 ###reference_15###; Baevski et al., 2020 ###reference_2###; Chung et al., 2021 ###reference_9###). Derived through k-means clustering on representations from a specific intermediate layer, semantic tokens are depicted as sequences with one-dimensional structure.\nAcoustic tokens can be extracted from neural audio codecs with reconstruction as training objective (Zeghidour et al., 2021 ###reference_36###; D\u00e9fossez et al., 2022 ###reference_12###).\nUtilizing residual vector quantization (RVQ) with hierarchical quantizers for discretization, acoustic tokens are represented as matrices consisting of two dimensions: timesteps and quantizers.\nBuilding upon two speech tokens, there exist three modeling approaches for speech language models, as listed in Table 1 ###reference_###:\ni) Semantic language models are constructed using semantic tokens and employ an external unit vocoder for speech synthesis. (Lakhotia et al., 2021 ###reference_18###; Zhang et al., 2023 ###reference_37###; Hassid et al., 2023 ###reference_13###). While capturing semantically accurate content, their speech generation results in poor quality and a loss of acoustic details.\nii) Acoustic language models are built on acoustic tokens. Taking VALL-E (Wang et al., 2023 ###reference_33###) as an example, despite achieving impressive zero-shot text-to-speech (TTS) capabilities, it still suffers from problems like inaccurate content, due to the complex information within acoustic tokens.\niii) Hierarchical speech language models comprise semantic token language models and acoustic token language models, which capture content information and acoustic details respectively (Borsos et al., 2022 ###reference_4###; Rubenstein et al., 2023 ###reference_28###; Dong et al., 2023 ###reference_11###). This structure shows promising results in both content and speech quality, but the multi-stage modeling approach is more complex, leading to several drawbacks such as error accumulation and slower processing speed. 
Additionally, there is significant information redundancy between semantic tokens and acoustic tokens, which introduces unnecessary modeling complexities.\nAn ideal speech language model should not only accurately model content, but also generate diverse, high-quality speech, while maintaining an architecture of elegant simplicity.\nCorrespondingly, ideal speech tokens should meet the following two key characteristics: i) Strong alignment with text; ii) Effective preservation of speech information.\n###figure_1### However, existing speech tokens are not explicitly designed for speech language modeling, and there has been no exploration into their suitability for building speech language models.\nTo address this gap, we build the Speech Language Model Token Benchmark to assess the suitability of speech tokens for constructing speech language models.\nOur evaluation reveals that semantic tokens exhibit a high alignment with text while losing some information in speech, such as timbre. Acoustic tokens excel in preserving speech information effectively but do not demonstrate a strong alignment with text.\nWith these observations, we aim to build specialized speech tokens designed for speech language models by unifying semantic and acoustic tokens.\nSpecifically, we can conduct information disentanglement in the RVQ structure of acoustic tokens, enabling the first RVQ quantizer to generate tokens containing content information, similar to semantic tokens, while the subsequent quantizers complement the remaining paralinguistic information, as illustrated in Figure 1 ###reference_###.\nWith the above motivation, we propose SpeechTokenizer, a unified speech tokenizer for speech large language models. SpeechTokenizer adopts the Encoder-Decoder architecture with residual vector quantization.\nUnifying semantic and acoustic tokens, SpeechTokenizer disentangles different aspects of\nspeech information hierarchically across different RVQ layers.\nBy employing a semantic teacher to guide the first RVQ quantizer, the first layer tokens can effectively capture content information. With the residual structure, the subsequent quantizers complement the remaining paralinguistic information.\nBuilding upon SpeechTokenizer, we build a Unified Speech Language Model consisting\nof autoregressive and non-autoregressive models. Experimental results show that SpeechTokenizer performs comparably to EnCodec (D\u00e9fossez et al., 2022 ###reference_12###) in speech reconstruction and demonstrates strong performance on the SLMTokBench benchmark. The USLM notably outperforms VALL-E (Wang et al., 2023 ###reference_33###) in zero-shot Text-to-Speech (TTS) tasks.\nOur contributions include the following:\nWe propose SpeechTokenizer, which is specially designed for speech large language models and unifies the semantic and acoustic tokens through disentangling different aspects of speech information hierarchically.\nWe establish SLMTokBench, the first benchmark to assess the suitability of speech tokens for constructing speech language models.\nWe construct a unified speech language model based on SpeechTokenizer, which outperforms VALL-E on the zero-shot TTS task."
10
+ },
11
+ {
12
+ "section_id": "2",
13
+ "parent_section_id": null,
14
+ "section_name": "SLMTokBench: Speech Language Model Token Benchmark",
15
+ "text": "To build powerful speech language models, discrete speech representations should possess the following two key characteristics: i) Strong alignment with text; ii) Effective preservation of speech information. Building on this premise, we establish speech Language Model Token Benchmark (SLMTokBench) to assess the suitability of speech tokens for constructing speech language models."
16
+ },
17
+ {
18
+ "section_id": "2.1",
19
+ "parent_section_id": "2",
20
+ "section_name": "Text Alignment Evaluation",
21
+ "text": "We evaluate the degree of text alignment by estimating the mutual information between speech tokens and text.\nFor notation, denotes discrete speech representations; denotes text; denotes the mutual information; test dataset is denoted as and denotes the downstream model. Through the derivation in Appendix A ###reference_###, we can estimate\n as:\nwhere is the variational distribution and can be parameterized by the downstream model .\nThe downstream model is a vanilla 2-layer 1024-unit BLSTM optimized by CTC loss on characters and it takes speech tokens as inputs. Specifically, for each discrete representation, we first establish an embedding matrix, which can be either randomly initialized or derived from the k-means centroid matrix or vector quantization codebooks obtained during the discretization process. We use the embedding matrix to embed the discrete representations and obtain continuous representations, which are then fed into the downstream models. We train the downstream model on LibriSpeech train-clean-100 subset and use dev-clean subset for estimating mutual information. We also calculate the word error rate (WER) on the test set.\nFor downstream model training, we configure the training setup with a batch size of 32, a learning rate of 1e-4, and a total of 200k global steps."
22
+ },
23
+ {
24
+ "section_id": "2.2",
25
+ "parent_section_id": "2",
26
+ "section_name": "Information Preservation Evaluation",
27
+ "text": "To evaluate the preservation of speech information in discrete speech representations, we convert speech tokens back to speech and evaluate resynthesized speech by automatic metrics on content and timbre.\nWe train a unit-HiFIGAN (Polyak et al., 2021 ###reference_23###) on LibriSpeech dataset to convert HuBERT units to waveform. Notably, to avoid interference from additional information, we don\u2019t supply any speaker information during training. For Encodec tokens, we used the Encodec decoder to\ndirectly produce the waveform.\nContent preservation is evaluated by computing the WER through transcribing the resynthesized speech using the Whisper en-medium model (Radford et al., 2023 ###reference_27###). Timbre preservation is evaluated by utilizing WavLM-TDNN (Chen et al., 2022 ###reference_7###) to calculate speaker similarity between the synthesized and groundtruth speech. We randomly sample 300 speech samples from LibriSpeech test set for evaluation."
28
+ },
29
+ {
30
+ "section_id": "2.3",
31
+ "parent_section_id": "2",
32
+ "section_name": "Comparing Semantic Acoustic Tokens",
33
+ "text": "We use HuBERT L9 units to represent semantic tokens and EnCodec codes to represent acoustic tokens.\nAs shown in Table 3 ###reference_###, semantic tokens achieve high mutual information with text but their resynthesized speech has low speaker similarity.\nAcoustic tokens achieve low WER and high speaker similarity for resynthesized speech but have low mutual information with text."
34
+ },
35
+ {
36
+ "section_id": "3",
37
+ "parent_section_id": null,
38
+ "section_name": "SpeechTokenizer",
39
+ "text": ""
40
+ },
41
+ {
42
+ "section_id": "3.1",
43
+ "parent_section_id": "3",
44
+ "section_name": "Model Structure",
45
+ "text": "Our model is built on the framework of RVQ-GANs, following the same pattern as SoundStream(Zeghidour et al., 2021 ###reference_36###)\nand EnCodec(D\u00e9fossez et al., 2022 ###reference_12###). As depicted in Figure2 ###reference_###, our model uses the convolutional-based encoder-decoder network from EnCodec, which performs temporal downscaling with a chosen striding factor. Notably, we have substituted the two-layer LSTM, originally following the convolution blocks in the EnCodec encoder, with a two-layer BiLSTM to augment the semantic modeling ability. We conduct ablation studies of model structure in Appendix B ###reference_###. We quantize the encoder outputs using Residual Vector Quantization (RVQ), a method that can operate quantizes residuals following an initial quantization steps with distinct codebook. Further details of model structure can be found in Appendix D ###reference_###. During training, a semantic teacher provides semantic representation to guide the residual quantization process.\n###figure_2###"
46
+ },
47
+ {
48
+ "section_id": "3.2",
49
+ "parent_section_id": "3",
50
+ "section_name": "Semantic Distillation",
51
+ "text": "To achieve a hierarchical modeling of diverse information across different RVQ layers, we employ semantic guidance for the first quantizer, enabling it to capture content information. Leveraging a residual structure enables the subsequent quantizers to complement the remaining paralinguistic information.\nWe employ HuBERT (Hsu et al., 2021 ###reference_15###) as our semantic teacher in this study, as HuBERT is demonstrated to encompass substantial content information (Mohamed et al., 2022 ###reference_20###). We introduce two types of distillation: continuous representation distillation and pseudo-label prediction.\nFor continuous representation distillation, we employ the 9th layer HuBERT representation or the average representation across all HuBERT layers as semantic teachers. The training objective is to maximize the cosine similarity at the dimension level across all timesteps between the outputs of RVQ first layer and semantic teacher representations. Formally, the continuous distillation loss is defined as:\nwhere and denote the quantized output of RVQ first layer and semantic teacher representation respectively. denotes the projection matrix and is the dimension of semantic teacher representation. The superscript signifies a vector comprising values from all timesteps at dimension . represents cosine similarity and denotes sigmoid activation. This continuous distillation loss function deviates from the commonly employed approach, which calculates the loss based on the representations output by the student and teacher models at the same timestep. A comparative analysis of these two methodologies is provided in Appendix C ###reference_###.\nFor pseudo-label prediction, we adopt HuBERT units as the target label. The training objective is constructed as:\nwhere and respectively denote the quantized output of the first VQ layer and the HuBERT unit at timestep t. denotes the number of time steps and is the projection matrix."
52
+ },
53
+ {
54
+ "section_id": "3.3",
55
+ "parent_section_id": "3",
56
+ "section_name": "Training Objective",
57
+ "text": "Our training approach includes both a reconstruction task and a semantic distillation task. In the reconstruction task, we employ a GAN objective, optimizing a combination of a reconstruction term, a discriminative loss term, and RVQ commitment loss. In the semantic distillation task, the training objective involves a semantic distillation loss term. In the following, represents an speech signal and denotes the reconstructed signal by the network.\nReconstruction Loss The reconstruction loss comprises a time and a frequency domain loss. For time domain, we minimize the L1 distance between and , i.e.\n\nFor frequency domain, we linearly combine the L1 and L2 losses over the mel-spectrogram using several time scales. Formally,\n\nwhere is a 64-bins mel-spectrogram using a normalized STFT with window size of and hop length of\n is the set of scales.\nDiscriminative Loss We use the same dicriminators as HiFi-Codec Yang et al. (2023 ###reference_35###) that consist of three discriminators: A multi-scale STFT-based (MS-STFT) discriminator; a multi-period discriminator (MPD) and a multi-scale discriminator (MSD). Further details of discriminators can be found in Appendix D ###reference_###. The adversarial loss is used to promote perceptual quality and it is defined as a hinge loss over the logits of the discriminator, averaged over multiple discriminators and over\ntime. Let denote the number of discriminators, the adversarial loss for the generator is constructed as follows, For the discriminators is defined as:\nAdditionally, a feature matching loss for the generator is computed as follow:\nwhere the mean is computed over all dimensions and is the number of layers in discriminators.\nRVQ Commitment Loss \nWe add a commitment loss between the pre-quantized value, and its quantized value, without gradient computed for the quantized value. RVQ commitment loss is defined as:\n, where and denote current residual and nearest entry in the corresponding codebook respectively.\nGenerally, the generator is trained to optimize the following loss:\nwhere and are hyper-parameters used to balance each loss term."
58
+ },
59
+ {
60
+ "section_id": "3.4",
61
+ "parent_section_id": "3",
62
+ "section_name": "Unified Speech Language Model",
63
+ "text": "As shown in Figure 1 ###reference_###, we can build a unified speech language model upon SpeechTokenizer. Consisting of autoregressive and non-autoregressive models, it can hierarchically model information in speech. The autoregressive (AR) model captures the content information by modeling tokens from the first RVQ quantizer. The non-autoregressive (NAR) model complements paralinguistic information for the AR model by generating tokens from the subsequent quantizers conditioned on the first-layer tokens.\nWe validate the effectiveness of unified speech language model on zero-shot TTS task.\nThe AR model is built upon the first-layer tokens . Utilizing a transformer decoder-only architecture , we approach this conversion as a casual language modeling task with the phoneme sequence serving as the prompt for the AR model. The training objective can be formulated as\nThe NAR model produces tokens from the subsequent quantizers. Its architecture resembles that of the AR model, comprising eight distinct acoustic embedding layers and output prediction layers. To control the characteristics of the speaker\u2019s voice, n acoustic prompt is employed for timbre guidance. The model is conditioned on phoneme sequence , acoustic prompts and tokens from previous quantizers, leading to the formulation of the training objective as follows\nDuring inference, we convert text input to phoneme sequence and speech prompt to speech tokens. They are concatenated to form the prompts for AR and NAR models. Conditioned on that, the AR model generates first-level tokens, while the NAR model iteratively produces tokens of subsequent levels. The tokens generated by the AR and NAR models are then concatenated to construct the speech token matrix. Finally, we use the SpeechTokenizer decoder to generate the waveform conditioned on the complete token matrix."
64
+ },
65
+ {
66
+ "section_id": "4",
67
+ "parent_section_id": null,
68
+ "section_name": "Experiments",
69
+ "text": ""
70
+ },
71
+ {
72
+ "section_id": "4.1",
73
+ "parent_section_id": "4",
74
+ "section_name": "Experimental Setups",
75
+ "text": "Datasets \nFor SpeechTokenizer training, we use LibriSpeech (Panayotov et al., 2015 ###reference_22###) dataset.\nWe randomly crop a 3-second segment from the speech samples at each training iteration. For zero-shot TTS, we train AR and NAR models on the English subset of Multilingual LibriSpeech (Pratap et al., 2020 ###reference_24###) dataset, which contains 44K hours of transcribed speech data derived from LibriVox audiobooks. We select speech samples with durations ranging from 3 to 14 seconds for training data. The sampling rate is 16KHz for all speech data.\nModel \nFor SpeechTokenizer, we introduce the details about model structure in section 3.1 ###reference_### and Appendix D ###reference_###.\nFor zero-shot TTS experiments,\nAR model and NAR model are both\n12-layer Transformer decoders with 16 attention heads, an attention dimension of 1024 and the FFN dimension of 4096.\nTraining \nFor SpeechTokenizer, the model are trained on 2 A800 GPUS for 20 epochs with maximum learning rate of 4e-4 and batch size of 20 per GPU.\nFor Unified Speech Language Model, both AR and NAR models are trained on 8 A800 GPUS for 500k steps with maximum learning rate of 5e-4. The AR model is trained with batch size of 7500 tokens per GPU, and the NAR model is trained with batch size of 5000 tokens per GPU.\nBaselines \nWe adopt EnCodec_24khz_6kpbs (hereinafter referred to as EnCodec) (D\u00e9fossez et al., 2022 ###reference_12###) as the baseline for SpeechTokenizer and VALL-E (Wang et al., 2023 ###reference_33###) as the baseline system for zero-shot TTS. We train VALL-E under the same dataset and experimental setups as EnCodec."
76
+ },
77
+ {
78
+ "section_id": "4.2",
79
+ "parent_section_id": "4",
80
+ "section_name": "Speech Reconstruction Evaluation",
81
+ "text": "We randomly sample 300 speech samples from LibriSpeech test set for speech reconstruction evaluation.\nWe take into account both subjective and objective evaluation metrics.\nObjective Metrics \nWe use ViSQOL metrics (Hines et al., 2012 ###reference_14###) to measure the speech quality. Additionally, we evaluate content accuracy through Word Error Rate (WER) by transcribing the speech utilizing the Whisper en-medium model (Radford et al., 2023 ###reference_27###).\nSubjective Metrics \nWe adopt a crowd-sourced methodology inpspired by MUSHRA protocol (Series, 2014 ###reference_30###), with a hidden reference but no lowerpass-filtered anchor, for subjective evaluation. We instruct evaluators to rate the perceptual quality of the given samples on a scale of 1 to 100."
82
+ },
83
+ {
84
+ "section_id": "4.3",
85
+ "parent_section_id": "4",
86
+ "section_name": "Unified Speech Language Model Evaluation",
87
+ "text": "We conduct zero-shot TTS evaluation on the VCTK dataset, which comprises 108 speakers. There is no speaker overlap between the training data and VCTK dataset. For each speaker, we randomly selected a 3s utterance as the prompts while the textual content of a different utterance is used as the input text.\nObjective Metrics \nWe evaluate the TTS systems with speaker similarity and WER.\nWe evaluate the speaker similarity\nbetween the generated speech and the prompt speech. We calculate the similarity with the following steps: 1) we utilize WavLM-TDNN to calculate the speaker embedding for the generated speech and the prompt speech. 2) we calculate the cosine similarity between the normalized embeddings.\nWe employ Whisper medium model to transcribe the generated speech and calculate\nthe WER.\nSubjective Metrics \nWe determine the Mean Opinion Score (MOS) and Similarity Mean Opinion Score (SMOS) through human evaluations. MOS reflects the naturalness of speech, while SMOS assesses the degree of similarity to the original speaker\u2019s voice. We engaged 12 and 6 native speakers as contributors for MOS and SMOS evaluations, respectively. MOS and SMOS both span from 1 to 5, with higher values signifying greater speech quality and voice similarity respectively."
88
+ },
89
+ {
90
+ "section_id": "4.4",
91
+ "parent_section_id": "4",
92
+ "section_name": "Main Results",
93
+ "text": "Speech Reconstruction\nTable 2 ###reference_### summarizes the results of speech reconstruction experiments. The SpeechTokenizer achienves lower WER than Encodec, demonstrating its superior ability to preserve content. Additionally, SpeechTokenizer attains a comparable VISQOL score but a higher MUSHRA score than EnCodec, which indicates its stronger capability in generating high-quality speech.\nPerformance on SLMTokBench \nTable 3 ###reference_### displays the performance of SpeechTokenizer on SLMTokBench.\nCompared with EnCodec-RVQ-1, SpeechTokenizer-RVQ-1 achieves higher mutual information between text and lower WER of downstream model. This suggests that SpeechTokenizer exhibits a stronger alignment with textual content. Meanwhile, the of resynthesized speech of SpeechTokenizer RVQ-1 tokens achieves lower WER and speaker similarity, indicating its capability to retain more content-related information while disregarding timbre characteristics, similar to semantic tokens.\nThe resynthesized speech of SpeechTokenizer RVQ-1:8 tokens demonstrates low WER and high speaker similarity, illustrating SpeechTokenizer\u2019s competence in preserving comprehensive speech information, similar to acoustic tokens.\nFurthermore, the speaker similarity of resynthesized speech of SpeechTokenizer RVQ-1 tokens is notably low, whereas that of SpeechTokenizer RVQ-1:8 tokens is considerably high. This observation implies that the tokens from subsequent layers compensate for the timbre information that is discarded by the first layer tokens.\nZero-shot TTS \nAs shown in Table 4 ###reference_###, our USLM demonstrates lower WER than VALL-E. This result highlights that SpeechTokenizer can contribute to a more precise modeling of content information. Additionally, the USLM demonstrates superior speaker similarity, implying that a decoupled information structure is more conducive to modeling speaker-related information."
94
+ },
95
+ {
96
+ "section_id": "5",
97
+ "parent_section_id": null,
98
+ "section_name": "Analysis",
99
+ "text": ""
100
+ },
101
+ {
102
+ "section_id": "5.1",
103
+ "parent_section_id": "5",
104
+ "section_name": "Choices of Semantic Teachers",
105
+ "text": "As shown in Table 3 ###reference_###, as semantic teachers, HuBERT L9 representations perform better than HuBERT units in both Text Alignment and Information Preservation, regardless of whether it\u2019s RVQ-1 or RVQ-1:8. The reason may be that discrete HuBERT units lose some content information compared to the continuous representations, thereby providing weaker semantic guidance to SpeechTokenizer.\nWhen comparing HuBERT L9 representations with HuBERT average representations, we find that in terms of Text Alignment, the mutual information is higher when HuBERT L9 representations serve as the teacher. This is because HuBERT average representations contain some timbre information, while HuBERT L9 offers purer content information. On the other hand, HuBERT average shows better performance in Information Preservation, reflected in a lower WER. We speculate that this is due to a certain level of task conflict between semantic distillation and reconstruction, where the former aims to retain only content information while the later aims to preserve various aspects of speech. The presence of some timbre information in HuBERT average representations could to some extent alleviate this task conflict."
106
+ },
107
+ {
108
+ "section_id": "5.2",
109
+ "parent_section_id": "5",
110
+ "section_name": "Effectiveness of Information Disentanglement",
111
+ "text": "###figure_3### To demonstrate that different speech information can be hierarchically modeled in SpeechTokenizer, we conduct one-shot voice conversion (VC) experiment. This experiment aims to convert speech from any source speaker to an arbitrary target speaker using only a few seconds of reference speech from the target speaker.\nTo use SpeechTokenizer for one-shot VC, the first step is to transform the source speech and reference speech into token matrices.\nBy concatenating the RVQ-1 tokens of source token matrix with RVQ-2:8 tokens of the reference token matrix, and then passing this combined token matrix to the decoder, we can achieve voice conversion.\nThe lengths of the reference and source tokens may not align perfectly. To address this, we use truncation or circular padding to ensure they share the same temporal length, thereby facilitating the concatenation process.\nWe conduct experiments on VCTK dataset. We randomly selected one speech sample from a speaker to serve as the source speech. From the remaining 107 speakers, we individually selected one speech sample of different content to act as the reference speech. We employed two metrics for evaluation: WER and speaker similarity.\nTable 5 ###reference_### reports the results of one-shot VC experiments. From the table, we can see that as the number of layers for reference tokens increases, speaker similarity also gradually increases. This suggests that more information from the reference speaker is being transferred over, proving that speaker information is embedded in tokens from the second to the last layers. When the reference tokens are selected from the second to the fourth layers, we achieve low WER and high speaker similarity, resulting in a satisfactory one-shot VC performance. This indicates that the information disentanglement is successful.\nWe also visualize quantized outputs from different layers in Figure 3 ###reference_###.\nSpecifically, We randomly select five speakers from the VCTK dataset and pick 10 random speech samples per speaker. We extract quantized output of different RVQ layers of SpeechTokenizer. The first layer output is denoted as RVQ-1 representations, while the sum of the outputs from the second layer to the eighth layer is denoted as RVQ-2:8 representations. By performing mean pooling along the temporal dimension, each representation is converted into a single vector. These vectors are then visualized in a 2D space using t-SNE, with speech samples from the same speaker represented in the same color.\nFrom the plot, it can be observed that the RVQ-1 representations for different speakers are scattered randomly without discernible pattern. In contrast, the RVQ-2:8 representations for the same speaker tend to cluster together, while being distinct from those of other speakers. This suggests that speaker-specific information is contained from the second layer up to the eighth layer."
112
+ },
113
+ {
114
+ "section_id": "6",
115
+ "parent_section_id": null,
116
+ "section_name": "Related Work",
117
+ "text": "Oure related work is put in Appendix E ###reference_###."
118
+ },
119
+ {
120
+ "section_id": "7",
121
+ "parent_section_id": null,
122
+ "section_name": "Conclusion",
123
+ "text": "In this study, we present SLMTokBench, which assess the effects of various speech token kinds. Meanwhile, we propose SpeechTokenizer, to unify the discretization of both types of speech tokens to overcome the issue of employing several models to extract semantic and acoustic discrete tokens separately. Furthermore, We developed a unified speech language model (USLM) based on SpeechTokenizer, with better results regarding the generated speech\u2019s content accuracy and quality.\nThe study of a unified speech tokenizer is an essential part of the further development of speech language model in terms of efficiency and quality."
124
+ }
125
+ ],
126
+ "appendix": [
127
+ {
128
+ "section_id": "Appendix 1",
129
+ "parent_section_id": null,
130
+ "section_name": "Appendix A Mutual Information Estimation",
131
+ "text": "For notation, denotes discrete speech representations; denotes text; denotes the mutual information; test dataset is denoted as and denotes the downstream model.\nA measure of mutual information between variable and can be formulated as:\nwhere and are the marginal distributions of and \nrespectively, and denotes the joint distribution of X and Y.\nThe variational contrastive log-ratio upper bound (vCLUB) (Cheng et al., 2020 ###reference_8###) of mutual information is defined by:\nwhere is the variational distribution to approximate the ground-truth probability and can be parameterized by the downstream model .\nWith test dataset , has an unbiased estimation as:"
132
+ },
133
+ {
134
+ "section_id": "Appendix 2",
135
+ "parent_section_id": null,
136
+ "section_name": "Appendix B Model Structure Ablations",
137
+ "text": "We conducted an ablation study on whether to use LSTM or BiLSTM. In the table 6 ###reference_###, it can be seen that the performance of BiLSTM on text alignment is better than that of LSTM, indicating that BiLSTM is better at capturing semantic information."
138
+ },
139
+ {
140
+ "section_id": "Appendix 3",
141
+ "parent_section_id": null,
142
+ "section_name": "Appendix C Continuous Distillation Loss Analysis",
143
+ "text": "In extant literature, the commonly employed loss functions for continuous sequence distillation are typically computed along the temporal axis, with the objective of minimizing the difference between the student and teacher model outputs at each timestep. For instance, the loss function proposed in (Chang et al., 2022 ###reference_6###) aims to maximize the cosine similarity between the student and teacher model representations at the same timestep while minimizing their distance, thereby facilitating the transfer of knowledge from the teacher to the student model. To adapt this formula for our specific task, we can modify the the loss function as follows:\nwhere and respectively denote the quantized output of RVQ first layer and the dimensional semantic teacher representation at timestep . is cosine similarity. denotes the number of timesteps and is the projection matrix. denotes sigmoid activation. controls the contribution of the cosine layers. We refer to this loss function as \"T-axis\" to distinguish it from the \"D-axis\" loss function that we propose in Section 3.2 ###reference_###. The latter term is used to denote the loss function introduced in the aforementioned section. These designations are employed to differentiate between these two types of loss functions in this paper.\nWe investigated the impact of two distinct continuous distillation loss functions on the performance of SpeechTokenizer on SLMTokBench. The results of this experiment are summarized in Table 7 ###reference_###. When compared to the performance of EnCodec on SLMTokBench, as presented in Table 3 ###reference_###, employing the \"T-axis\" continuous distillation loss function significantly enhances SpeechTokenizer\u2019s capability in text alignment. However, this improvement is somewhat inferior to that achieved by SpeechTokenizer utilizing the \"D-axis\" loss function. In terms of Information Preservation, SpeechTokenizer with the \"D-axis\" loss function also outperforms its \"T-axis\" counterpart. The experimental results demonstrate that the \"D-axis\" continuous distillation loss function yields superior distillation effects compared to the traditional \"T-axis\" loss function. We attribute this improvement to the \"D-axis\" loss function\u2019s strategy of calculating cosine similarity across each dimension, ensuring that the student model closely aligns with the teacher model on each feature dimension. This approach provides a richer supervision signal, promoting the learning process of the student model by focusing not only on the overall output similarity but also on the similarity within each dimension."
144
+ },
145
+ {
146
+ "section_id": "Appendix 4",
147
+ "parent_section_id": null,
148
+ "section_name": "Appendix D Details of Model Structure and Discriminators",
149
+ "text": "Encoder & Decoder Architecture \nThe encoder is constructed as a sequential series of components: starting with a 1D convolutional layer featuring channels and a kernel size of 7, followed by a set of residual conventional blocks. Each block is composed of two dilated convolutions with dilation rate of and kernel size of and a skip-connection, followed by a strided convolutional down-sampling layer, with a kernel size of the twice the stride . Whenever down-sampling, the number of channels is doubled. Unlike in EnCodec that the convolution blocks are followed by a two-layer LSTM, we use BiLSTM to augment the semantic modeling ability. A final 1D convolution layer with a kernel size of 7 is used to set the dimensionality of embeddings to . We use and as strides. We use ELU (Clevert et al., 2016 ###reference_10###) as a non-linear activation either layer normalization (Ba et al., 2016 ###reference_1###) or weight normalization (Salimans & Kingma, 2016 ###reference_29###).The decoder mirrors the encoder and uses transposed convolutions and LSTM instead of stride convolutions and BiLSTM, with the strides in reverse order as in the encoder. The decoder outputs the final audio signal.\nResidual Vector Quantizer \nWe use Residual Vector Quantizer (RVQ) to quantize the encoder output and follow the same training procedure as EnCodec. During training, the code selected for each input is updated using an exponential moving average with a decay of 0.99, and codes which have not been assigned any input vector for several batches are replaced with input vectors randomly sampled within current batch. Straight-through-estimator (Bengio et al., 2013 ###reference_3###) is used to compute the gradient of encoder, e.g. as if the quantization step was the identity function during the backward phase. Finally, a commitment loss, consisting of the MSE between the input of the quantizer and its output, with gradient only computed with respect to its input, is added to the overall training loss.\nDiscriminator \nThe MS-STFT discriminator utilizes networks with identical structures that operate on multi-scaled complex-valued STFT, where the real and imaginary parts are concatenated. For each sub-network, it is composed of a 2D convolutional layer (using kernel size with 32 channels), followed by 2D convolutions with increasing dilation rates in the time dimension (1, 2 and 4), and a stride of 2 over the frequency axis. A final 2D convolution with kernel size and stride provide the final prediction. For MSD and MPD, we follow the same settings as in HiFiGAN (Kong et al., 2020 ###reference_17###) but adjust the channel number to align the discriminator\u2019s parameters more closely with that of MS-STFT."
150
+ },
151
+ {
152
+ "section_id": "Appendix 5",
153
+ "parent_section_id": null,
154
+ "section_name": "Appendix E Related Work",
155
+ "text": "Discrete Speech Representations There are two popular speech discrete representations: semantic tokens and\nacoustic tokens. Semantic tokens can be extracted from self-supervised learning of speech representations (Hsu et al., 2021 ###reference_15###; Chung et al., 2021 ###reference_9###) and encode high-level representations that\ncorrelate with coarse, symbolic features while paralinguistic information such as\nspeaker identity and acoustic details are removed. Acoustic tokens can be extracted from neural audio codec (Zeghidour et al., 2021 ###reference_36###; D\u00e9fossez et al., 2022 ###reference_12###; Yang et al., 2023 ###reference_35###) and provide high-fidelity reconstruction of the acoustic details. But they can not decouple different information of speech. SpeechTokenizer unifies the two types of tokens, enabling both high-quality audio reconstruction and decomposition of different information of speech.\nSpoken Generative Language Models \nSpeech discrete representation based spoken generative language models have demonstrated remarkable performance on various speech processing tasks (Borsos et al., 2022 ###reference_4###; Wang et al., 2023 ###reference_33###; Kharitonov et al., 2023 ###reference_16###; Zhang et al., 2023 ###reference_37###). AudioLM (Borsos et al., 2022 ###reference_4###) proposes to model speech based on audio codecs together\nwith semantic codes, which can synthesize speech in a textlesss setting.\nVALL-E (Wang et al., 2023 ###reference_33###) leverages neural codec models to represent\nspeech in discrete tokens from eight quantizers. VALL-E comprises of an autoregressive language model that converts phoenmes to acoustic tokens from the first quantizer and an non-autoregressive language model to generate codes of the other seven quantizers. However, VALL-E suffers from problems that some words may be unclear, missed, or duplicated in\nspeech synthesis due to the information gap between acoustic tokens and phoneme. To bridge the gap, SPEAR-TTS (Kharitonov et al., 2023 ###reference_16###) uses semantic tokens as a bridge between text and acoustic tokens.\nIt first generates semantic tokens from text and then produces acoustic tokens from semantic tokens. However, this multi-stage modeling approach is more complex and can lead to problems like error accumulation and slow inference speed.\nThe first quantizer of SpeechTokenizer generates semantic tokens, while the remaining seven quantizers produce acoustic tokens by modeling the paralinguistic information lost in the semantic tokens. SpeechTokenizer-based VALL-E combines the advantages of VALL-E and SPEAR-TTS, where the autoregressive model can perform text-to-semantic tokens conversion, and the non-autoregressive model can achieve semantic-to-acoustic tokens conversion.\nSpeech Representation Disentanglement \nHuman speech can be roughly decomposed into three components: content, timbre, and prosody (Liu et al., 2023 ###reference_19###). Content represents the main information in the speech, which can be expressed using text or phonemes. Timbre represents the speaker\u2019s characteristics, while prosody encompasses intonation, stress, and rhythm of speech, reflecting how the speaker conveys the content information.\nCurrent Speech Representation Disentanglement (SRD) methods mostly separate speaker information from content information for voice conversion (Qian et al., 2019 ###reference_25###; Casanova et al., 2022 ###reference_5###). 
These approaches adopt a parallel disentanglement strategy, where the speech is fed into parallel content and speaker encoders to obtain different representations (Qian et al., 2020 ###reference_26###). However, this strategy heavily relies on prior knowledge and introduces strong inductive biases, making the modeling process more complex and potentially overlooking certain speech information like prosody.\nDifferently, VQVC (Wu & Lee, 2020 ###reference_34###) models the content embedding as a series of discrete codes and take the difference between\nquantize-before and quantize-after vector as the speaker embedding.\nSimilarly, SpeechTokenizer utilizes a residual structure to perform serial decomposition of speech information and models different information as discrete tokens."
156
+ },
157
+ {
158
+ "section_id": "Appendix 6",
159
+ "parent_section_id": null,
160
+ "section_name": "Appendix F Codebook Analysis",
161
+ "text": "We investigate whether the tokens learned by the first RVQ quantizer\nrelate to phonetic information.\nUtilizing SpeechTokenizer or EnCodec, we derive speech tokens from the TIMIT training set and extract the RVQ-1 tokens, denoted as . We then compute the conditional probability based on the co-occurrence\nbetween phonemes and the codes. The alignments are constructed by selecting the phoneme that occurs most frequently in the receptive field for each .\nFigure 4 ###reference_### visualizes the conditional probability for both SpeechTokenizer and EnCodec. A darker color block indicates a higher .\nA more distinct contrast between the diagonal color band and its surrounding area signifies greater phoneme purity, which in turn suggests a more accurate mapping between the code and its corresponding phoneme.\nFor SpeechTokenizer, it\u2019s evident that in the codebook of RVQ-1 quantizer, many discrete codes seem to specialize in capturing specific phonetic sounds, indicating RVQ-1 quantizer can obtain a good alignment between codes and labeled phonemes. However, for EnCodec, this phenomenon is not as obvious.\nAdditionally, Figure 4 ###reference_### also reveals that over 600 codes from the EnCodec RVQ-1 codebook have never been utilized, suggesting a suboptimal utilization rate of the codebook when EnCodec encodes speech. A lower utilization rate of the codebook implies that more RVQ layers are required to ensure the quality of synthesized speech, consequently necessitating the generation of more codes during the construction of a spoken generative language model, resulting in greater space, time and computation power consumption.\nWe further evaluate the models using Phone-Normalized Mutual Information (PNMI) (Hsu et al., 2021 ###reference_15###). As shown in Table 8 ###reference_###, RVQ-1 tokens of SpeechTokenizer achieve a superior PNMI score to that of HuBERT units and significantly outperforms EnCodec-RVQ-1. This suggests that the semantic distillation process in SpeechTokenizer is effective, thereby explaining its enhanced text alignment performance.\n###figure_4###"
162
+ },
163
+ {
164
+ "section_id": "Appendix 7",
165
+ "parent_section_id": null,
166
+ "section_name": "Appendix G Extension to Unseen Language",
167
+ "text": "Since the paralinguistic information is considered to be language-agnostic, we attempted to apply SpeechTokenizer directly to unseen languages. We choose German and Chinese. For German, we select samples from the German subset of Multilingual LibriSpeech dataset for testing. For Chinese, we select samples from the Aishell-3 dataset (Shi et al., 2021 ###reference_31###) for testing. We resynthesize speech from RVQ-1 and RVQ-1:8 tokens. Resynthesized speech are displayed in our demo website 111https://0nutation.github.io/SpeechTokenizer.github.io/ ###reference_r.github.io/###. We also analysis the melspectrogram of German speech and English speech in Appendix H ###reference_###.\nResults show that for languages either closely or distantly related to English, resynthesized speech from RVQ-1 tokens tend to lose timbre and prosody information while maintaining clear content. The resynthesized speech generated from RVQ-1:8 tokens is very close to the grountruth. That suggests SpeechTokenizer can achieve hierarchical information disentanglement on unseen language, even though SpeechTokenizer is trained solely on English data. We believe that SpeechTokenizer may possess the ability to extract content from speech while disregarding language-dependent features. This ability holds promise for the development of a multilingual SpeechTokenizer."
168
+ },
169
+ {
170
+ "section_id": "Appendix 8",
171
+ "parent_section_id": null,
172
+ "section_name": "Appendix H Melspectorgram Analysis",
173
+ "text": "We plot the melspectrogram of raw speech, resynthesized speech of EnCodec RVQ-1 tokens, and resynthesized speech of SpeechTokenizer RVQ-1 tokens. From the figure 5 ###reference_###, it\u2019s evident that the melspectrogram corresponding to EnCodec RVQ-1 largely retains the stripes and shapes in the raw melspectrogram. In contrast, the speech resynthesized from SpeechTokenizer RVQ-1 essentially loses all of the horizontal stripes, which indicates that timbre and prosody information has been diminished.\n###figure_5### We alse plot melspectrogram of raw German speech and resynthesized German speech of SpeechTokenizer RVQ-1 tokens.\nAs shown in the Figure 6 ###reference_###, the same patterns observed in English speech are also present in German speech.\n###figure_6###"
174
+ }
175
+ ],
176
+ "tables": {
177
+ "1": {
178
+ "table_html": "<figure class=\"ltx_table\" id=\"S1.T1\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S1.T1.12\" style=\"width:278.2pt;height:95.5pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-16.4pt,5.6pt) scale(0.894415665208611,0.894415665208611) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S1.T1.12.12\">\n<tr class=\"ltx_tr\" id=\"S1.T1.12.12.13\">\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S1.T1.12.12.13.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.12.12.13.2\">Accurate Content</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S1.T1.12.12.13.3\">High-quality Speech</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S1.T1.12.12.13.4\">Single Tokenzier</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.3.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S1.T1.3.3.3.4\">Semantic LM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S1.T1.2.2.2.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S1.T1.3.3.3.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.6.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.6.6.6.4\">Acoustic LM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.4.4.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.5.5.5.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.6.6.6.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.9.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S1.T1.9.9.9.4\">Hierarchical LM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.7.7.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S1.T1.8.8.8.2\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"S1.T1.9.9.9.3\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S1.T1.12.12.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S1.T1.12.12.12.4\">USLM\u00a0(ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S1.T1.10.10.10.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S1.T1.11.11.11.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S1.T1.12.12.12.3\"></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Comparision between different speech language models. <span class=\"ltx_text ltx_font_italic\" id=\"S1.T1.17.1\">Semantic LM</span> refers to semantic language models. <span class=\"ltx_text ltx_font_italic\" id=\"S1.T1.18.2\">Acoustic LM</span> refers to acoustic language models. <span class=\"ltx_text ltx_font_italic\" id=\"S1.T1.19.3\">Hierarchical LM</span> refers to hierarchical speech language models. <span class=\"ltx_text ltx_font_italic\" id=\"S1.T1.20.4\">USLM</span> refers to our unified speech language model.\n</figcaption>\n</figure>",
179
+ "capture": "Table 1: Comparision between different speech language models. Semantic LM refers to semantic language models. Acoustic LM refers to acoustic language models. Hierarchical LM refers to hierarchical speech language models. USLM refers to our unified speech language model.\n"
180
+ },
181
+ "2": {
182
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T2.3\" style=\"width:198.7pt;height:90.4pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(0.5pt,-0.2pt) scale(1.00473478851126,1.00473478851126) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T2.3.3\">\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.4\">\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S4.T2.3.3.4.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T2.3.3.4.2\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.4.2.1\">Objective</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S4.T2.3.3.4.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.3.3.4.3.1\">Subjective</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T2.3.3.3.4\">Tokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.1.1.1.1\">WER\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.2.2.2.2\">VISQOL\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T2.3.3.3.3\">MUSHRA\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.5.1\">Groundtruth</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.5.2\">4.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.5.3\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.5.4\">91.46</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.6.1\">EnCodec</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.6.2\">5.11</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.3.3.6.3\">4.37</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T2.3.3.6.4\">79.86</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.3.3.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb ltx_border_r\" id=\"S4.T2.3.3.7.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.3.3.7.2\">5.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T2.3.3.7.3\">4.30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T2.3.3.7.4\">90.55</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Results of speech reconstruction\n</figcaption>\n</figure>",
+ "capture": "Table 2: Results of speech reconstruction\n"
+ },
+ "3": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T3.4\" style=\"width:357.7pt;height:295.1pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(37.0pt,-30.5pt) scale(1.26093197312682,1.26093197312682) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T3.4.4\">\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.5\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T3.4.4.5.1\"></td>\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T3.4.4.5.2\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S4.T3.4.4.5.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T3.4.4.5.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.5.4.1\">Text Alignment</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T3.4.4.5.5\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.4.4.5.5.1\">Information Preservation</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.4.5\">Tokenizer</td>\n<td class=\"ltx_td\" id=\"S4.T3.4.4.4.6\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.4.7\">Teacher</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.1.1.1.1\">MI\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.2.2.2.2\">WER\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.3.3.3.3\">WER\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.4.4\">SIM\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.4.6.1\">Groundtruth</td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.4.4.6.2\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T3.4.4.6.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.4.6.4\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.4.4.6.5\">-</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.4.6.6\">4.58</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T3.4.4.6.7\">1.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.7.1\">HuBERT</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.7.2\">KM500</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.7.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.7.4\">31.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.7.5\">9.88</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.7.6\">16.26</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.7.7\">0.77</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.8.1\">EnCodec</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.8.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.8.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.8.4\">16.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.8.5\">61.52</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.8.6\">38.34</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.8.7\">0.92</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.9.1\">EnCodec</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.9.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T3.4.4.9.3\">-</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.9.4\">23.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.9.5\">30.91</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.9.6\">5.11</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.9.7\">0.98</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T3.4.4.10.1\"><span class=\"ltx_text ltx_font_italic\" id=\"S4.T3.4.4.10.1.1\">Ablations</span></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.4.4.10.2\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T3.4.4.10.3\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.4.4.10.4\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T3.4.4.10.5\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.4.4.10.6\"></td>\n<td class=\"ltx_td ltx_border_t\" id=\"S4.T3.4.4.10.7\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.11\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.11.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.11.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.11.3\">HuBERT avg</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.11.4\">30.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.11.5\">15.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.11.6\">9.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.11.7\">0.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.12\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.12.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.12.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.12.3\">HuBERT avg</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.12.4\">29.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.12.5\">16.03</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.12.6\">5.04</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.12.7\">0.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.13\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.13.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.13.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.13.3\">HuBERT L9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.13.4\">32.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.13.5\">12.68</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.13.6\">14.17</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.13.7\">0.73</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.14\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.14.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.14.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.14.3\">HuBERT L9</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.14.4\">31.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.14.5\">13.12</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.14.6\">5.31</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.14.7\">0.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.15\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T3.4.4.15.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.15.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S4.T3.4.4.15.3\">HuBERT units</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.15.4\">24.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.4.4.15.5\">34.13</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.15.6\">20.02</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T3.4.4.15.7\">0.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.4.4.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T3.4.4.16.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.4.16.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T3.4.4.16.3\">HuBERT units</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.4.16.4\">25.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T3.4.4.16.5\">30.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.4.16.6\">5.84</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T3.4.4.16.7\">0.95</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Results on SLMTokBench. MI and WER refer to mutual information and word error rate of the downstream model. WER and SIM refer to word error rate and speaker similarity of resynthesized speech respectively. RVQ- denotes the tokens of the RVQ layer. RVQ-: denotes the tokens from the layer to the layer.\n</figcaption>\n</figure>",
+ "capture": "Table 3: Results on SLMTokBench. MI and WER refer to mutual information and word error rate of the downstream model. WER and SIM refer to word error rate and speaker similarity of resynthesized speech, respectively. RVQ-i denotes the tokens of the i-th RVQ layer. RVQ-i:j denotes the tokens from the i-th layer to the j-th layer.\n"
+ },
+ "4": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S4.T4\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S4.T4.4\" style=\"width:258.4pt;height:95pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(6.7pt,-2.5pt) scale(1.05507814539759,1.05507814539759) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T4.4.4\">\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.5\">\n<td class=\"ltx_td ltx_border_tt\" id=\"S4.T4.4.4.5.1\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"S4.T4.4.4.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T4.4.4.5.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.5.3.1\">Objective</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"S4.T4.4.4.5.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.5.4.1\">Subjective</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.4.4.4.5\">Model</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.4.6\">Tokenizer</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.1.1.1.1\">WER\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.2.2.2.2\">SIM\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.3.3.3.3\">MOS\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.4.4\">SMOS\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"S4.T4.4.4.6.1\">Groundtruth</td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S4.T4.4.4.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.6.3\">1.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T4.4.4.6.4\">0.93</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.6.5\">4.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S4.T4.4.4.6.6\">3.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.7\">\n<td class=\"ltx_td ltx_align_left\" id=\"S4.T4.4.4.7.1\">VALL-E</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.7.2\">EnCodec</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.7.3\">7.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T4.4.4.7.4\">0.75</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.7.5\">3.08</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S4.T4.4.4.7.6\">3.31</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T4.4.4.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S4.T4.4.4.8.1\">USLM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.4.4.8.2\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.4.8.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.8.3.1\">6.5</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S4.T4.4.4.8.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.8.4.1\">0.84</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.4.8.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.8.5.1\">3.63</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S4.T4.4.4.8.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T4.4.4.8.6.1\">3.45</span></td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Results of zero-shot TTS\n</figcaption>\n</figure>",
+ "capture": "Table 4: Results of zero-shot TTS\n"
+ },
+ "5": {
+ "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"S5.T5.2\" style=\"width:139.1pt;height:98pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(5.7pt,-4.0pt) scale(1.08854392202751,1.08854392202751) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S5.T5.2.2\">\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"S5.T5.2.2.2.3\">Source</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T5.2.2.2.4\">Reference</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.1.1.1.1\">WER\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"S5.T5.2.2.2.2\">SIM\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T5.2.2.3.1\">Groundtruth</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.2.2.3.2\">0.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"S5.T5.2.2.3.3\">0.93</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.2.2.4.1\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T5.2.2.4.2\">RVQ-2</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.2.2.4.3\">2.6</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.2.2.4.4\">0.72</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"S5.T5.2.2.5.1\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T5.2.2.5.2\">RVQ-2:4</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.2.2.5.3\">11.7</td>\n<td class=\"ltx_td ltx_align_center\" id=\"S5.T5.2.2.5.4\">0.80</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.2.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"S5.T5.2.2.6.1\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"S5.T5.2.2.6.2\">RVQ-2:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.2.2.6.3\">35.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"S5.T5.2.2.6.4\">0.82</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Results of one-shot voice conversion. Source and Reference refers to source token matrix and reference token matrix respectively.\n</figcaption>\n</figure>",
+ "capture": "Table 5: Results of one-shot voice conversion. Source and Reference refer to the source token matrix and the reference token matrix, respectively.\n"
+ },
+ "6": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A2.T6\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A2.T6.4\" style=\"width:318.0pt;height:154.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(47.6pt,-23.1pt) scale(1.42784291421716,1.42784291421716) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A2.T6.4.4\">\n<tr class=\"ltx_tr\" id=\"A2.T6.4.4.5\">\n<td class=\"ltx_td ltx_border_tt\" id=\"A2.T6.4.4.5.1\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"A2.T6.4.4.5.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A2.T6.4.4.5.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A2.T6.4.4.5.3.1\">Text Alignment</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A2.T6.4.4.5.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A2.T6.4.4.5.4.1\">Information Preservation</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.4.4.4\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T6.4.4.4.5\">Model Structure</td>\n<td class=\"ltx_td ltx_border_r\" id=\"A2.T6.4.4.4.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.1.1.1.1\">MI\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T6.2.2.2.2\">WER\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.3.3.3.3\">WER\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.4.4\">SIM\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.4.4.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A2.T6.4.4.6.1\">CNN+LSTM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T6.4.4.6.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T6.4.4.6.3\">27.60</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A2.T6.4.4.6.4\">20.71</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T6.4.4.6.5\">9.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A2.T6.4.4.6.6\">0.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.4.4.7\">\n<td class=\"ltx_td\" id=\"A2.T6.4.4.7.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T6.4.4.7.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.7.3\">28.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T6.4.4.7.4\">20.38</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.7.5\">5.44</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.7.6\">0.97</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.4.4.8\">\n<td class=\"ltx_td ltx_align_left\" id=\"A2.T6.4.4.8.1\">CNN+BiLSTM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T6.4.4.8.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.8.3\">30.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A2.T6.4.4.8.4\">15.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.8.5\">9.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A2.T6.4.4.8.6\">0.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A2.T6.4.4.9\">\n<td class=\"ltx_td ltx_border_bb\" id=\"A2.T6.4.4.9.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A2.T6.4.4.9.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T6.4.4.9.3\">29.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A2.T6.4.4.9.4\">16.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A2.T6.4.4.9.5\">5.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"A2.T6.4.4.9.6\">0.97</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Results of BiLSTM ablation experiment on SLMTokBench. We employ the average representation across all HuBERT layers as semantic teachers in this experiment.\n</figcaption>\n</figure>",
+ "capture": "Table 6: Results of the BiLSTM ablation experiment on SLMTokBench. We employ the average representation across all HuBERT layers as the semantic teacher in this experiment.\n"
+ },
+ "7": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A3.T7\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A3.T7.5\" style=\"width:318.0pt;height:189.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(68.3pt,-40.6pt) scale(1.75243222379612,1.75243222379612) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A3.T7.5.5\">\n<tr class=\"ltx_tr\" id=\"A3.T7.5.5.6\">\n<td class=\"ltx_td ltx_border_tt\" id=\"A3.T7.5.5.6.1\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"A3.T7.5.5.6.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A3.T7.5.5.6.3\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A3.T7.5.5.6.3.1\">Text Alignment</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" colspan=\"2\" id=\"A3.T7.5.5.6.4\">\n<span class=\"ltx_text ltx_font_bold\" id=\"A3.T7.5.5.6.4.1\">Information Preservation</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T7.5.5.5\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T7.1.1.1.1\"></td>\n<td class=\"ltx_td ltx_border_r\" id=\"A3.T7.5.5.5.6\"></td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.2.2.2.2\">MI\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T7.3.3.3.3\">WER\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.4.4.4.4\">WER\n</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.5.5\">SIM\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T7.5.5.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A3.T7.5.5.7.1\">T-Axis</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T7.5.5.7.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T7.5.5.7.3\">26.65</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A3.T7.5.5.7.4\">21.10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T7.5.5.7.5\">10.75</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A3.T7.5.5.7.6\">0.76</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T7.5.5.8\">\n<td class=\"ltx_td\" id=\"A3.T7.5.5.8.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T7.5.5.8.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.8.3\">25.97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T7.5.5.8.4\">21.54</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.8.5\">5.29</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.8.6\">0.96</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T7.5.5.9\">\n<td class=\"ltx_td ltx_align_left\" id=\"A3.T7.5.5.9.1\">D-Axis</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T7.5.5.9.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.9.3\">30.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A3.T7.5.5.9.4\">15.58</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.9.5\">9.57</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A3.T7.5.5.9.6\">0.74</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A3.T7.5.5.10\">\n<td class=\"ltx_td ltx_border_bb\" id=\"A3.T7.5.5.10.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T7.5.5.10.2\">RVQ-1:8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T7.5.5.10.3\">29.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A3.T7.5.5.10.4\">16.03</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A3.T7.5.5.10.5\">5.04</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" 
id=\"A3.T7.5.5.10.6\">0.97</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Results of continuous distillation loss ablation experiment on SLMTokBench. We employ the average representation across all HuBERT layers as semantic teachers in this experiment.\n</figcaption>\n</figure>",
+ "capture": "Table 7: Results of the continuous distillation loss ablation experiment on SLMTokBench. We employ the average representation across all HuBERT layers as the semantic teacher in this experiment.\n"
+ },
+ "8": {
+ "table_html": "<figure class=\"ltx_table\" id=\"A6.T8\">\n<div class=\"ltx_inline-block ltx_align_center ltx_transformed_outer\" id=\"A6.T8.1\" style=\"width:159.0pt;height:82.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(9.8pt,-5.1pt) scale(1.14108568203268,1.14108568203268) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"A6.T8.1.1\">\n<tr class=\"ltx_tr\" id=\"A6.T8.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_tt\" id=\"A6.T8.1.1.1.2\">Tokenizer</td>\n<td class=\"ltx_td ltx_border_r ltx_border_tt\" id=\"A6.T8.1.1.1.3\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_tt\" id=\"A6.T8.1.1.1.1\">PNMI\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T8.1.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_t\" id=\"A6.T8.1.1.2.1\">HuBERT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A6.T8.1.1.2.2\">KM500</td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" id=\"A6.T8.1.1.2.3\">0.43</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T8.1.1.3\">\n<td class=\"ltx_td ltx_align_left\" id=\"A6.T8.1.1.3.1\">EnCodec</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A6.T8.1.1.3.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center\" id=\"A6.T8.1.1.3.3\">0.28</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A6.T8.1.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_bb\" id=\"A6.T8.1.1.4.1\">SpeechTokenizer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb ltx_border_r\" id=\"A6.T8.1.1.4.2\">RVQ-1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_bb\" id=\"A6.T8.1.1.4.3\">0.71</td>\n</tr>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>PNMI of different discrete speech representation.\n</figcaption>\n</figure>",
+ "capture": "Table 8: PNMI of different discrete speech representations.\n"
+ }
+ },
+ "image_paths": {
+ "1": {
+ "figure_path": "2308.16692v2_figure_1.png",
+ "caption": "Figure 1: Left: Illustration of information composition of different discrete speech representations. Right: Illustration of unified speech language models. AR refers to autoregressive and NAR refers to non-autoregressive. Speech tokens are represented as colored circles and different colors represent different information.",
+ "url": "http://arxiv.org/html/2308.16692v2/x1.png"
+ },
+ "2": {
+ "figure_path": "2308.16692v2_figure_2.png",
+ "caption": "Figure 2: Illustration of SpeechTokenizer framework.",
+ "url": "http://arxiv.org/html/2308.16692v2/x2.png"
+ },
+ "3": {
+ "figure_path": "2308.16692v2_figure_3.png",
+ "caption": "Figure 3: Visualization of quantized output of different RVQ layers of SpeechTokenizer. The first layer is denoted as RVQ-1, while the sum of the second layer to the eighth layer is denoted as RVQ-2:8.",
+ "url": "http://arxiv.org/html/2308.16692v2/x3.png"
+ },
+ "4": {
+ "figure_path": "2308.16692v2_figure_4.png",
+ "caption": "Figure 4: Visualization of the conditional probability P(phoneme|code) on the TIMIT train set. The y-axis is the phoneme set and the x-axis is the codewords of the first RVQ layer sorted by the most correlated phoneme.",
+ "url": "http://arxiv.org/html/2308.16692v2/extracted/5361536/Figures/phone_code_coef2.png"
+ },
+ "5": {
+ "figure_path": "2308.16692v2_figure_5.png",
+ "caption": "Figure 5: Mel-spectrogram of raw speech, resynthesized speech of SpeechTokenizer and EnCodec RVQ-1 tokens.",
+ "url": "http://arxiv.org/html/2308.16692v2/extracted/5361536/Figures/mel.png"
+ },
+ "6": {
+ "figure_path": "2308.16692v2_figure_6.png",
+ "caption": "Figure 6: Mel-spectrogram of German speech and resynthesized speech of SpeechTokenizer RVQ-1 tokens.",
+ "url": "http://arxiv.org/html/2308.16692v2/extracted/5361536/Figures/mel_de.png"
+ }
+ },
+ "validation": true,
+ "references": [
+ {
+ "1": {
+ "title": "Layer normalization, 2016.",
+ "author": "Ba, J. L., Kiros, J. R., and Hinton, G. E.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "2": {
+ "title": "wav2vec 2.0: A framework for self-supervised learning of speech\nrepresentations.",
+ "author": "Baevski, A., Zhou, Y., Mohamed, A., and Auli, M.",
+ "venue": "Advances in Neural Information Processing Systems,\n33:12449\u201312460, 2020.",
+ "url": null
+ }
+ },
+ {
+ "3": {
+ "title": "Estimating or propagating gradients through stochastic neurons for\nconditional computation.",
+ "author": "Bengio, Y., L\u00e9onard, N., and Courville, A.",
+ "venue": "arXiv preprint arXiv:1308.3432, 2013.",
+ "url": null
+ }
+ },
+ {
+ "4": {
+ "title": "Audiolm: a language modeling approach to audio generation, 2022.",
+ "author": "Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi,\nM., Teboul, O., Grangier, D., Tagliasacchi, M., and Zeghidour, N.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "5": {
+ "title": "YourTTS: Towards zero-shot multi-speaker TTS and zero-shot\nvoice conversion for everyone.",
+ "author": "Casanova, E., Weber, J., Shulby, C. D., Junior, A. C., G\u00f6lge, E., and\nPonti, M. A.",
+ "venue": "In Chaudhuri, K., Jegelka, S., Song, L., Szepesvari, C., Niu, G., and\nSabato, S. (eds.), Proceedings of the 39th International Conference on\nMachine Learning, volume 162 of Proceedings of Machine Learning\nResearch, pp. 2709\u20132720. PMLR, 17\u201323 Jul 2022.",
+ "url": null
+ }
+ },
+ {
+ "6": {
+ "title": "Distilhubert: Speech representation learning by layer-wise\ndistillation of hidden-unit bert.",
+ "author": "Chang, H.-J., Yang, S.-w., and Lee, H.-y.",
+ "venue": "In ICASSP 2022-2022 IEEE International Conference on Acoustics,\nSpeech and Signal Processing (ICASSP), pp. 7087\u20137091. IEEE, 2022.",
+ "url": null
+ }
+ },
+ {
+ "7": {
+ "title": "WavLM: Large-scale self-supervised pre-training for full stack\nspeech processing.",
+ "author": "Chen, S., Wang, C., Chen, Z., Wu, Y., Liu, S., Chen, Z., Li, J., Kanda, N.,\nYoshioka, T., Xiao, X., Wu, J., Zhou, L., Ren, S., Qian, Y., Qian, Y., Wu,\nJ., Zeng, M., Yu, X., and Wei, F.",
+ "venue": "IEEE Journal of Selected Topics in Signal Processing,\n16(6):1505\u20131518, oct 2022.",
+ "url": null
+ }
+ },
+ {
+ "8": {
+ "title": "Club: A contrastive log-ratio upper bound of mutual information,\n2020.",
+ "author": "Cheng, P., Hao, W., Dai, S., Liu, J., Gan, Z., and Carin, L.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "9": {
+ "title": "W2v-bert: Combining contrastive learning and masked language modeling\nfor self-supervised speech pre-training, 2021.",
+ "author": "Chung, Y.-A., Zhang, Y., Han, W., Chiu, C.-C., Qin, J., Pang, R., and Wu, Y.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "10": {
+ "title": "Fast and accurate deep network learning by exponential linear units\n(elus), 2016.",
+ "author": "Clevert, D.-A., Unterthiner, T., and Hochreiter, S.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "11": {
+ "title": "Polyvoice: Language models for speech to speech translation, 2023.",
+ "author": "Dong, Q., Huang, Z., Tian, Q., Xu, C., Ko, T., Zhao, Y., Feng, S., Li, T.,\nWang, K., Cheng, X., Yue, F., Bai, Y., Chen, X., Lu, L., Ma, Z., Wang, Y.,\nWang, M., and Wang, Y.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "12": {
+ "title": "High fidelity neural audio compression, 2022.",
+ "author": "D\u00e9fossez, A., Copet, J., Synnaeve, G., and Adi, Y.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "13": {
+ "title": "Textually pretrained speech language models, 2023.",
+ "author": "Hassid, M., Remez, T., Nguyen, T. A., Gat, I., Conneau, A., Kreuk, F., Copet,\nJ., Defossez, A., Synnaeve, G., Dupoux, E., Schwartz, R., and Adi, Y.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "14": {
+ "title": "Visqol: The virtual speech quality objective listener.",
+ "author": "Hines, A., Skoglund, J., Kokaram, A., and Harte, N.",
+ "venue": "In IWAENC 2012; International Workshop on Acoustic Signal\nEnhancement, pp. 1\u20134, 2012.",
+ "url": null
+ }
+ },
+ {
+ "15": {
+ "title": "Hubert: Self-supervised speech representation learning by masked\nprediction of hidden units.",
+ "author": "Hsu, W.-N., Bolte, B., Tsai, Y.-H. H., Lakhotia, K., Salakhutdinov, R., and\nMohamed, A.",
+ "venue": "IEEE/ACM Transactions on Audio, Speech, and Language\nProcessing, 29:3451\u20133460, 2021.",
+ "url": null
+ }
+ },
+ {
+ "16": {
+ "title": "Speak, read and prompt: High-fidelity text-to-speech with minimal\nsupervision, 2023.",
+ "author": "Kharitonov, E., Vincent, D., Borsos, Z., Marinier, R., Girgin, S., Pietquin,\nO., Sharifi, M., Tagliasacchi, M., and Zeghidour, N.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "17": {
+ "title": "Hifi-gan: Generative adversarial networks for efficient and high\nfidelity speech synthesis.",
+ "author": "Kong, J., Kim, J., and Bae, J.",
+ "venue": "Advances in Neural Information Processing Systems,\n33:17022\u201317033, 2020.",
+ "url": null
+ }
+ },
+ {
+ "18": {
+ "title": "On generative spoken language modeling from raw audio.",
+ "author": "Lakhotia, K., Kharitonov, E., Hsu, W.-N., Adi, Y., Polyak, A., Bolte, B.,\nNguyen, T.-A., Copet, J., Baevski, A., Mohamed, A., et al.",
+ "venue": "Transactions of the Association for Computational Linguistics,\n9:1336\u20131354, 2021.",
+ "url": null
+ }
+ },
+ {
+ "19": {
+ "title": "Unifyspeech: A unified framework for zero-shot text-to-speech and\nvoice conversion, 2023.",
+ "author": "Liu, H., Wang, T., Fu, R., Yi, J., Wen, Z., and Tao, J.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "20": {
+ "title": "Self-supervised speech representation learning: A review.",
+ "author": "Mohamed, A., yi Lee, H., Borgholt, L., Havtorn, J. D., Edin, J., Igel, C.,\nKirchhoff, K., Li, S.-W., Livescu, K., Maaloe, L., Sainath, T. N., and\nWatanabe, S.",
+ "venue": "IEEE Journal of Selected Topics in Signal Processing,\n16(6):1179\u20131210, oct 2022.",
+ "url": null
+ }
+ },
+ {
+ "21": {
+ "title": "Gpt-4 technical report, 2023.",
+ "author": "OpenAI.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "22": {
+ "title": "Librispeech: An asr corpus based on public domain audio books.",
+ "author": "Panayotov, V., Chen, G., Povey, D., and Khudanpur, S.",
+ "venue": "In 2015 IEEE International Conference on Acoustics, Speech and\nSignal Processing (ICASSP), pp. 5206\u20135210, 2015.",
+ "url": null
+ }
+ },
+ {
+ "23": {
+ "title": "Speech resynthesis from discrete disentangled self-supervised\nrepresentations, 2021.",
+ "author": "Polyak, A., Adi, Y., Copet, J., Kharitonov, E., Lakhotia, K., Hsu, W.-N.,\nMohamed, A., and Dupoux, E.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "24": {
+ "title": "MLS: A large-scale multilingual dataset for speech research.",
+ "author": "Pratap, V., Xu, Q., Sriram, A., Synnaeve, G., and Collobert, R.",
+ "venue": "In Interspeech 2020. ISCA, oct 2020.",
+ "url": null
+ }
+ },
+ {
+ "25": {
+ "title": "Autovc: Zero-shot voice style transfer with only autoencoder loss,\n2019.",
+ "author": "Qian, K., Zhang, Y., Chang, S., Yang, X., and Hasegawa-Johnson, M.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "26": {
+ "title": "Unsupervised speech decomposition via triple information bottleneck.",
+ "author": "Qian, K., Zhang, Y., Chang, S., Hasegawa-Johnson, M., and Cox, D.",
+ "venue": "In International Conference on Machine Learning, pp. 7836\u20137846. PMLR, 2020.",
+ "url": null
+ }
+ },
+ {
+ "27": {
+ "title": "Robust speech recognition via large-scale weak supervision.",
+ "author": "Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., and Sutskever, I.",
+ "venue": "In International Conference on Machine Learning, pp. 28492\u201328518. PMLR, 2023.",
+ "url": null
+ }
+ },
+ {
+ "28": {
+ "title": "Audiopalm: A large language model that can speak and listen, 2023.",
+ "author": "Rubenstein, P. K., Asawaroengchai, C., Nguyen, D. D., Bapna, A., Borsos, Z.,\nde Chaumont Quitry, F., Chen, P., Badawy, D. E., Han, W., Kharitonov, E.,\nMuckenhirn, H., Padfield, D., Qin, J., Rozenberg, D., Sainath, T., Schalkwyk,\nJ., Sharifi, M., Ramanovich, M. T., Tagliasacchi, M., Tudor, A.,\nVelimirovi\u0107, M., Vincent, D., Yu, J., Wang, Y., Zayats, V., Zeghidour, N.,\nZhang, Y., Zhang, Z., Zilka, L., and Frank, C.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "29": {
+ "title": "Weight normalization: A simple reparameterization to accelerate\ntraining of deep neural networks.",
+ "author": "Salimans, T. and Kingma, D. P.",
+ "venue": "Advances in neural information processing systems, 29, 2016.",
+ "url": null
+ }
+ },
+ {
+ "30": {
+ "title": "Method for the subjective assessment of intermediate quality level of\naudio systems.",
+ "author": "Series, B.",
+ "venue": "International Telecommunication Union Radiocommunication\nAssembly, 2014.",
+ "url": null
+ }
+ },
+ {
+ "31": {
+ "title": "Aishell-3: A multi-speaker mandarin tts corpus and the baselines,\n2021.",
+ "author": "Shi, Y., Bu, H., Xu, X., Zhang, S., and Li, M.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "32": {
+ "title": "Llama: Open and efficient foundation language models.",
+ "author": "Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix,\nT., Rozi\u00e8re, B., Goyal, N., Hambro, E., Azhar, F., et al.",
+ "venue": "arXiv preprint arXiv:2302.13971, 2023.",
+ "url": null
+ }
+ },
+ {
+ "33": {
+ "title": "Neural codec language models are zero-shot text to speech\nsynthesizers, 2023.",
+ "author": "Wang, C., Chen, S., Wu, Y., Zhang, Z., Zhou, L., Liu, S., Chen, Z., Liu, Y.,\nWang, H., Li, J., He, L., Zhao, S., and Wei, F.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "34": {
+ "title": "One-shot voice conversion by vector quantization.",
+ "author": "Wu, D.-Y. and Lee, H.-y.",
+ "venue": "In ICASSP 2020-2020 IEEE International Conference on Acoustics,\nSpeech and Signal Processing (ICASSP), pp. 7734\u20137738. IEEE, 2020.",
+ "url": null
+ }
+ },
+ {
+ "35": {
+ "title": "Hifi-codec: Group-residual vector quantization for high fidelity\naudio codec, 2023.",
+ "author": "Yang, D., Liu, S., Huang, R., Tian, J., Weng, C., and Zou, Y.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "36": {
+ "title": "Soundstream: An end-to-end neural audio codec, 2021.",
+ "author": "Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., and Tagliasacchi, M.",
+ "venue": null,
+ "url": null
+ }
+ },
+ {
+ "37": {
+ "title": "Speechgpt: Empowering large language models with intrinsic\ncross-modal conversational abilities, 2023.",
+ "author": "Zhang, D., Li, S., Zhang, X., Zhan, J., Wang, P., Zhou, Y., and Qiu, X.",
+ "venue": null,
+ "url": null
+ }
+ }
+ ],
+ "url": "http://arxiv.org/html/2308.16692v2"
+ }